News AMD Asks: Do You Need Ryzen AI Support in Linux?


bit_user

Titan
Ambassador
Since a lot of AI development happens under Linux (dare I say even the bulk of it?), this would be a good move for developer engagement and mindshare, if nothing else.

AMD has designed the Ryzen XDNA AI engine for less demanding AI inference tasks like audio, photo, and video processing. Its goal is to provide quicker response times compared to online services, and it is also more energy-efficient compared to solutions based on CPUs or GPUs. The engine has the capacity to manage up to four simultaneous AI streams, and it can process INT8 and bfloat16 instructions.
Good summary. The key point is that it's not faster than the GPU. AMD actually told us how fast it is:
  • Ryzen AI can reportedly do up to 5 TFLOPS (BF16)
  • 780M iGPU reportedly good for 8.6 TFLOPS (FP16).

So, this AI accelerator isn't going to be a game changer for generative AI. It's really about extending battery life when using AI for things like video conferencing or video playback.
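
Just to put those quoted peak figures side by side (these are the vendor-reported numbers from above, not measurements), a quick back-of-envelope sketch in Python:

Code:
# Back-of-envelope comparison of the peak throughput figures quoted above.
# These are vendor-reported peaks, not benchmark results.
npu_bf16_tflops = 5.0     # Ryzen AI (XDNA) peak, BF16, as reported
igpu_fp16_tflops = 8.6    # Radeon 780M peak, FP16, as reported

ratio = igpu_fp16_tflops / npu_bf16_tflops
print(f"iGPU peak is ~{ratio:.1f}x the NPU peak "
      f"({igpu_fp16_tflops} vs {npu_bf16_tflops} TFLOPS)")

So the iGPU already has roughly 1.7x the raw throughput; the NPU's selling point is power, not peak speed.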
 

tracker1

Distinguished
Jan 15, 2010
42
25
18,535
tracker1.dev
Linux is definitely necessary. Almost nobody is targeting server loads under Windows for this kind of work. It's all Linux there.

I'd go a half step further and say that the developer experience should have parity on WSL as well as on Linux hosts, since a lot of projects targeting Linux are developed on Windows with WSL.
 

abufrejoval

Reputable
Jun 19, 2020
615
452
5,260
It's very hard to say when there seem to be no details on:
  • functional scope: what can that NPU do?
  • comparative computing power: how does it compare to, say, a Hexagon DSP/NPU?
  • power/performance/usability: how much power will it require at which type of ML workload, and can it be regulated to meet latency expectations?
  • integration with the rest of the SoC: how much CPU collaboration/wake-up will be required, e.g. to do voice recognition and command processing?
So far all I see is something that seems designed more for the Microsofts, Metas, and Googles of this world, who need a more energy-efficient and opaque way to snoop on people who just bought an operating system and got an Internet-giant back door instead.

And I certainly don't want that on any OS: my personal computer is there to serve me, exclusively.

I can tell AMD what I don't want to see go, and that is iGPU support for my Cezanne APUs, which are getting awfully close to being jettisoned as nearly all GCN GPUs lose Linux driver support these days: two years of support is quite ridiculous for an APU's iGPU, which you can't swap out!

And it's not the first time AMD hardware still sold as new has gone out of driver support...

If you want success in mobile devices, that's not how to do it!
 

bit_user

Titan
Ambassador
It's very hard to say when there seem to be no details on:
  • functional scope: what can that NPU do?
  • comparative computing power: how does it compare to, say, a Hexagon DSP/NPU?
  • power/performance/usability: how much power will it require at which type of ML workload, and can it be regulated to meet latency expectations?
  • integration with the rest of the SoC: how much CPU collaboration/wake-up will be required, e.g. to do voice recognition and command processing?
Well, I don't know how much power it uses, but Chips & Cheese pieced together some details from their Hot Chips presentation plus details Xilinx has published about them. Scroll about halfway down this page:


I can tell AMD what I don't want to see go, and that is iGPU support for my Cezanne APUs, which are getting awfully close to being jettisoned as nearly all GCN GPUs lose Linux driver support these days: two years of support is quite ridiculous for an APU's iGPU, which you can't swap out!
Support as in what? Their prepackaged drivers? You'll still have the open source driver supporting them. It's not as if ROCm ever supported them, so no loss there.

I might agree with you, if I better understood what you're talking about.
 

bit_user

Titan
Ambassador
In response to this post.
Huh. Why didn't you just link the Phoronix post?
As that article says:

" You can either stick to AMDVLK 2023.Q3 and older or the better option is to simply use the community-maintained Mesa RADV driver."
So, I think it's a bit of a non-issue. I don't quite know why AMD even keeps maintaining AMDVLK, but I guess they just like to have something they control, so they can be sure it provides the best support for new hardware at launch and supports all the latest features of their hardware, without having to negotiate or compromise anything with the Mesa developers.
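
For anyone unsure which of the two Vulkan drivers their system is actually using, vulkaninfo reports the driver name. A minimal sketch, assuming the vulkan-tools package is installed (output wording can vary between loader and driver versions):

Code:
# Check which Vulkan driver the loader picked (RADV vs. AMDVLK).
# Assumes "vulkaninfo" from vulkan-tools is installed.
import subprocess

summary = subprocess.run(["vulkaninfo", "--summary"],
                         capture_output=True, text=True, check=True).stdout

for line in summary.splitlines():
    if "driverName" in line or "driverInfo" in line:
        print(line.strip())

# RADV typically reports a driverName of "radv";
# AMDVLK typically reports "AMD open-source driver".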


The best thing about your link is this pic, which is just like... weird. As if they gave it to her to pose with, yet she doesn't really know quite what it is.

Forspoken-x-AMD-Radeon-Polaris.jpg

 

abufrejoval

Reputable
Jun 19, 2020
615
452
5,260
El Chapuzas is often only a copycat, but he's very early and finds a lot of truffles before they appear elsewhere, so I tend to go there before I go to the more serious sites like Phoronix...

That picture shows all the signs of having been generated, though most likely not by that GPU.

But even if the article is about Linux drivers, it hints at Windows not being far behind. And that is something I've come across before, where AMD dropped driver support for a generation of GPUs while APUs still being sold as new had them inside (e.g. Richland).

After all, it makes a lot more sense to cut driver support at the hardware-generation level than after a fixed time period. And that puts Cezanne, with lots of active products, right at the trailing edge of GCN.

Now, I'm not that worried about lack of support for the latest game titles on iGPUs, but when it comes to security patches, lack of ongoing driver support essentially kills a platform for business use.

And in the case of Linux GPU drivers, it's mostly the power management features for SoCs that are important to me. And that's one area where open-source drivers have suffered from a lack of technical vendor support. Again, I'm not trying to game on notebooks, but trying to remain productive during travel, or in places where I can't keep people staring at their phones from stepping on my power cord.
 

bit_user

Titan
Ambassador
And in the case of Linux GPU drivers, it's mostly the power management features for SoCs that are important to me. And that's one area where open-source drivers have suffered from a lack of technical vendor support. Again, I'm not trying to game on notebooks, but trying to remain productive during travel, or in places where I can't keep people staring at their phones from stepping on my power cord.
AMDVLK is a userspace component, thus probably not involved in power management. Just use RADV and you'll be fine.
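
If both drivers end up installed, you can also point the Vulkan loader at RADV explicitly. A minimal sketch, assuming Mesa's ICD manifest sits at the usual path (it is distro-dependent, so adjust as needed):

Code:
# Launch a Vulkan application with RADV explicitly selected via the loader.
# The ICD path below is typical for Mesa on x86-64 Linux but varies by distro.
import os
import subprocess

env = dict(os.environ)
env["VK_ICD_FILENAMES"] = "/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"  # RADV (Mesa)

subprocess.run(["vkcube"], env=env)  # substitute any Vulkan application here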
 
  • Like
Reactions: abufrejoval

hwertz

Commendable
Nov 24, 2022
16
6
1,515
First off, I will comment that although Intel now says the... 14th gen? Intel CPUs are the first with an AI accelerator, I have an accelerator of some sort in my 11th gen CPU. It's VERY limited; I think in Windows it's used to clean up mic input. It's got full Linux support, but due to the limitations on input format (1D data only, and a small size at that) and data types (maybe int8 only?), I found when I was messing around with TensorFlow and such that it wasn't actually useful for my purposes.

So...
1) I find it EXTRAORDINARILY odd that AMD would even mull having Windows support and no Linux support; as others have said, SO MUCH AI stuff is done in Linux. I mean, Nvidia supports both, but it's clear the premier environment is Linux.

2) That said, with INT8 and bfloat16 support, it may be rather limited for people thinking they can just fire up TensorFlow and run whatever on it. It sounds like it's less limited than the thing I have in my 11th gen Intel system, but I'm still not sure how many TensorFlow tools will work with only INT8 and bfloat16 data types.
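
For context on what getting a model down to those data types usually looks like on the TensorFlow side, here is a hedged sketch using TensorFlow Lite's post-training quantization. This is generic TensorFlow tooling, not AMD's Ryzen AI toolchain, and the model path and calibration data are hypothetical placeholders:

Code:
# Post-training quantization to INT8 with TensorFlow Lite, shown only to
# illustrate how TensorFlow models commonly get reduced to NPU-friendly types.
import numpy as np
import tensorflow as tf

def representative_data():
    # Hypothetical calibration samples; use real inputs for a real model.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# For a float16 variant instead, drop the representative_dataset and
# supported_ops lines and set:
#   converter.target_spec.supported_types = [tf.float16]

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())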
 

bit_user

Titan
Ambassador
First off, I will comment that although Intel now says the... 14th gen? Intel CPUs are the first with an AI accelerator, I have an accelerator of some sort in my 11th gen CPU. It's VERY limited; I think in Windows it's used to clean up mic input.
Yes, I've mentioned it in other threads.


As you mentioned, it's not very interesting, due to the small amount of compute power it provides. The main benefit is improved energy efficiency, from offloading things like audio processing or presence detection from the CPU or GPU.

1) I find it EXTRAORDINARILY odd that AMD would even mull having Windows support and no Linux support,
AMD is a Windows-oriented company. It's becoming less so, but their stuff has generally had better support under Windows than Linux.

2) That said, with INT8 and bfloat16 support, it may be rather limited for people thinking they can just fire up TensorFlow and run whatever on it.
Those data formats are fine for most inference acceleration, which is the point of these integrated NPUs. If you want to do training, you really need to use a dGPU or bigger.
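
To make "fine for most inference" a bit more concrete, here is a rough numerical sketch (synthetic weights, numpy only) showing that BF16 truncation and symmetric INT8 quantization introduce only small relative error on typical weight distributions, which is why these formats usually suffice for inference while training generally wants wider types:

Code:
# Rough illustration: quantization error of BF16 and INT8 on synthetic weights.
import numpy as np

weights = np.random.normal(0.0, 0.05, size=100_000).astype(np.float32)

# BF16 approximation: keep sign, exponent, and the top 7 mantissa bits
bf16 = (weights.view(np.uint32) & np.uint32(0xFFFF0000)).view(np.float32)

# Symmetric INT8: scale to [-127, 127], round, then dequantize
scale = np.abs(weights).max() / 127.0
dequant = np.round(weights / scale).astype(np.int8).astype(np.float32) * scale

for name, approx in (("bf16", bf16), ("int8", dequant)):
    rel_err = np.abs(weights - approx).mean() / np.abs(weights).mean()
    print(f"{name}: mean relative error ~{rel_err:.3%}")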
 