AMD wonders if users need Ryzen AI support under Linux.
AMD Asks: Do You Need Ryzen AI Support in Linux? : Read more
Good summary. The key point is that it's not faster than the GPU. AMD actually told us how fast it is:

> AMD has designed the Ryzen XDNA AI engine for less demanding AI inference tasks like audio, photo, and video processing. Its goal is to provide quicker response times compared to online services, and it is also more energy-efficient compared to solutions based on CPUs or GPUs. The engine has the capacity to manage up to four simultaneous AI streams, and it can process INT8 and bfloat16 instructions.
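For context on those data types: bfloat16 is essentially a float32 with the low 16 bits of the mantissa dropped, so it keeps float32's full exponent range but only about 2-3 significant decimal digits. A minimal Python sketch of the conversion (illustrative only, not AMD code):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping only the top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(bits: int) -> float:
    """Expand bfloat16 bits back into a float32 value."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# Truncation costs precision but never range:
pi_bf16 = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159265))
# pi_bf16 is 3.140625 -- close, but only ~7 mantissa bits survive.
```

This is why the format suits inference (where small precision loss is tolerable) better than training loops that accumulate tiny gradient updates.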
> Agreed. Linux support should be a no-brainer.

Feel free to mention that on the GitHub issue (if you haven't already).
> It's very hard to say, when there seem to be no details on
> - functional scope: what can that NPU do?
> - comparative computing power: how does it compare to, say, a Hexagon DSP/NPU?
> - power/performance/usability: how much power will it require at which type of ML workload, and can it be regulated to meet latency expectations?
> - integration with the rest of the SoC: how much CPU collaboration/wake-up will be required, e.g. to do voice recognition and command processing?

Well, I don't know how much power it uses, but Chips & Cheese pieced together some details from their Hot Chips presentation plus details Xilinx has published about them. Scroll about halfway down this page:
Support as in what? Their prepackaged drivers? You'll still have the open source driver supporting them. It's not as if ROCm ever supported them, so no loss there.

I can tell AMD what I don't want to see go, and that is iGPU support for my Cezanne APUs, which are getting awfully close to being jettisoned as nearly all GCN GPUs lose Linux driver support these days: two years of support is quite ridiculous for an APU's iGPU, which you can't swap out!
In response to this post.

> Support as in what? Their prepackaged drivers? You'll still have the open source driver supporting them. It's not as if ROCm ever supported them, so no loss there.
I might agree with you, if I better understood what you're talking about.
> In response to this post.

Huh. Why didn't you just link the Phoronix post?
> And in the case of Linux GPU drivers, it's mostly the power management features for SoCs that are important to me. And that's one area where open source drivers have suffered from lack of technical vendor support. Again, I'm not trying to game on notebooks, but trying to remain productive during travel or where I can't keep people staring at their mobile from stepping on my power line.

AMDVLK is a userspace component, thus probably not involved in power management. Just use RADV and you'll be fine.
Yes, I've mentioned it in other threads.

First off, I will comment: although Intel now says the 14th gen(?) Intel CPUs are the first with an AI accelerator, I have an accelerator of some sort in my 11th gen CPU. It's VERY limited; I think in Windows it's used to clean up mic input.
> 1) I find it EXTRAORDINARILY odd that AMD would even mull having Windows support and no Linux support,

AMD is a Windows-oriented company. It's becoming less so, but their stuff has generally had better support under Windows than Linux.
> 2) That said, with int8 and float16 support, it may be rather limited for people thinking they can just fire up tensorflow and run whatever on it.

Those data formats are fine for most inference acceleration, which is the point of these integrated NPUs. If you want to do training, you really need to use a dGPU or bigger.
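To give a sense of why int8 is enough for inference: pretrained float weights are typically mapped onto the int8 range with a scale factor and dequantized (or accumulated in wider integers) at run time. A rough Python sketch of symmetric per-tensor int8 quantization (my illustration of the general technique, not anything AMD ships):

```python
def quantize_int8(weights, scale=None):
    """Symmetric int8 quantization: map floats into [-127, 127]
    using a single per-tensor scale factor."""
    if scale is None:
        scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Approximate reconstruction of the original floats."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.04]
q, s = quantize_int8(w)      # q = [50, -127, 4]
w_hat = dequantize_int8(q, s)
```

The reconstruction error stays within half a quantization step, which trained networks usually tolerate at inference time; training, by contrast, needs the fine gradient resolution that only wider formats on a dGPU (or bigger) provide.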