News Intel's NPU Acceleration Library goes open source — Meteor Lake CPUs can now run TinyLlama and other lightweight LLMs

Status
Not open for further replies.

bit_user

Titan
Ambassador
This seems to be just the wrappers needed to use the NPU from PyTorch. While that's great, the project doesn't contain the parts that really interest me, such as the code for the "SHAVE" DSP cores.
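For context, using the wrapper looks roughly like this. This is a sketch based on my reading of the project's README; the `compile()` call and its `dtype` argument are my best recollection of that API, not something I've verified, and it's guarded so it degrades gracefully on machines without the library or a Meteor Lake NPU:

```python
import importlib.util

# Guard: the wrapper (and the NPU hardware it targets) may not be present here.
NPU_LIB_AVAILABLE = importlib.util.find_spec("intel_npu_acceleration_library") is not None

if NPU_LIB_AVAILABLE:
    import torch
    import intel_npu_acceleration_library as npu_lib

    model = torch.nn.Linear(128, 64)
    # compile() wraps a torch module so its ops dispatch to the NPU,
    # optionally quantizing weights (int8 here) -- per the README example.
    model = npu_lib.compile(model, dtype=torch.int8)
    print("module compiled for the NPU")
else:
    print("intel_npu_acceleration_library not installed; nothing to offload")
```

So from the application side it's just PyTorch with one extra call; everything below that (including the SHAVE core firmware) stays closed.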



Source: https://www.tomshardware.com/news/i...meteor-lake-architecture-launches-december-14
 

jlake3

Distinguished
Jul 9, 2014
Of course, since the NPU Acceleration Library is made for developers and not ordinary users, it's not a simple task to use it for your purposes.
...so is there no standardized, OS-level framework for all of this new "AI PC"/"AI Laptop" marketing hype to actually tie into? No way for the hardware to tell the OS "Hey, I'm an NPU and here's the data types, standards, and the version levels of those I support"?

I remember hearing that XDNA suffered from a lack of compatible software, the article about the Snapdragon X Elite laptop from the other day mentioned some special setup and requiring the use of the Qualcomm AI stack, and it seems like Intel NPUs as well are something where an average user will need the developer to build support into the program for them.

I'm admittedly not on the generative AI hype train, but there are some uses where I (or people I help with PC things) could be interested in some AI-assisted photo-retouching or webcam background removal that runs on a special accelerator for better performance at lower power (if it's not adding too much to the cost of the PC), and it seems that if I buy something marked for "AI" and touting how many TFLOPS its NPU can do... there's no guarantee things will actually be able to utilize it now or continue to support it in the future.
 

bit_user

Titan
Ambassador
...so is there no standardized, OS-level framework for all of this new "AI PC"/"AI Laptop" marketing hype to actually tie into? No way for the hardware to tell the OS "Hey, I'm an NPU and here's the data types, standards, and the version levels of those I support"?
I'm not familiar with it, but I guess DirectML might be that, for doing inference on Windows.


it seems that if I buy something marked for "AI" and touting how many TFLOPS its NPU can do... there's no guarantee things will actually be able to utilize it now or continue to support it in the future.
Check the hardware requirements of the software you want to be AI-accelerated.

IMO, it's really a lot like the situation we have with GPUs: you not only need to check hardware requirements, but even then you need benchmarks to know how well an app actually runs on a given spec. You cannot simply assume that TOPS translates directly into AI performance, any more than you could assume that the TFLOPS and GB/s of a graphics card predicted game performance. Yes, there's a correlation, but also quite a lot of variation.
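To put numbers on that (entirely made-up figures, just to illustrate the point that two parts with identical spec-sheet ratings can deliver very different results):

```python
# Toy illustration: peak TOPS alone doesn't predict delivered AI performance,
# just as a graphics card's TFLOPS alone doesn't predict frame rates.

def effective_utilization(peak_tops: float, measured_tops: float) -> float:
    """Fraction of the rated peak a workload actually achieves."""
    return measured_tops / peak_tops

# Two hypothetical NPUs, both advertised at 10 TOPS, but with different
# memory bandwidth, drivers, and software support (numbers invented):
npu_a = effective_utilization(peak_tops=10.0, measured_tops=6.5)
npu_b = effective_utilization(peak_tops=10.0, measured_tops=2.0)

print(f"NPU A: {npu_a:.0%} of peak")  # 65% of peak
print(f"NPU B: {npu_b:.0%} of peak")  # 20% of peak
```

Same number on the box, a >3x gap in practice — which is why you end up needing benchmarks for the specific app you care about.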
 

CSMajor

Prominent
Jan 2, 2023
...so is there no standardized, OS-level framework for all of this new "AI PC"/"AI Laptop" marketing hype to actually tie into? No way for the hardware to tell the OS "Hey, I'm an NPU and here's the data types, standards, and the version levels of those I support"?

I remember hearing that XDNA suffered from a lack of compatible software, the article about the Snapdragon X Elite laptop from the other day mentioned some special setup and requiring the use of the Qualcomm AI stack, and it seems like Intel NPUs as well are something where an average user will need the developer to build support into the program for them.

I'm admittedly not on the generative AI hype train, but there are some uses where I (or people I help with PC things) could be interested in some AI-assisted photo-retouching or webcam background removal that runs on a special accelerator for better performance at lower power (if it's not adding too much to the cost of the PC), and it seems that if I buy something marked for "AI" and touting how many TFLOPS its NPU can do... there's no guarantee things will actually be able to utilize it now or continue to support it in the future.
Of course. First you develop and test your model, then you deploy it via something like OpenVINO or DirectML, both of which are standardized frameworks.
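On the DirectML side, the usual route is through ONNX Runtime, where it surfaces as an execution provider. A rough sketch of how an app might detect it (assuming ONNX Runtime's Python API, where DirectML appears under the name "DmlExecutionProvider"; guarded so it still runs when onnxruntime isn't installed):

```python
import importlib.util

# DirectML support surfaces in ONNX Runtime as the "DmlExecutionProvider".
if importlib.util.find_spec("onnxruntime") is None:
    available = []
    print("onnxruntime not installed; cannot query execution providers")
else:
    import onnxruntime as ort
    available = ort.get_available_providers()
    print("available providers:", available)

HAS_DIRECTML = "DmlExecutionProvider" in available
print("DirectML available:", HAS_DIRECTML)
```

The point being: the app picks a standardized provider at runtime, rather than the user needing vendor-specific setup.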
 