News Imagination shifts AI strategy and abandons NPUs — company secures $100M in financing


bit_user

I always preferred to see phone SoCs double down on iGPUs as a way of increasing AI performance, rather than integrate special-purpose NPUs that can't be used for anything else.

I think the main disadvantage of GPUs might be that their data movement primitives are mostly DRAM-oriented and not tile-to-tile, like how a neural network prefers to operate. This loses data locality and ends up making GPUs less efficient.
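To make the locality point concrete, here's a plain-C sketch (not any vendor's actual programming model, and the sizes are made up): version A round-trips every layer's intermediate results through a big DRAM-resident buffer, while version B keeps a tile of work in a small, on-chip-sized buffer and hands it straight to the next stage, touching DRAM only at the ends.

[code]
#include <stddef.h>

#define TILE   64   /* elements that fit in one tile's local SRAM (made-up size) */
#define LAYERS 4

/* Version A: every "layer" reads and writes the big DRAM-resident buffer,
 * so any locality between layers is lost. */
static void dram_oriented(float *dram_buf, size_t n)
{
    for (int layer = 0; layer < LAYERS; layer++)
        for (size_t i = 0; i < n; i++)
            dram_buf[i] = dram_buf[i] * 0.5f + 1.0f;   /* stand-in for a layer */
}

/* Version B: a tile of work stays in a small local buffer and is handed
 * from one stage to the next; DRAM is touched once on the way in and
 * once on the way out. (Assumes n is a multiple of TILE.) */
static void tile_to_tile(const float *in, float *out, size_t n)
{
    float local[TILE];                       /* models a tile's local SRAM */
    for (size_t base = 0; base < n; base += TILE) {
        for (size_t i = 0; i < TILE; i++)
            local[i] = in[base + i];         /* one read from DRAM */
        for (int layer = 0; layer < LAYERS; layer++)
            for (size_t i = 0; i < TILE; i++)
                local[i] = local[i] * 0.5f + 1.0f;
        for (size_t i = 0; i < TILE; i++)
            out[base + i] = local[i];        /* one write back to DRAM */
    }
}
[/code]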
 

AkroZ

I always preferred to see phone SoCs double down on iGPUs as a way of increasing AI performance, rather than integrate special-purpose NPUs that can't be used for anything else.

I think the main disadvantage of GPUs might be that their data movement primitives are mostly DRAM-oriented and not tile-to-tile, like how a neural network prefers to operate. This loses data locality and ends up making GPUs less efficient.
To my knowledge there is no "tile-to-tile" instruction on an NPU; to make that possible you would have to interconnect every tile with every other tile.
The architecture of an NPU is the same as a GPGPU (just watered down), and the load/store instructions work on L2 or DDR (or sometimes TCM, like Qualcomm).
Compared to GPUs, NPUs currently have smaller operation sets (more density) and a bigger L2 cache.
That's why NPUs are contested: their selling point is that they're more efficient for AI, but it's not as if they're in use 24/7, and for software it's a nightmare to write code that runs on all the processing units for compatibility.
So only the device makers build a few features that use it; developers aren't interested, they just use the CPU or GPU if they need a neural network.
 

bit_user

To my knowledge there is no "tile-to-tile" instruction on an NPU; to make that possible you would have to interconnect every tile with every other tile.
I'm sure they have DMA engines. Data access tends to follow more regular and predictable patterns than in either CPUs or GPUs. So, they're generally not going to be using regular processor instructions for main memory access [1].

[Image: Intel Tech Tour TW slide, Lunar Lake AI hardware accelerators]


In that case, the DMA engine is just for communication between the unified scratchpad SRAM and main memory.
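The usual pattern that kind of DMA engine enables looks something like the C sketch below. This is only an illustration of ping-pong (double-buffered) staging, not Intel's actual programming interface; memcpy stands in for queuing a DMA descriptor, and the buffer size is made up.

[code]
#include <stddef.h>
#include <string.h>

#define CHUNK_BYTES 4096                           /* per scratchpad buffer (made up) */
#define CHUNK_ELEMS (CHUNK_BYTES / sizeof(float))

/* memcpy stands in for programming a DMA descriptor; on real hardware the
 * transfer would run in the background while compute proceeds. */
static void dma_copy(float *dst, const float *src, size_t n)
{
    memcpy(dst, src, n * sizeof(float));
}

static void compute(float *scratch, size_t n)
{
    for (size_t i = 0; i < n; i++)
        scratch[i] *= 2.0f;                        /* stand-in for the MAC array's work */
}

/* Double-buffered staging: compute on one scratchpad buffer while the DMA
 * engine refills the other, so the compute units never issue loads against
 * DRAM directly. (Tail handling omitted for brevity.) */
void run(const float *dram_in, float *dram_out, size_t total)
{
    static float scratch[2][CHUNK_ELEMS];          /* models the on-chip SRAM */
    int cur = 0;

    dma_copy(scratch[cur], dram_in, CHUNK_ELEMS);  /* prime the first buffer */
    for (size_t off = 0; off + CHUNK_ELEMS <= total; off += CHUNK_ELEMS) {
        if (off + 2 * CHUNK_ELEMS <= total)        /* kick off the next fill */
            dma_copy(scratch[cur ^ 1], dram_in + off + CHUNK_ELEMS, CHUNK_ELEMS);
        compute(scratch[cur], CHUNK_ELEMS);
        dma_copy(dram_out + off, scratch[cur], CHUNK_ELEMS);   /* drain the result */
        cur ^= 1;
    }
}
[/code]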

As for data organization, NPUs tend to be organized in tiles, with each tile usually having its own direct-mapped memory (i.e. not a cache). Here's the Xilinx AI Engine that formed the basis of the XDNA NPU in AMD's Phoenix [2]:

[Image: AMD AI Engine tile mesh diagram]


You can see the data paths between the tiles and you can see the so-called memory tiles, but what this view doesn't show is that those "DM" blocks are 64 kiB of local memory that's exclusive to each tile. The article text says each "Memory Tile" has 512 kiB of SRAM [2].

[Image: AMD XDNA block diagram]


What I find interesting about that is how they apparently have two levels of DMA engines! Each AI Engine has one, and then the global memory tiles also each seem to have one.
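If that's right, the staging presumably looks something like the two-level copy chain below. This is just my guess at the flow, with the 512 kiB and 64 kiB sizes taken from the article and memcpy again standing in for both DMA levels.

[code]
#include <stddef.h>
#include <string.h>

#define MEM_TILE_BYTES (512 * 1024)   /* per "Memory Tile", per the article */
#define DM_BYTES       (64 * 1024)    /* per AI Engine tile's local "DM"    */

static unsigned char mem_tile[MEM_TILE_BYTES];   /* shared memory tile */
static unsigned char dm[DM_BYTES];               /* one engine's local memory */

void stage_two_levels(const unsigned char *dram, size_t total)
{
    for (size_t a = 0; a < total; a += MEM_TILE_BYTES) {
        size_t big = (total - a < MEM_TILE_BYTES) ? total - a : MEM_TILE_BYTES;
        memcpy(mem_tile, dram + a, big);          /* level 1: memory tile's DMA */

        for (size_t b = 0; b < big; b += DM_BYTES) {
            size_t small = (big - b < DM_BYTES) ? big - b : DM_BYTES;
            memcpy(dm, mem_tile + b, small);      /* level 2: AI Engine's DMA */
            for (size_t i = 0; i < small; i++)
                dm[i] ^= 0xFF;                    /* trivial stand-in for compute */
        }
    }
}
[/code]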

The architecture of an NPU is the same as a GPGPU (just watered down),
Sure, if you squint, they have a lot of similarities. That's why AI works pretty well on GPUs, in the first place. But, instead of looking to see how many similarities they have and then concluding they're the same, it's more profitable to look at where & why they differ. That is, if you'd like to gain any sort of insight into why so many companies independently arrived at the conclusion that AI is best served by a bespoke architecture.

the load/store instructions work on L2 or DDR (or sometimes TCM, like Qualcomm).
TCM is somewhat akin to the local SRAM in the Xilinx/XDNA blocks, and it's clearly drained/filled to/from DRAM using DMA, not individual load/store instructions [3]:

[Image: Qualcomm Hexagon NPU overview diagram]


Compared to GPUs, NPUs currently have smaller operation sets (more density) and a bigger L2 cache.
I think you mistakenly assume NPUs' local RAM is cache. It's not. Wherever you see a DMA engine, that's there to service a direct-mapped memory. The difference might seem subtle, but it's not, from either a power or a programmability perspective.

By contrast, GPUs have local scratchpad memories, but not fine-grained DMA engines, as far as I've seen. They use ordinary load/store instructions, but then have a very deep SMT implementation to hide the long DRAM read latencies. SMT is cheap, but not free. You can see one of its costs in the massive size of GPUs' register files.
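For contrast with the staging sketches above, the GPU-style load/store model has no explicit data movement in the source at all; the code just indexes the big DRAM-resident arrays and leaves latency hiding to the hardware (caches plus thousands of threads in flight). A plain-C analogy of a single such thread:

[code]
/* GPU-style kernel body: loads and stores go straight at (cached) memory.
 * Nothing here manages a scratchpad; the hardware's job is to keep enough
 * threads in flight that the memory stalls are hidden, which is part of
 * why GPU register files are so large. */
void saxpy_like(const float *x, const float *y, float *out, long n, float a)
{
    for (long i = 0; i < n; i++)
        out[i] = a * x[i] + y[i];
}
[/code]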

That's why NPUs are contested: their selling point is that they're more efficient for AI, but it's not as if they're in use 24/7
Intel developed an entire, dedicated block for "ambient AI", called GNA.

"Intel® GNA is not intended to replace typical inference devices such as the CPU, graphics processing unit (GPU), or vision processing unit (VPU). It is designed for offloading continuous inference workloads including but not limited to noise reduction or speech recognition to save power and free CPU resources."

I guess they feel those use cases can adequately be addressed by their NPU, because they've ceased developing GNA beyond version 3.0. However, Qualcomm also added a dedicated, always-on AI engine they call the "Sensing Hub" [4]:

"we added a dedicated always-on, low-power AI processor and we’re seeing a mind-blowing 5x AI performance improvement.

The extra AI processing power on the Qualcomm Sensing Hub allows us to offload up to 80 percent of the workload that usually goes to the Hexagon processor, so that we can save even more power. All the processing on the Qualcomm Sensing Hub is at less than 1 milliamps (mA) of power consumption."

and for software it's a nightmare to write code that runs on all the processing units for compatibility.
So only the device makers build a few features that use it; developers aren't interested, they just use the CPU or GPU if they need a neural network.
Android has a simplified API for using NPUs that's akin to popular deep learning frameworks. It's the NPU equivalent of what APIs like OpenGL and Vulkan are for GPUs.
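For a taste of what that looks like, here's roughly how a one-operation model (element-wise add) gets built and run against Android's NNAPI C interface. I'm going from memory of the NDK headers, so treat the details (especially the fused-activation operand that ADD requires) as approximate and check the docs; error checking is omitted. The point is that the app never says "run this on the NPU" - it states a preference and the runtime picks the CPU, GPU, or NPU.

[code]
#include <android/NeuralNetworks.h>
#include <stdint.h>

/* Element-wise add of two float vectors via NNAPI (error checks omitted). */
void add_on_npu_maybe(const float *a, const float *b, float *out, uint32_t len)
{
    uint32_t dims[1] = { len };
    ANeuralNetworksOperandType tensorType = {
        .type = ANEURALNETWORKS_TENSOR_FLOAT32,
        .dimensionCount = 1, .dimensions = dims,
        .scale = 0.0f, .zeroPoint = 0,
    };
    ANeuralNetworksOperandType scalarType = {
        .type = ANEURALNETWORKS_INT32,
        .dimensionCount = 0, .dimensions = NULL,
        .scale = 0.0f, .zeroPoint = 0,
    };

    ANeuralNetworksModel *model = NULL;
    ANeuralNetworksModel_create(&model);
    ANeuralNetworksModel_addOperand(model, &tensorType);   /* 0: input a   */
    ANeuralNetworksModel_addOperand(model, &tensorType);   /* 1: input b   */
    ANeuralNetworksModel_addOperand(model, &scalarType);   /* 2: fuse code */
    ANeuralNetworksModel_addOperand(model, &tensorType);   /* 3: output    */

    int32_t fuse = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &fuse, sizeof(fuse));

    uint32_t opIn[3] = { 0, 1, 2 }, opOut[1] = { 3 };
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD, 3, opIn, 1, opOut);

    uint32_t modelIn[2] = { 0, 1 }, modelOut[1] = { 3 };
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, modelIn, 1, modelOut);
    ANeuralNetworksModel_finish(model);

    /* The preference is only a hint; the runtime decides where the work runs. */
    ANeuralNetworksCompilation *compilation = NULL;
    ANeuralNetworksCompilation_create(model, &compilation);
    ANeuralNetworksCompilation_setPreference(compilation, ANEURALNETWORKS_PREFER_LOW_POWER);
    ANeuralNetworksCompilation_finish(compilation);

    ANeuralNetworksExecution *execution = NULL;
    ANeuralNetworksExecution_create(compilation, &execution);
    ANeuralNetworksExecution_setInput(execution, 0, NULL, a, len * sizeof(float));
    ANeuralNetworksExecution_setInput(execution, 1, NULL, b, len * sizeof(float));
    ANeuralNetworksExecution_setOutput(execution, 0, NULL, out, len * sizeof(float));
    ANeuralNetworksExecution_compute(execution);            /* blocking run */

    ANeuralNetworksExecution_free(execution);
    ANeuralNetworksCompilation_free(compilation);
    ANeuralNetworksModel_free(model);
}
[/code]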

As mentioned above, Qualcomm flexibly migrates AI jobs between its Sensing Hub and Hexagon DSP, which it can only do because both expose the same API. Qualcomm and others also enable flexible AI workload sharing/migration involving the GPU block.

References:
  1. https://www.anandtech.com/show/2142...chitecture-deep-dive-lion-cove-xe2-and-npu4/4
  2. https://chipsandcheese.com/2023/09/16/hot-chips-2023-amds-phoenix-soc/
  3. https://chipsandcheese.com/2023/10/04/qualcomms-hexagon-dsp-and-now-npu/hexagon_npu_overview/
  4. https://www.qualcomm.com/news/onq/2...ities-qualcomm-snapdragon-888-mobile-platform
 