News: PowerColor's new tech uses an NPU to reduce gaming power usage; vendor-provided benchmarks show up to 22.4% lower power consumption

The NPU used here regulates the power delivered to the GPU based on an algorithm.

This offloads the burden from the card's voltage regulator modules (VRMs) and MOSFETs, which is why we see power savings and lower temperatures here.

The NPU itself draws roughly 2.5 watts, though, and is soldered directly to the video card.
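PowerColor hasn't disclosed how the algorithm actually works, so purely as a mental model, here is a minimal, hypothetical sketch in Python of what a telemetry-in, power-limit-out control loop of this kind could look like. Every name, number, and threshold below is an illustrative stand-in, not PowerColor's implementation.

```python
# Hypothetical sketch of an NPU-driven power controller. Nothing here is
# PowerColor's actual design; telemetry fields, model, and numbers are
# illustrative stand-ins.
import random
import time

def read_telemetry():
    # Stand-in for reading board sensors (GPU load, VRM temperature, etc.).
    return {"gpu_load_pct": random.uniform(40, 100),
            "vrm_temp_c": random.uniform(50, 90)}

def predict_power_limit(telemetry):
    # Stand-in for the NPU inference step: a small trained model mapping
    # current telemetry to the lowest power limit expected to hold frame pacing.
    base_w = 355.0  # RX 7900 XTX reference total board power
    headroom = (100.0 - telemetry["gpu_load_pct"]) / 100.0
    return base_w * (1.0 - 0.25 * headroom)

def apply_power_limit(watts):
    # Stand-in for programming the new limit into the VRM controller.
    print(f"power limit -> {watts:.1f} W")

for _ in range(3):  # a few iterations of the control loop
    apply_power_limit(predict_power_limit(read_telemetry()))
    time.sleep(0.1)  # re-evaluate several times per second
```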

So this is how it stacks up based on the testing done by the company: Cyberpunk 2077 and Final Fantasy XV tested on a "prototype" AMD RX 7900 XTX (a quick check of the headline math follows the list):
  • Default power settings: 263.2W / 338.6W
  • AMD ECO mode settings: 214.5W / 271.4W
  • With Power Saving (NPU): 205.3W / 262.9W
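For what it's worth, the headline 22.4% figure checks out against these numbers if you assume the second value in each row is the Final Fantasy XV result; a quick sanity check:

```python
# Sanity check of PowerColor's vendor-provided wattages, assuming each row
# above is "Cyberpunk 2077 / Final Fantasy XV".
default = {"Cyberpunk 2077": 263.2, "Final Fantasy XV": 338.6}
npu     = {"Cyberpunk 2077": 205.3, "Final Fantasy XV": 262.9}

for game in default:
    saving = (default[game] - npu[game]) / default[game] * 100
    print(f"{game}: {saving:.1f}% lower with the NPU power-saving mode")
# Cyberpunk 2077: 22.0% lower with the NPU power-saving mode
# Final Fantasy XV: 22.4% lower with the NPU power-saving mode  <- headline figure
```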
 
We don't know when or where we'll next see Edge AI spring up in PowerColor's or AMD's stable.

BTW, this is PowerColor's own in-house tech (confirmed by a company rep) and has nothing to do with AMD in general. So it seems unlikely we will see other board partners/AIBs implement this tech in their upcoming GPUs in the near future, unless of course they are interested.

However, PowerColor may soon release a consumer version of this modified RX 7900 XTX. It's not going to be cheap, though.

They also reiterated that AMD's upcoming RX 8000 series would NOT be released until early 2025 (unless plans change), so they will focus on the current GPU lineup for now.
 
I was just about to wonder aloud why we couldn't use this technology to overclock the GPU, but now I understand that it doesn't affect the GPU chip's thermals/power consumption, just the supporting power regulation facets on the board. Yet another interesting application of AI. Sounds like adding this NPU is expensive, though?
 
Using an NPU alongside a GPU will certainly add to the overall cost. But how feasible and useful this feature really is in real-world gaming remains to be seen.

PowerColor might soon release a consumer version of this modified RX 7900 XTX in China, though, so I expect to see some gaming benchmarks alongside the power consumption figures.
 
I was just about to wonder aloud why we couldn't use this technology to overclock the GPU, but now I understand that it doesn't affect the GPU chip's thermals/power consumption, just the supporting power regulation facets on the board. Yet another interesting application of AI. Sounds like adding this NPU is expensive, though?
Small NPUs are very cheap. A Google Coral on an M.2 card is like $35, and is probably sufficient for such a small, simple model. They can probably buy the actual chips in bulk for $15 apiece.
 
Are we sure this isn't the GPU being 'restrained' to a smoother, lower framerate, then using the NPU to AFMF-generate frames?

The power savings seem in line with fake frames and GPU clock/rendering moderation, not 'enhanced AI power control'.
 
What is with this NPU marketing mumbo jumbo? Isn't it just a cluster of "tensor" FMAC execution units? Shouldn't GPU vendors just include them already in newer mobile GPUs?
You see, what they are making is actually a TPU (tensor processing unit because they advertise speed in TOPS), but they call it an NPU (neural processing unit) because Apple does that with their chips and they want it to sound smart like Apple tech -- monkey see, monkey do.
 
Whether the NPU proves feasible to fit to all GPUs or turns out to be too expensive to implement widely, efficiency in computing needs to be taken more seriously. Increases in performance need to be delivered by efficiency, not just by throwing more power at everything. For example, fast SSDs should not need elaborate cooling measures...
 
Whether the NPU proves feasible to fit to all GPUs or turns out to be too expensive to implement widely, efficiency in computing needs to be taken more seriously. Increases in performance need to be delivered by efficiency, not just by throwing more power at everything. For example, fast SSDs should not need elaborate cooling measures...
Agreed. I'm not using PCIe 5.0 drives until they can saturate the interface without active cooling.
 
Well, it appears that TUL Corporation, owner of the PowerColor brand, is using Kneron's NPU (Neural Processing Unit) here.

TUL will integrate this technology into its consumer brand PowerColor's Hellhound and Red Devil series graphics cards.



 
Never heard of Kneron before. But if TUL has adopted this processor, I expect other GPU brands under TUL to adopt it as well.
 
You see, what they are making is actually a TPU (tensor processing unit because they advertise speed in TOPS),
TOPS is short for "Trillions of Operations Per Second" and not limited to tensors. The operations they're counting are addition and multiplication, with multiply-accumulate or fused multiply-add counting as two ops.
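To make that concrete, here is a hypothetical example of how a vendor typically arrives at a TOPS figure; the unit counts and clock below are made-up illustrations, not Kneron's specs:

```python
# Hypothetical TOPS calculation: a MAC (multiply-accumulate) counts as two
# operations, one multiply plus one add.
mac_units = 1024        # made-up number of INT8 MAC units
clock_hz  = 1.0e9       # made-up 1 GHz clock
ops_per_second = mac_units * 2 * clock_hz
print(f"{ops_per_second / 1e12:.3f} TOPS")  # -> 2.048 TOPS
```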

they call it an NPU (neural processing unit) because Apple does that with their chips
No, the reason is that it's optimized for inferencing neural networks. They're built to do that and not much else. They also lack the programming features that would make them easy enough for more general-purpose usage, so it makes sense just to treat them as a black box.

As I've said, I'm skeptical that this graphics card is using a true NPU, but there are some designed for low-power embedded use that should be both small enough and cheap enough that it's not inconceivable they are.
 
Increases in performance need to be delivered by efficiency, not just by throwing more power at everything.
Each new generation of CPUs is more efficient than the previous one (if we're counting real generations and not just rebranding), but the performance dividends provided by those efficiency improvements aren't enough for these companies to win the performance crown, so they crank up the power to squeeze out a few more FPS.

If you run these processors within their efficiency window, you can indeed get better performance than previous-gen products at the same or less power. The problem is that the market values performance more than efficiency, so whoever delivers the best performance can command the highest prices. The only time efficiency becomes a deal-breaker is in servers and some laptops.

For example, fast SSDs should not need elaborate cooling measures...
The problem with SSDs is a little different. PCIe standards have gotten ahead of SSD tech, meaning the fastest drives needed to burn lots of power to get even close to saturating the interface. Very few people actually need a PCIe 5.0 SSD, and the benefits over a fast PCIe 4.0 drive are negligible. It's just companies fighting over a valuable market niche that have pushed them into territory where they have to burn so much power to get a performance edge.
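For context, the interface gap the controllers are chasing is easy to quantify from the raw link rates (theoretical figures with 128b/130b encoding, ignoring protocol overhead):

```python
# Theoretical x4 NVMe link bandwidth per PCIe generation (128b/130b encoding,
# protocol overhead ignored).
def x4_bandwidth_gb_s(gt_per_s_per_lane):
    return gt_per_s_per_lane * (128 / 130) / 8 * 4  # GB/s across four lanes

print(f"PCIe 3.0 x4: {x4_bandwidth_gb_s(8):.2f} GB/s")   # ~3.94 GB/s
print(f"PCIe 4.0 x4: {x4_bandwidth_gb_s(16):.2f} GB/s")  # ~7.88 GB/s
print(f"PCIe 5.0 x4: {x4_bandwidth_gb_s(32):.2f} GB/s")  # ~15.75 GB/s
```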

Not sure if you saw this, but it looks like SSD controllers might finally be starting to catch up.

BTW, I did not expect PCIe 5.0 to reach client platforms in 2021. We sure didn't need it then, and the benefits even now are still dubious (IMO). At least when AMD launched PCIe 4.0, they also had graphics cards that could use it. Even that was a marginal win, performance-wise, but if you take something like an RTX 4090 and compare it running at PCIe 3.0 vs. 4.0, at least you can now see a measurable difference.

Agreed. I'm not using PCIe 5.0 drives until they can saturate the interface without active cooling.
Last year, I bought a Samsung 990 Pro, which is still PCIe 4.0. It offers more than enough performance for me, and without the unreasonable power utilization.

BTW, the version I bought was the one without a heatsink, because it defaults to a more efficient performance profile. The motherboard has an integrated SSD heatsink, but I leave it in efficiency mode and prefer just to have it run at cooler temperatures.
 
Never heard of Kneron before.
There were indeed lots of AI hardware startups already beginning to emerge around that time (2015). The market for embedded inferencing is much higher volume and a simpler problem than training, so this is where a lot of them focused. I haven't tried to keep track of them or which are still alive and kicking.

if TUL has adopted this processor, I expect other GPU brands under TUL to adopt it as well.
Uh, what other brands do they own?
 
Uh, what other brands do they own?

Hiya bit!

The following brands come under TUL, or should I rather say used to be a part of TUL Corp.?

PowerColor (still very active).

VTX3D / Vertex3D (but the company has been out of the GPU business since 2017).

Diamond Multimedia (they have stopped making gaming cards, though). PowerColor is the parent company of this brand.

SPARKLE (active; currently their main focus is on Intel Arc GPUs, but AMD is in the pipeline). Although they say they are a private firm, TUL has been the brand's main holder. https://www.sparkle.com.tw/

DATALAND (just went bankrupt last month, lol, so we are not sure what the future holds for this company). The website is down, and so are the official JD and Tmall stores:

http://www.dataland.com.cn/Admin/StaleDatedPagePC.aspx


So basically we are looking at only two active brands!
 