News: Leakers revise RTX 5090 and RTX 5080 power draw to 575W and 360W respectively — up to 27% higher than last-generation RTX 40 GPUs

These are peak power usage ratings. While they need to be accounted for, they don't tell us how much power the cards will typically draw. The 4090 was rated 100W higher than the 3090 yet could use less power when gaming.

power-gaming.png


Granted, there's no significant node improvement like with Ada. Reviews aren't far off.
 
I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.
It’s a lot like cars in the 70s. The biggest, baddest, fastest cars were the least efficient and greatest waste generators. But still, you could buy a Datsun to meet your needs.
 
I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.
We should see the top-performing cards performing at the same level, or no more than 3-5 percent better than the previous generation, but using much less power. The chase for ever-increasing power consumption, when technology is giving us the chance to slow things down, is disappointing.
 
With these power numbers you have to start thinking NV is hitting a performance wall like Intel did in the old days: more power for little true real-world gain.
I think AMD is doing the right thing: concentrate on the low/mid tiers, AI, and power efficiency, and come back in a couple of years to surprise NV.
 
I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.
This is a bunch of horse s**t. You don't seem to know what the word "efficiency" means, or you're just a little slow.

GPUs are getting more efficient every launch; they're able to perform more work for every watt consumed compared to the models they replace. The 4090 uses less power on average than the 3090 but runs OVER 50% faster. These efficiency gains are absolutely mind-blowing. It's exactly why Nvidia can't satisfy demand and their stock has been skyrocketing.

At the same time, the top-end GPUs are bigger leaps than before. If you don't want a 575W GPU, then don't spend $2,000+ on the biggest GPU money can buy.

power-gaming.png
relative-performance_3840-2160.png
 
We should see the top-performing cards performing at the same level, or no more than 3-5 percent better than the previous generation, but using much less power. The chase for ever-increasing power consumption, when technology is giving us the chance to slow things down, is disappointing.
That's likely true in a direct core-to-core comparison, but with more cores, likely higher clock speeds, and gobs more memory bandwidth, it'll be hard to compare directly.

I expected GDDR7 plus a 512-bit bus to take a big bite out of the power budget, but now I'm seeing rumors of GDDR7 in laptops, so maybe that stuff is more efficient than I think.
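
Just to put rough numbers on the "gobs more memory bandwidth" point, here's the back-of-the-envelope math as a quick Python sketch. The 512-bit bus is from the leaks discussed in this thread; the 28 Gbps per-pin GDDR7 rate is purely my own assumption, not a confirmed spec.

```python
# Back-of-the-envelope memory bandwidth estimate (assumed numbers, not confirmed specs).
bus_width_bits = 512      # rumored 5090 bus width from this thread
data_rate_gbps = 28       # assumed GDDR7 per-pin speed; the real figure could differ

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8   # gigabytes per second
print(f"~{bandwidth_gb_s:.0f} GB/s")                   # ~1792 GB/s with these assumptions
```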
With these power numbers you have to start thinking NV is hitting a performance wall like Intel did in the old days: more power for little true real-world gain.
I think AMD is doing the right thing: concentrate on the low/mid tiers, AI, and power efficiency, and come back in a couple of years to surprise NV.
Nah. No power wall in sight. A fully enabled AD102 (RTX 6000 Ada) at 300W is a very good performer. Contrast that with 1,200W datacenter GPU packages and it's plain there's a very wide range of power targets available for whatever market they want to aim at.
 
The numbers mean nothing until we see performance per watt. If they pushed top-end power 20% higher but the cards perform 30% faster across the board, then that is an efficiency gain... not a big one, but that is how efficiency works. It's not about less power, it's about performance per watt.
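
To put numbers on that, here's the performance-per-watt arithmetic using the hypothetical 20%/30% figures above (illustrative only, not real Blackwell results):

```python
# Hypothetical numbers from the paragraph above, not measured results.
perf_gain = 1.30    # 30% faster than the previous generation
power_gain = 1.20   # 20% higher board power than the previous generation

perf_per_watt_gain = perf_gain / power_gain
print(f"Relative performance per watt: {perf_per_watt_gain:.3f}")  # ~1.083, i.e. ~8% more efficient
```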

I think the big push will be DLSS 4 and comparing DLSS performance per watt. I think there will be a large jump there compared to non-DLSS performance, and long term that will be where Nvidia's largest gains go. With the first DLSS-capable Nvidia-based console coming shortly in the Switch 2, I think it will get even wider adoption and we will start to see that be the performance leader.

AI will slowly overshadow all of it and make performance comparisons that much harder.

To me it is more a price-per-performance issue, which I don't think they will control, but we can hope they go the Super route instead of the 4080 route.
 
The 5090 is moving to a 512-bit bus from the 384-bit bus of the 3090/4090, and is a very large chip, about twice the size of the 5080, so high power draw isn't unexpected. But it will be a lot lower than 575W in almost any gaming scenario.
 
These are peak power usage ratings. While they need to be accounted for, they don't tell us how much power the cards will typically draw. The 4090 was rated 100W higher than the 3090 yet could use less power when gaming.
You shouldn't use the launch review numbers because they're not accurate. Those numbers were from their 5800X test platform, which was holding back performance. It's still true that the power numbers are peak and won't max out the card for every title, but the 4090 definitely doesn't use less power than the 3090.

power-gaming.png

power-raytracing.png
 
There was always likely to be a power increase due to the lack of a node shrink. Nvidia always seems to have a very specific plan for where the performance of their parts falls, so this is likely to ensure those margins. In the case of the 5090 there are additional reasons for the larger increase, since it's a bigger chip with more memory and a wider memory bus.

I don't have any interest in having something with 5090 levels of power consumption, but high-end cards landing in the 300-450W space doesn't seem unreasonable. With any luck the increases down the stack will be kept to a minimum. Of course I have no plans on buying another Nvidia-based video card at this point, but who knows what the future holds.
 
You shouldn't use the launch review numbers because they're not accurate. Those numbers were from their 5800X test platform, which was holding back performance. It's still true that the power numbers are peak and won't max out the card for every title, but the 4090 definitely doesn't use less power than the 3090.

power-gaming.png

power-raytracing.png
This is the test used to measure power usage: 60% faster than the 3090 while using less power. It does not look like it is being held back any. Power usage is going to vary between games; I'm not saying the 4090 will always use less.

https://tpucdn.com/review/nvidia-ge...s-edition/images/cyberpunk-2077-2560-1440.png
 
This is the test used to measure power usage: 60% faster than the 3090 while using less power. It does not look like it is being held back any. Power usage is going to vary between games; I'm not saying the 4090 will always use less.

https://tpucdn.com/review/nvidia-ge...s-edition/images/cyberpunk-2077-2560-1440.png
It definitely is, and that was a driver behind them changing their test platform in the middle of the 40-series launch cycle (they moved over to RPL shortly after launch, and I think only the 4090/4080 were reviewed with the 5800X system): https://www.techpowerup.com/review/rtx-4090-53-games-ryzen-7-5800x-vs-core-i9-12900k/

They still use CP2077 for gaming power testing; they just shifted to 4K with low textures. This puts a more realistic load on high-end cards.
 
The 4090 uses less power on average than the 3090 but runs OVER 50% faster. These efficiency gains are absolutely mind-blowing. It's exactly why Nvidia can't satisfy demand and their stock has been skyrocketing.
I think it's more the datacenter demand and the current demand for AI that are having the biggest effect on the stock price, not the consumer-oriented GPUs.
 
With these power numbers you have to start thinking NV is hitting a performance wall like Intel did in the old days: more power for little true real-world gain.
I think AMD is doing the right thing: concentrate on the low/mid tiers, AI, and power efficiency, and come back in a couple of years to surprise NV.
In my opinion, it is not that Nvidia has hit a performance wall; the reality is that it is getting very challenging to shrink transistors. As a result, we have been on 5nm and 3nm for quite a number of years now. There are a lot of rumors that TSMC's 2nm is so costly that even Apple prefers to stick with 3nm for their next-gen SoC. In this case, Nvidia chose to stick with 5nm for their Blackwell chips, the same as Ada. So unsurprisingly, the power consumption will increase substantially.
 
These are peak power usage ratings. While they need to be accounted for, they don't tell us how much power the cards will typically draw. The 4090 was rated 100W higher than the 3090 yet could use less power when gaming.

power-gaming.png


Granted, there's no significant node improvement like with Ada. Reviews aren't far off.
I may be wrong, but Ada is likely an outlier because Nvidia transitioned from a dated Samsung 8nm node for Ampere to TSMC's cutting-edge 5nm for Ada. They may have underestimated the improvements from the node when designing/developing Ada, so across the board all the Ada-based GPUs are very efficient, and you could probably squeeze more performance out of them if Nvidia had not put a hard power cap on them. In my experience, I am able to undervolt an RTX 4070 Ti Super from 285W to almost 200W under the same gaming load.
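
For anyone curious about trying something similar without touching the voltage/frequency curve in a tool like Afterburner, a simpler knob is the driver power limit. Here's a minimal sketch using the pynvml bindings; the GPU index and 200W target are just examples, changing the limit needs admin/root rights, and a power cap is not a true undervolt, just a ceiling.

```python
# Minimal sketch: cap board power via NVML (a power limit, not a real undervolt).
# Requires the nvidia-ml-py package and admin/root privileges to change the limit.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the card of interest is GPU 0

default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(gpu)
print(f"Default limit: {default_mw / 1000:.0f} W")

# Example target of 200 W (the API takes milliwatts); pick your own value.
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, 200_000)

pynvml.nvmlShutdown()
```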

Blackwell is not that far off, so we will know in another month or so. I do think the GDDR7 will be very power hungry, and the increase in CUDA cores in the 5090 will result in significant power draw on the same TSMC 5nm. Furthermore, Nvidia is also including more types of processing units on the chip for AI, DLSS, RT, etc., all of which will require power.
 
It definitely is, and that was a driver behind them changing their test platform in the middle of the 40-series launch cycle (they moved over to RPL shortly after launch, and I think only the 4090/4080 were reviewed with the 5800X system): https://www.techpowerup.com/review/rtx-4090-53-games-ryzen-7-5800x-vs-core-i9-12900k/

They still use CP2077 for gaming power testing; they just shifted to 4K with low textures. This puts a more realistic load on high-end cards.

There isn't enough information given by TechPowerUp to conclude why the power usage is different between the results we posted. Your bottlenecking claim doesn't hold water when most cards saw a power increase. It could be a different benchmark location within the game, or different settings. A 3060 Ti isn't getting bottlenecked at 1440p in CP2077. Regardless, it doesn't invalidate my point at all that TDP does not tell us what typical usage will be. The 3090 Ti and 4090 have the same TDP, yet even in the chart you posted, the two aren't close at all in power usage.
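
On the TDP-vs-typical-usage point: if anyone wants to see what their own card actually pulls during a game instead of trusting the box number, a rough sampling loop over NVML works. A minimal sketch with pynvml follows; reported board power and sampling behavior vary by card and driver, so treat the output as approximate.

```python
# Rough sketch: sample actual board power for a while and compare to the rated limit.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)                      # assumes GPU 0
limit_w = pynvml.nvmlDeviceGetPowerManagementLimit(gpu) / 1000  # rated limit in watts

samples = []
for _ in range(60):                                             # ~60 seconds while a game is running
    samples.append(pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000)  # instantaneous draw in watts
    time.sleep(1)

print(f"Rated limit: {limit_w:.0f} W")
print(f"Average draw: {sum(samples) / len(samples):.0f} W, peak: {max(samples):.0f} W")
pynvml.nvmlShutdown()
```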
 
I kept waiting for the industry to hit a correction and focus on efficiency for a while, but sadly, consumers are voting with their wallets and don't seem to care.

Considering how dismally Arrow Lake has been selling, while only the 9800X3D among Zen 5 parts is selling out regularly, it's clear that performance is king.
 
There isn't enough information given by TechPowerUp to conclude why the power usage is different between the results we posted. Your bottlenecking claim doesn't hold water when most cards saw a power increase.
Nothing outside of the high-end cards went up by any real amount, and as I already said, they switched to 4K with low textures on top of the faster platform.
Regardless, it doesn't invalidate my point at all that TDP does not tell us what typical usage will be.
I literally agreed with you on this in my first post; I simply wanted to correct the part about the 3090 using more power than the 4090.