Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

I think people are looking at two different things here:

1 - How good is it?
2 - How good is it for the money?

That's two very different questions. I think @JarredWaltonGPU is looking at the first whereas a lot of people responding to the review are looking at the second. I certainly can't afford a 5090 and might be looking at a 5060 when it comes out, but that doesn't mean if someone gave me a 5090 I'd be unhappy with it. Also, Jarred has pointed out that the resulting score is currently tentative as there is more testing being done. So at this point, the rating is based on "this is the best consumer card out there", not "this is the best consumer card most people should spend their money on" - that'll probably be around the 5070, if Nvidia's track record is anything to go by.

Mindstab Thrull
 
This is like that girl you've been trying to hook up with for the last 2 years … it finally happens, and you realize it overpromised and under-delivered. The sandwich you made last night was the real highlight!
 
Personally I would give it 3 stars at best: it's more expensive, not necessarily faster depending on the workload, pickier about which CPUs it works best with, and uses more power. I wouldn't call that a guaranteed winner, though it does have a very impressive cooler.
You're going to cherry-pick one test? lol no. It is unequivocally faster. Giving it at best 3 stars, and probably more like 2.5 in your view, is stupid. And I say that as someone who has zero intention of buying this card.
 
>As stated, the score is tentative based on other testing that I’m still working on completing.

A numeric score will always be controversial regardless of how "accurate" it is. People emphasize different things. It's like movie reviews: some reviews have a score, some don't. I suppose a score is good for drive-by readers who spend less than a minute reading the pros/cons and the TL;DR.

I find some aspects of the review superfluous, if not irrelevant. The notion of "value" is one. Gamers who pay $2K+ for a GPU don't care about bang/buck or perf/watt -- or at least, those are bottom priorities. Non-gamers, i.e. the professionals who buy a 5090 for productivity work, don't care either. $2K may be expensive for gaming purposes, but it is cheap for AI or any use that earns you an income.

I get that every PC topic revolves around gaming here. But the 5090, like the 4090, is a multi-purpose product, and gaming is arguably the least important use for its main target demographic. I thought the $2K pricing was on the low end; Nvidia could've priced it at $2.5K and it still would've been well received, albeit not by gamers.

Below is a review from a mainly non-gaming perspective. I also agree with its take on MFG: regardless of how useful one thinks MFG is, it's better to have it than not. (Also, note that nowhere is the notion of "value" raised. Value for pros is different from value for gamers.)

>Obviously, at $2000 this is going to be way out of reach of most gamers. And I really don’t think Nvidia cares that much. There will be plenty of businesses and professionals and AI researchers that will pay $2000 or more and consider themselves lucky.

Spot on. The problem you face is that 99% of your readers here are gamers. Your review, as with most other reviews, focuses on gaming. For a gaming review then, I think your readers would've preferred you take a more downbeat tone to match their negative sentiment. We like reviews more when they agree with our preconceived notions. (I don't care either way, as I don't have a dog in this hunt.)

A couple of other reviews call the 5090 a "4090 Ti," which I think is an accurate and succinct take. But it's just a label, like people (you?) calling the 4060 a "4050." The 4060 is selling well, just as the 5090 will sell well, despite the moaning and groaning from a handful of malcontents (who likely aren't in the target market anyway).


>Now, if we were combining something like Reflex 2 in-painting and projection and time warp with frame generation? That’s what we actually need to see.

>Render frame one, sample user input while projecting frames two, three, and four. Then render frame five and project with user input sampling and projecting the next three frames… That would actually make a game look and feel responsive, rather than just smoothing frames.

I disagree, at least conceptually. For FG to work, you have to know the end point, ie the next actual frame. IOW, it has to be interpolation, not just (linear) projection from current & past motion. Your argument, from a previous post, that with a high enough frame rate, the projection errors would be fleeting enough to not be noticeable is wrong. The 3:1 ratio of generated-to-rendered frames is the same regardless of framerate, and will definitely be noticeable as visual artifacts. The only way to mitigate these errors is to use interpolation, ie to take the actual end frame into account. Faster framerate does nothing in this regard.

We've discussed this before. I've argued for FG to become a larger factor in gaming, and to a large extent, that has become true. FG is a recommended setting in many game optimization guides, and it has become a default setting in a number of AAA titles. Common sense says that latency can be further mitigated, eg by generating frames with data from the end-frame as that is being rendered, but not needing to wait for the end-frame to finish rendering.
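To put rough numbers on that wait (a back-of-the-envelope sketch with assumed values, not anything measured from DLSS internals):

```python
# Back-of-the-envelope latency sketch (assumed numbers, not measurements):
# interpolation-based frame gen has to hold rendered frame N until frame N+1
# exists, so the displayed stream runs roughly one rendered-frame interval
# behind, no matter how many frames get generated in between. Projection /
# extrapolation from frame N would avoid that hold entirely.

def interpolation_hold_ms(rendered_fps):
    """Extra delay from waiting for the next rendered frame before any
    interpolated frames can be shown (ignores generation and queue overhead)."""
    return 1000.0 / rendered_fps

for fps in (30, 60, 120):
    print(f"{fps:3d} rendered fps -> ~{interpolation_hold_ms(fps):.1f} ms of added hold")
```

That hold shrinks as the rendered framerate rises, and disappears entirely if you can project from the frame being rendered instead of waiting for it to finish, which is the mitigation I'm describing above.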

Maybe Nvidia will do this, or maybe not. I think we can agree that Nvidia sets the pace here, and AMD & Intel are content to be followers, at least for gaming. I don't think we should expect much innovation in the gaming sphere, as most incentive to differentiate (and improve) will be on AI, for both consumer and business use. AI accelerator cum GPU will be a thing, and my projection is that's where Intel & AMD will try to make their mark.
 
I think the biggest takeaway I have from the 5090 launch is that Nvidia didn't do much to improve the architecture itself between Ada and Blackwell. They've added bells and whistles, and it's undoubtedly better for "AI" workloads, but that just doesn't translate to a generational uplift. The additional memory bandwidth doesn't seem to be very important given the massive gen-on-gen increase, which leads me to believe this too was mostly driven by "AI" (24Gb GDDR7 chips would have gotten them 36GB of total capacity on a 384-bit bus).
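For the capacity math (just standard GDDR channel arithmetic, nothing vendor-specific):

```python
# Capacity arithmetic only: each GDDR7 device sits on a 32-bit channel, so
# the bus width fixes the chip count and the per-chip density fixes capacity.

def total_capacity_gb(bus_width_bits, chip_density_gbit):
    chips = bus_width_bits // 32           # one device per 32-bit channel
    return chips * chip_density_gbit / 8   # gigabits -> gigabytes

# Hypothetical 384-bit card with 24Gb (3GB) GDDR7 devices, as described above:
print(total_capacity_gb(384, 24))   # -> 36.0 GB
# The shipping 5090: 512-bit bus with 16Gb (2GB) devices:
print(total_capacity_gb(512, 16))   # -> 32.0 GB
```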

From what I'm seeing, it seems that at least the FE model cannot really be used with an air-cooled CPU. Hopefully this is due to the FE cooler design rather than just the higher power consumption.

These are the things I've found disappointing regarding the launch since with the halo cards I don't particularly care about the cost as they've always been absurdly expensive.

I hope the more affordable cards will make more sense, though it seems like one can draw a line from core count increase to rough performance increase. Personally speaking, the best part of these launches is that there hasn't been a compelling performance-per-dollar increase to make me want to replace my card.
 
We've discussed this before. I've argued for FG to become a larger factor in gaming, and to a large extent, that has become true. FG is a recommended setting in many game optimization guides, and it has become a default setting in a number of AAA titles.
One of the uses of FG is to save power (for whatever reason, be it cost or high temperatures/noise). So can the reviewer check whether it is possible to set a game, for example CP2077, to enable FG/MFG mode with a frame rate limit or vsync, or whether Reflex 1/2 will block it?
 
Not really impressive. The fact that this architecture needed a respin and was overheating in data centers, along with Nvidia removing the hotspot readout, leads me to believe it's not performing as well as they planned. Not enough to justify a $400 price increase.

But it doesn't matter. Nvidia would rather use their TSMC allocation on data center dies.
 
  • Like
Reactions: atomicWAR
Even so, would you consider an upgrade? If memory serves, you have a 4090, right?
First off, excellent memory. Second, I don't see any reason to upgrade for a roughly 25% increase in raster. DLSS and frame gen are great, but I need the base/non-AI frame rate to get a better boost. As a 4K@144Hz panel gamer, I anticipate I will be well served by my 4090 for the foreseeable future. I just could not justify paying $2K, or likely more, for a very inconsistent card with such a low increase in overall performance. If Nvidia wants my money, I need to see at least a 60% (preferably 80%+) increase in my fps, which is why I typically upgrade every other generation. I saw this coming for a host of reasons, not the least of which is the need for a better node (not Nvidia's fault with current TSMC pricing), plus even 4090s can be CPU-bottlenecked in some games at 4K, and I figured correctly that this translated into even larger bottlenecks with the 5090. After Zen 6 drops and the RTX 60-series hits the market... that is when I see myself making a move.
 
No one takes "value

In volume, the 5090 compared to the overall GPU market in its entirety is ridiculously small. This is like going on a car forum and complaining about Lambo prices, or a watch forum and complaining about Rolex. People were yelling and telling me the same thing back when I got a GTX 280 and 295 almost 20 years ago: "why are you buying that, man, it's so terrible, no value." Cool, bro. "Value" is not something people think about at this level, and honestly shouldn't be thought about. At this level, you either want it or you don't. And there's nothing wrong with being in either of those camps.
Is it though that small? I think you haven't watched the GPU market in the last 10 years … maybe if you think about just gamers … but these cards aren't going to be used just by gamers, which is why they are hard to come by. Content creators, crypto miners, and AI users have adopted these cards as an entry point for those use cases, as they are actually on the cheap end for them. I'd wager if you expand the use cases, you'll see the forest and the giant money pool Jensen dives into daily.
 
  • Like
Reactions: JarredWaltonGPU
I saw this coming for a host of reasons, not the least of which is the need for a better node (not Nvidia's fault with current TSMC pricing)

Thanks for taking the time to reply.

So, let me get this straight: the node is pretty much the reason we didn't see a generational leap in performance, similar to the one from 3090 to 4090?
 
  • Like
Reactions: atomicWAR
Is it though that small?

Yes.

The only people using one of these for "AI" are home users messing around.

Companies with actual budgets use or lease these


 
  • Like
Reactions: TCA_ChinChin
I think the biggest takeaway I have from the 5090 launch is that Nvidia didn't do much to improve the architecture itself between Ada and Blackwell. They've added bells and whistles, and it's undoubtedly better for "AI" workloads, but that just doesn't translate to a generational uplift. The additional memory bandwidth doesn't seem to be very important given the massive gen-on-gen increase, which leads me to believe this too was mostly driven by "AI" (24Gb GDDR7 chips would have gotten them 36GB of total capacity on a 384-bit bus).

From what I'm seeing, it seems that at least the FE model cannot really be used with an air-cooled CPU. Hopefully this is due to the FE cooler design rather than just the higher power consumption.

These are the things I've found disappointing regarding the launch since with the halo cards I don't particularly care about the cost as they've always been absurdly expensive.

I hope the more affordable cards will make more sense, though it seems like one can draw a line from core count increase to rough performance increase. Personally speaking, the best part of these launches is that there hasn't been a compelling performance-per-dollar increase to make me want to replace my card.
They don't have to … and this is when the game of Monopoly gets boring … turning over the board and storming out of the room isn't even fun. About the best you can do is hide the car so no one can pick it next round. But this game is just you paying rent until bankruptcy.
 
>As stated, the score is tentative based on other testing that I’m still working on completing.

A numeric score will always be controversial regardless of how "accurate" it is. People emphasize different things. It's like movie reviews: some reviews have a score, some don't. I suppose a score is good for drive-by readers who spend less than a minute reading the pros/cons and the TL;DR.

I find some aspects of the review superfluous, if not irrelevant. The notion of "value" is one. Gamers who pay $2K+ for a GPU don't care about bang/buck or perf/watt -- or at least, those are bottom priorities. Non-gamers, i.e. the professionals who buy a 5090 for productivity work, don't care either. $2K may be expensive for gaming purposes, but it is cheap for AI or any use that earns you an income.

I get that every PC topic revolves around gaming here. But the 5090, like the 4090, is a multi-purpose product, and gaming is arguably the least important use for its main target demographic. I thought the $2K pricing was on the low end; Nvidia could've priced it at $2.5K and it still would've been well received, albeit not by gamers.

Below is a review from a mainly non-gaming perspective. I also agree with its take on MFG: regardless of how useful one thinks MFG is, it's better to have it than not. (Also, note that nowhere is the notion of "value" raised. Value for pros is different from value for gamers.)

>Obviously, at $2000 this is going to be way out of reach of most gamers. And I really don’t think Nvidia cares that much. There will be plenty of businesses and professionals and AI researchers that will pay $2000 or more and consider themselves lucky.

Spot on. The problem you face is that 99% of your readers here are gamers. Your review, as with most other reviews, focuses on gaming. For a gaming review then, I think your readers would've preferred you take a more downbeat tone to match their negative sentiment. We like reviews more when they agree with our preconceived notions. (I don't care either way, as I don't have a dog in this hunt.)

A couple of other reviews call the 5090 a "4090 Ti," which I think is an accurate and succinct take. But it's just a label, like people (you?) calling the 4060 a "4050." The 4060 is selling well, just as the 5090 will sell well, despite the moaning and groaning from a handful of malcontents (who likely aren't in the target market anyway).


>Now, if we were combining something like Reflex 2 in-painting and projection and time warp with frame generation? That’s what we actually need to see.

>Render frame one, sample user input while projecting frames two, three, and four. Then render frame five and project with user input sampling and projecting the next three frames… That would actually make a game look and feel responsive, rather than just smoothing frames.

I disagree, at least conceptually. For FG to work, you have to know the end point, ie the next actual frame. IOW, it has to be interpolation, not just (linear) projection from current & past motion. Your argument, from a previous post, that with a high enough frame rate, the projection errors would be fleeting enough to not be noticeable is wrong. The 3:1 ratio of generated-to-rendered frames is the same regardless of framerate, and will definitely be noticeable as visual artifacts. The only way to mitigate these errors is to use interpolation, ie to take the actual end frame into account. Faster framerate does nothing in this regard.

We've discussed this before. I've argued for FG to become a larger factor in gaming, and to a large extent, that has become true. FG is a recommended setting in many game optimization guides, and it has become a default setting in a number of AAA titles. Common sense says that latency can be further mitigated, eg by generating frames with data from the end-frame as that is being rendered, but not needing to wait for the end-frame to finish rendering.

Maybe Nvidia will do this, or maybe not. I think we can agree that Nvidia sets the pace here, and AMD & Intel are content to be followers, at least for gaming. I don't think we should expect much innovation in the gaming sphere, as most incentive to differentiate (and improve) will be on AI, for both consumer and business use. AI accelerator cum GPU will be a thing, and my projection is that's where Intel & AMD will try to make their mark.
If you look, I do a lot of non-gaming tests as well. Yes, gaming is a big focus, but it's not the only focus. And I also have to consider the fact that, barring a 5090 Ti or Blackwell Titan, this will undoubtedly be the fastest GPU around for the next two years. How can the fastest cards around with new features that will probably be sold out warrant a 3-star score (as someone else suggested)? That's ludicrous to me.

This is an amazing piece of hardware, even if it's not 50% faster than the 4090 in every scenario. And that performance right now is clearly a product of early drivers. These are literally the first public (ish) drivers for 5090. Nvidia internally can't test everything, and bugs and issues will slip through. The 9800X3D vs 13900K results prove how messed up things can be at times. Do I dock half a star for what will inevitably be a relatively fleeting problem? Again, I don't think that's warranted.

(Intel's driver problems are a different story, because we've seen the same things again and again for over two years. New games come out, we get some oddities. But the 4090 and 4080 Super worked fine with these drivers and it was only Blackwell having issues, and not even consistently so. I anticipate fixes will come in the next month or so.)

As you properly point out, value when you're looking at this sort of hardware is incredibly subjective. If you ONLY care about games, with zero interest in AI? Sure, it's probably a 3.5-star card because it's very expensive for relatively minor performance gains. More heavy RT testing will show larger gains, though, and that's what I'm currently working on doing.

Framegen and MFG are going to be very subjective as well. My personal experience with framegen is that you need a base framerate of 40~45 for it to feel "good" — meaning 80~90 FPS after framegen (and without framegen you'd probably be getting 55~65 FPS because FG only adds ~50% on Nvidia). If that same rule of thumb applies, we'll need MFG to provide 160~180 FPS, which means you'll want a 240 Hz display for it to be "useful," even on lower tier 50-series GPUs. I don't think a 5070 is going to do 4K at 160+ FPS without performance mode upscaling, though... but DLSS Transformers maybe makes that less of a concern.
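Spelled out as arithmetic, with the caveat that the 40~45 FPS floor is a subjective feel threshold rather than anything measured:

```python
# The rule of thumb above as simple arithmetic:
# displayed fps = base (rendered) fps * frame-gen multiplier.

FEELS_GOOD_BASE_FPS = (40, 45)   # subjective minimum rendered fps before framegen

for multiplier in (2, 4):        # 2x = regular framegen, 4x = MFG
    lo, hi = (fps * multiplier for fps in FEELS_GOOD_BASE_FPS)
    print(f"{multiplier}x: ~{lo}-{hi} FPS displayed")
# 2x -> ~80-90 FPS (a 120/144 Hz panel covers it)
# 4x -> ~160-180 FPS (which is why a 240 Hz display starts to make sense)
```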

Anyway, initial testing out of the way, I'm now poking at MFG and DLSS 4 stuff to try to determine what is and isn't a good experience. Stay tuned... that page six needs a lot of additional testing and data! 🙃
 
Products like this should receive 3 stars max. Great performance, but at what cost? Is it the right direction for power draw to increase with each iteration? Is it worth chasing max performance every time? For me it would be perfect if the 5000 series had stayed at the same TDP as the previous one - meaning a better design, a better GPU - with an understandably lower performance increase compared to the 4000 series. And then the 6000 series could even reverse direction: higher performance with a drop in TDP.

And secondly, how come a 500W GPU can be air-cooled, but nerds on forums will claim you absolutely NEED water cooling for a 125W Ryzen, because "high-end"? Yeah, I know 125W means more actual draw.

I think you're forgetting that a CPU has an IHS (integrated heat spreader), which is an extra barrier rather than direct contact. No such thing for a GPU; it's a bare die, so transferring heat away is more efficient.
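As a purely illustrative sketch with made-up (but plausible) thermal resistances, not measured values, just to show why the extra interface layers matter:

```python
# Purely illustrative: assumed thermal resistances in C/W, not measured data.
# The point is only that each added interface (die -> solder -> IHS -> paste)
# adds resistance, so the same wattage produces a bigger temperature rise.

def delta_t(power_w, stack_c_per_w):
    # Steady state: temperature rise = power * total thermal resistance.
    return power_w * sum(stack_c_per_w)

bare_die_stack = [0.02, 0.10]         # die->TIM, TIM->cooler base (assumed)
ihs_stack      = [0.05, 0.02, 0.15]   # die->solder, solder->IHS, IHS->paste->cooler (assumed)

print(f"125 W through an IHS stack: ~{delta_t(125, ihs_stack):.0f} C rise to the cooler")
print(f"125 W off a bare die:       ~{delta_t(125, bare_die_stack):.0f} C rise to the cooler")
```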
 
  • Like
Reactions: TJ Hooker
I think people are looking at two different things here:

1 - How good is it?
2 - How good is it for the money?

That's two very different questions. I think @JarredWaltonGPU is looking at the first whereas a lot of people responding to the review are looking at the second. I certainly can't afford a 5090 and might be looking at a 5060 when it comes out, but that doesn't mean if someone gave me a 5090 I'd be unhappy with it. Also, Jarred has pointed out that the resulting score is currently tentative as there is more testing being done. So at this point, the rating is based on "this is the best consumer card out there", not "this is the best consumer card most people should spend their money on" - that'll probably be around the 5070, if Nvidia's track record is anything to go by.

Mindstab Thrull
Value for money should take out one full point at LEAST.
 
Yes.

The only people using one of these for "AI" are home users messing around.

Companies with actual budgets use or lease these
Exactly, that's my point. This isn't an enterprise AI card … you aren't using a true AI card for game playing; does it even have DVI ports? You also aren't dropping the $10,000 that even a cheap true AI card will cost … or building the relevant system to support it. But you can mess around. Hence it's a low bar of entry for those interested.
 
Exactly, that's my point. This isn't an enterprise AI card … you aren't using a true AI card for game playing; does it even have DVI ports? You also aren't dropping the $10,000 that even a cheap true AI card will cost … or building the relevant system to support it. But you can mess around. Hence it's a low bar of entry for those interested.

The user base for that sort of thing is still very small, which was the point.

A regular home user is not dropping $2000 to "mess with AI". Someone with a large amount of disposable income might do this as a project, but that is the very definition of niche. This card is going to be purchased by PC gamers with large amounts of disposable income, or those willing to make very foolish financial decisions.


For the big-time AI stuff, you don't need to buy server equipment; in fact, most don't. You "rent" it via a subscription from a vendor where you pay per hour / unit of utilization. That cost is then rolled into your yearly OpEx and treated like any business expense, to reduce tax liability.
 
So, let me get this straight: the node is pretty much the reason we didn't see a generational leap in performance, similar to the one from 3090 to 4090?
It's part of the reason, for sure. 4NP (used on Blackwell B200 data center chips) is a decent improvement over 4N (used on Ada, Hopper, and consumer Blackwell). I'm not sure if N3B would be that much better for a GPU? Maybe that's why Nvidia didn't go there. Maybe 4NP is already basically at the level of N3B! I'm not entirely sure on that stuff, and it's not readily available information.

We are also well past the point where we get good scaling on power and such from newer nodes. That's why we've seen stuff like A100 at 700W, H100/H200 at up to 1000W (I think?), and now B200 at up to 1350W or whatever. For faster chips, companies are now having to push power limits further along with die sizes, and both of those are expensive approaches.

I'm very curious now as to where Nvidia goes with the next generation. Will it use a TSMC N2 variant, or can Samsung or Intel get something out that will be competitive? Doubtful, but not impossible.

We're also hitting CPU limits very hard. There's little scaling in performance from 4090 to 5090 at anything below 4K. Even at 4K, games aren't seeing big gains if they're pure rasterization titles. Which is partly why Nvidia is pushing RT, I think, as it's probably easier to double RT performance than to try to double rasterization performance.

If we had a "Ryzen 11 12950X3D" right now, with legitimately 30% faster single-threaded performance than the 9800X3D, I suspect 5090 performance would look better in a lot of scenarios as well.
 
  • Like
Reactions: valthuer
The 5090 is annoying to me. I am considering upgrading, but then I know that ~2 years from now there will definitely be a process improvement to 3nm, which will bring a major boost just from that.

The AI improvements in the 5090 are considerable, but actual full support for all of that is nowhere near, and who knows: by the time games actually start using all these new techniques, 5090 owners may find themselves in the same situation as 20-series owners - their RT capabilities finally being utilized, but the card too outdated to actually enjoy it.

Whatever the 5090 tries to do will be the way forward, one way or another. But you can just buy a better card 2 years from now that will have all that and more, and by then you'll finally start to see real games using these features, as opposed to the ~7 games now that barely use 30% of what this GPU is capable of.
 
The user base for that sort of thing is still very small, which was the point.

A regular home user is not dropping $2000 to "mess with AI". Someone with a large amount of disposable income might do this as a project, but that is the very definition of niche. This card is going to be purchased by PC gamers with large amounts of disposable income, or those willing to make very foolish financial decisions.


For the big-time AI stuff, you don't need to buy server equipment; in fact, most don't. You "rent" it via a subscription from a vendor where you pay per hour / unit of utilization. That cost is then rolled into your yearly OpEx and treated like any business expense, to reduce tax liability.
I think there are way more companies using consumer hardware to do AI related work than most people believe. Yes, training LLMs generally needs the data center stuff, and you can lease that. But research and development people trying to do inference workloads are for sure buying 4090/4080, and will also buy 5090. And sometimes, you want a card that can also do real-time graphics as well as AI — it may not be better at AI, but if it's good enough and can also run in your typical PC?

Why was the 4090 banned from sale to China? Because there were ways to make it work, not as an equivalent to the H100, but as something usable with the right software stack. There are images of racks of systems full of 4090 cards, and I'm sure they're not trying to do crypto mining on them these days. They're being used for AI research, because if you can get a $2,000 card anywhere close to matching Nvidia's $50,000+ data center cards? It becomes an interesting business model.

What can you do with 4-way RTX 5090 and 128GB of VRAM for training AI models? I don't know enough to say what's possible, but it's definitely more useful than 4-way 4090 with 96GB of VRAM. LOL
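For rough scale, here's a weights-only rule of thumb (it ignores activations, KV cache, and the several-fold extra that real training needs for gradients and optimizer state, so treat it as a loose ceiling):

```python
# Weights-only sizing rule of thumb: bytes per parameter at common precisions.
# Real training needs several times more memory than this; treat as a ceiling.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def max_params_billions(total_vram_gb, precision):
    return total_vram_gb / BYTES_PER_PARAM[precision]

for setup, vram_gb in (("4x 4090, 96GB", 96), ("4x 5090, 128GB", 128)):
    sizes = ", ".join(f"{p}: ~{max_params_billions(vram_gb, p):.0f}B params"
                      for p in BYTES_PER_PARAM)
    print(f"{setup} -> {sizes}")
```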