Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

As stated, the score is tentative, based on other testing that I’m still working on completing. A 4.5 star product means it delivers some impressive gains in performance as well as features and technology, not just that it’s a great value. It’s not always that much faster than a 4090 right now, due in part to the early nature of the drivers. That's my take, anyway; we need to see how things develop.

Obviously, at $2000 this is going to be way out of reach of most gamers. And I really don’t think Nvidia cares that much. There will be plenty of businesses and professionals and AI researchers that will pay $2000 or more and consider themselves lucky.

Full ray tracing is definitely viable on the 5090. Coupled with the improved quality of DLSS transformers, and maybe (?) MFG, you will get more than a 30% increase in performance compared to the 4090. There will be scenarios, legitimate scenarios, where a 5090 is twice as fast as a 4090. Mostly in AI, but probably close in some full RT games as well. It’s okay by me if most of those scenarios won’t be in games.

I’m definitely not drinking the MFG Kool-Aid, though. It’s fine that it exists, but just because you can interpolate three frames instead of one doesn’t mean you’ve actually doubled your performance. Now, if we were combining something like Reflex 2 in-painting, projection, and time warp with frame generation? That’s what we actually need to see.

Render frame one, sampling user input while projecting frames two, three, and four. Then render frame five, again sampling user input while projecting the next three frames… That would actually make a game look and feel responsive, rather than just smoothing frames. And it still wouldn’t be exactly the same as fully rendering every frame.
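A loop like the one just described could be sketched roughly as follows. This is purely illustrative: `render_frame` and `project_frame` are hypothetical stand-ins, not real DLSS or Reflex API calls, and the point is only that projected frames still sample fresh user input, unlike pure interpolation.

```python
# Hypothetical render-then-project frame loop.
# render_frame() and project_frame() are illustrative stand-ins,
# not real DLSS/Reflex API calls.

def render_frame(frame_id, user_input):
    """Stand-in for a fully rendered frame."""
    return {"id": frame_id, "rendered": True, "input": user_input}

def project_frame(base_frame, frame_id, user_input):
    """Stand-in for a projected (extrapolated) frame that still
    samples fresh user input, unlike pure interpolation."""
    return {"id": frame_id, "rendered": False,
            "base": base_frame["id"], "input": user_input}

def frame_loop(num_frames, sample_input, projected_per_rendered=3):
    """Render one frame, then project the next N from it, repeating."""
    frames = []
    frame_id = 1
    while frame_id <= num_frames:
        base = render_frame(frame_id, sample_input(frame_id))
        frames.append(base)
        frame_id += 1
        for _ in range(projected_per_rendered):
            if frame_id > num_frames:
                break
            frames.append(project_frame(base, frame_id, sample_input(frame_id)))
            frame_id += 1
    return frames

# Example: 8 frames -> frames 1 and 5 are rendered, the rest projected,
# but every frame (projected or not) sees current input.
frames = frame_loop(8, sample_input=lambda i: f"input@{i}")
print([f["id"] for f in frames if f["rendered"]])  # [1, 5]
```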

I suspect research into doing exactly that sort of frame generation is already happening and could arrive sooner rather than later. Because DLSS4 and Reflex 2 already get us pretty close to doing exactly that.

Overall, 78% more bandwidth, 33% more VRAM capacity, 30% more raw compute, FP4 support, an impressive cooling solution, twice the ray/triangle intersections per cycle, and some of the other neural rendering techniques? Yeah, that’s impressive to me and worthy of a 4.5 star score.
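Those bandwidth and VRAM percentages line up with back-of-the-envelope math from the commonly cited spec-sheet numbers (treat the figures below as approximate and worth double-checking against the official spec pages):

```python
# Rough uplift math from commonly cited spec-sheet numbers
# (assumed figures, not authoritative).
rtx_4090 = {"bandwidth_gbps": 1008, "vram_gb": 24}  # 384-bit GDDR6X @ 21 Gbps
rtx_5090 = {"bandwidth_gbps": 1792, "vram_gb": 32}  # 512-bit GDDR7 @ 28 Gbps

bw_uplift = rtx_5090["bandwidth_gbps"] / rtx_4090["bandwidth_gbps"] - 1
vram_uplift = rtx_5090["vram_gb"] / rtx_4090["vram_gb"] - 1

print(f"Bandwidth: +{bw_uplift:.0%}")   # +78%
print(f"VRAM:      +{vram_uplift:.0%}") # +33%
```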
 
No one takes into account "value" in any product that's the best of the best that you can get. And that goes for anything in any category.
Unless that product is actually mass-produced and for sale. I see this argument a lot and I don't agree with it. This is a fracking GPU, not a $10 million hypercar or a one-of-a-kind bauble. It is a consumer-level product. Will it be purchased en masse? Absolutely. Some people may even be able to justify it for AI workloads and such, but I'd bet one thing: they will ALL be lamenting the price and questioning the value. I spend tens of thousands of dollars on tooling each year, sometimes hundreds of thousands. One year it was $1.3 million, for a company I didn't even work for. It's a business; personal finances are a business too. Price and value will always matter.
 
I'm mostly with you here, Jarred, but do those fancy numbers translate to an incredible uplift in on-screen performance, and is the value proposition there? Mostly not. Some aspects of the 5090 are indeed impressive, but taken as a whole, there is nothing special about it. We can, however, agree to disagree. I got the relevant information from your excellent testing and write-up (thank you for that), but I do not believe, at least for now, that it deserves a 4.5 rating. The further testing may change my mind.
 
At what cost, pal? If you take value into consideration, this is arguably the worst product release by Nvidia in the last four years.
In volume, the 5090 compared to the overall GPU market in its entirety is ridiculously small. This is like going on a car forum and complaining about Lamborghini prices, or a watch forum and complaining about Rolex. People were yelling and telling me the same thing back when I got a GTX 280 and 295 almost 20 years ago: "Why are you buying that, man? It's so terrible, no value." Cool, bro. "Value" is not something people think about at this level, and honestly it shouldn't be. At this level, you either want it or you don't, and there's nothing wrong with being in either of those camps.
 
Okay Jarred... you are shilling at this point.

4.5/5 for a $2,000 GPU that barely gets 27% more performance?

While consuming 100W more than a 4090?

And offering the same cost-per-frame value as a 4090 from two years ago?

Flagship or not, this is horrible.

Not to mention one of the worst generational uplifts Nvidia has ever delivered on a flagship... 27%...
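The cost-per-frame complaint can be checked with rough arithmetic. Assuming launch MSRPs of $1,599 (4090) and $1,999 (5090) and the ~27% average uplift cited in this thread (both are assumptions from the discussion, not measured data), dollars per unit of relative performance come out nearly identical:

```python
# Cost per unit of relative performance; MSRPs and the 27% uplift
# are assumptions taken from this thread, not measured data.
msrp_4090, msrp_5090 = 1599, 1999
perf_4090, perf_5090 = 1.00, 1.27  # relative average performance

cost_per_perf_4090 = msrp_4090 / perf_4090
cost_per_perf_5090 = msrp_5090 / perf_5090

print(round(cost_per_perf_4090))  # 1599
print(round(cost_per_perf_5090))  # 1574
```

Under these assumptions the two cards land within a couple of percent of each other, which is exactly the "same value as two years ago" complaint.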
People have their own subjective priorities and limits, whether power consumption, cost, or the performance jump from the last generation. A reviewer would be doing his readers a disservice by baking those priorities into the final rating, because, unlike the ingredients in a real cake, they can be applied after the fact. One thing we can do on our own is look at the given rating and then apply our own limits for power or cost. What we can't do on our own is run a million tests and get raw results. That is Jarred's job, and he does it well.
 
I was planning on some upgrades with the 5090 release; however, as it stands, I cannot justify the cost. $2,000 to get a ~25% raster performance increase over the 4090 with day-one drivers is too hard a pill to swallow for me. If the 5090 were ~25% faster at $1,600, I would do it. If it were ~40% faster at $2,000, I would do it. At ~25% and $2,000, it seems like I would be getting no real uplift in performance beyond a linear increase in cost. I guess it's going to be my 3080 for another generation unless drivers improve and the 5090 falls in line with my expectations. 3.5 stars for me as performance stands with current drivers.
 
A good take, and yes, Jarred does this well. I still do not agree with the 4.5 but that's OK. We don't have all the metrics yet.
 
I might quibble a tiny bit on some of this, but only between a 4 and a 4.5. People focus on just one thing and make their case on it, but all of the factors need to be weighed together. Also, price is a big one for end users, but should it be for reviewers? You are saying whether the product is good or not, and really, what else is out there that compares to this? I guess you could argue the 4090 offers better perf per dollar, but that really depends on what you are doing. I also think people need to realize that 25% more retail cost for 30-40% more work throughput is a big win. Over the lifetime of the unit, that will make money for anyone working in Blender or anything else heavily GPU-bound.

One thing I might say is that a score now feels premature. The drivers not being ready is a bit concerning to me, and without the DLSS results for gaming, at least, the picture is iffy. It's hard to really judge things without all of that thrown in. The RT results we have seen suggest this is where the doors get blown off, so I am going to wait and see whether the 4.5 is earned or it should be a 4.

Either way it's a quibble. Solid review, and good hard work. Everyone is a critic. Often including me :)

ps if I send you a 3080 would you add it to your review haha
 

From your experience, is MFG something that could become available for RTX-40 series, through separate mods for each game?
 
Even if this was the exact same price as previous gen, if you look at pure technology:
30% increase in cores
30% increase in power
<30% increase in performance?
Where is the innovation or tech improvement?

I wouldn't really be mad if this were a mid-cycle refresh sold as a 4090 Ti. If Nvidia had released this product under the 4090 Ti name, nobody would bat an eye. However, it's going to be another 2-3 years before the next generation.

I guess nobody should have been expecting much, since node improvements have been getting less and less noticeable over the years, and Nvidia didn't even bother with TSMC's 3nm this time.
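The scaling complaint above can be quantified as performance per watt. Assuming rated board powers of 450W (4090) and 575W (5090) and the ~27% uplift discussed in this thread (assumed figures, not measurements), efficiency comes out essentially flat generation-on-generation:

```python
# Perf-per-watt sketch; TGP figures and the 27% uplift are
# assumptions based on numbers cited in this thread.
tgp_4090, tgp_5090 = 450, 575       # rated board power, watts
perf_4090, perf_5090 = 1.00, 1.27   # relative performance

ppw_4090 = perf_4090 / tgp_4090
ppw_5090 = perf_5090 / tgp_5090

power_increase = tgp_5090 / tgp_4090 - 1
ppw_change = ppw_5090 / ppw_4090 - 1

print(f"Power increase: {power_increase:+.1%}")  # +27.8%
print(f"Perf/W change:  {ppw_change:+.1%}")      # -0.6%
```

In other words, under these assumptions nearly all of the extra performance is bought with extra power, which is the "where is the innovation?" point in numbers.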
I think they are in a tick-tock kind of cycle now. This is really the 4090, second gen. Not a huge change for anything but the AI side; everything else was just packed in. We'll have to wait for the full DLSS results, though.
 
Did you bench on an open test bench or inside a PC case? I'm asking because there are some major concerns about overheating, since the CPU cooler gets choked by the 575W of heat dissipated inside a closed PC case. If you have an air HSF on your CPU, then you are screwed.

I got downvoted to h@#$ and back on Reddit when I said that pairing a 5090 FE with a 13900K cooled by an NH-D15 or another high-end air cooler isn't going to fly. Seeing that it already cooks a 7800X3D means it's tough sledding with a much hotter Raptor Lake chip.
 
Products like this should receive 3 stars max. Great performance, but at what cost? Is it the right direction for power draw to increase with each iteration? Is it worth chasing maximum performance every time? For me, it would have been perfect if the 5000 series had stayed at the same TDP as the previous one, meaning a better design and a better GPU, with an understandably smaller performance increase over the 4000 series. And then the 6000 series could even reverse direction: higher performance with a drop in TDP.

And secondly, how come a 500W GPU can be air-cooled, but nerds on forums will claim you absolutely NEED water cooling for a 125W Ryzen because "high end"? Yeah, I know 125W TDP means more actual draw.

Short answer is form factor.

Long answer,

All PC cooling is air cooling, competitive liquid-nitrogen runs notwithstanding. AIO and custom-loop "water cooling" only uses water as a heat transport; it's still air cooling. Air cooling uses heat exchangers to exchange heat with ambient air, and the speed and amount of heat that can be exchanged are based on the total radiative surface area, the total volume of air being moved, and the difference in temperature between them.

The amount of radiator you can stick on a CPU socket is very limited by the design of the socket and board. Water cooling uses water to transport the heat to a much bigger radiator equipped with larger fans, which means more surface area and more airflow.

An example is the Corsair XR7 360mm, a really thicc radiator.

https://www.corsair.com/us/en/p/cus...r7-360mm-water-cooling-radiator-cx-9030005-ww

You can then equip three to six 120mm fans for insane amounts of airflow.
Furthermore, you can put multiple radiators in a system for even more total radiative surface area and that much more airflow. Instead of one or two CPU fans screaming at high RPM, you can have three, five, six, or ten spinning at low RPM.


Now, as for this GPU: it uses a flow-through design, the same scheme as the radiator I posted. You put a radiative surface in the path and blow air straight across it without any restriction or redirection of the airflow. These fans suck air in from one side of the card, blow it across the radiative surfaces, and push it out the other side, right at the CPU. This is really bad if the CPU is using a tower air cooler: the tower cooler's ability to exchange heat with the air drops because the temperature difference is lower (the incoming air is warmer). If you are using an AIO or custom-loop solution, it won't matter, since that radiator is placed at a different location and uses its own flow-through design.
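The radiator argument boils down to Newton's law of cooling, Q = h·A·ΔT. A toy calculation (the coefficient, areas, and temperatures below are made-up illustrative values, not measurements of any real cooler) shows why both more surface area and cooler intake air help:

```python
# Toy heat-exchange model: Q = h * A * dT (Newton's law of cooling).
# h, the areas, and the temperatures are made-up illustrative values.

def heat_dissipated(h, area_m2, t_surface, t_air):
    """Watts removed for a given convection coefficient, fin area,
    and surface-to-air temperature difference."""
    return h * area_m2 * (t_surface - t_air)

h = 50.0  # W/(m^2*K), assumed effective convection coefficient

# Small tower-cooler fin stack breathing GPU-warmed air (30 C intake):
tower = heat_dissipated(h, area_m2=1.0, t_surface=60, t_air=30)

# Larger 360mm radiator breathing fresh ambient air (25 C intake):
rad_360 = heat_dissipated(h, area_m2=3.0, t_surface=60, t_air=25)

print(tower)    # 1500.0 (watts)
print(rad_360)  # 5250.0 (watts)
```

Tripling the area triples the heat removed at the same ΔT, and feeding a tower cooler pre-warmed flow-through exhaust raises `t_air`, which directly shrinks ΔT and cooling capacity; that is exactly the concern with a 575W card dumping heat at an air-cooled CPU.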
 
Amazing, but expected as far as I'm concerned. I don't think there is much need to compare it against anything, including AMD's new cards. It was brilliant to pair it with the 13900K, with crazy interesting results. I'll read it a few more times to glean more details. 👍
It's about time one of these big tech sites finally did a direct comparison between the 9800X3D and something like a 13900K at 4K ultra settings. Granted, they've only reviewed raster performance so far; we really need to see what DLSS + FG does to the scores between these CPUs with the 5090. It looks like that part is coming, and I will be anxiously awaiting it.

It does look like the 3D V-Cache heavily benefited Baldur's Gate 3 and MS Flight Sim 2024 in those results, but interestingly, the rest of the games went back and forth.
 
Hmm, this review is about what I expected, though the card's performance is way under what it should be. That memory bus increase is ridiculous; 512-bit GDDR7 is monstrous. The per-unit performance increase, combined with more units and more memory bandwidth, should have put it in the +40-50% range over the previous generation.
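The "512-bit GDDR7 is monstrous" point is easy to verify with the standard bandwidth formula, assuming the commonly cited per-pin rates of 28 Gbps for the 5090's GDDR7 and 21 Gbps for the 4090's GDDR6X:

```python
# Memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
# Per-pin rates below are commonly cited figures; check the spec sheets.

def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

bw_4090 = bandwidth_gbs(384, 21)  # 384-bit GDDR6X
bw_5090 = bandwidth_gbs(512, 28)  # 512-bit GDDR7

print(bw_4090)                         # 1008.0 GB/s
print(bw_5090)                         # 1792.0 GB/s
print(f"+{bw_5090 / bw_4090 - 1:.0%}") # +78%
```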

Now we know why Nvidia instructed reviewers to "review it using special guidelines that favor us" with the whole DLSS FG crap.

I'm a lot more interested in where the 5060 / 5070 and 5080 fall.
 
Obviously, at $2000 this is going to be way out of reach of most gamers. And I really don’t think Nvidia cares that much. There will be plenty of businesses and professionals and AI researchers that will pay $2000 or more and consider themselves lucky.
So who is the target audience for this class of card, which once topped out at $600? If it's the millionaires who don't care, and it were priced at $20,000, would that affect your 4.5/5 rating?!
 
A GPU that costs $2,000 gets 4.5/5 stars while the reviewer acknowledges that even though the price puts it out of reach of most gamers, that won't affect the rating. So is it logical to assume Tom's Hardware is also targeting a millionaire-class audience?