Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

We're also hitting CPU limits very hard. There's little performance scaling from the 4090 to the 5090 at anything below 4K, and even at 4K, pure rasterization titles aren't seeing big gains. That's partly why Nvidia is pushing RT, I think: it's probably easier to double RT performance than to double rasterization performance.

I don't know if I'm right, but, as a fellow forum poster once wrote, I'm getting the sense that this push for better graphics is hitting a wall of diminishing returns.
 
If you look, I do a lot of non-gaming tests as well. Yes, gaming is a big focus, but it's not the only focus. And I also have to consider the fact that, barring a 5090 Ti or Blackwell Titan, this will undoubtedly be the fastest GPU around for the next two years. How can the fastest card around, with new features, and one that will probably be sold out, warrant a 3-star score (as someone else suggested)? That's ludicrous to me.

This is an amazing piece of hardware, even if it's not 50% faster than the 4090 in every scenario. And that performance right now is clearly a product of early drivers. These are literally the first public(ish) drivers for the 5090. Nvidia internally can't test everything, and bugs and issues will slip through. The 9800X3D vs. 13900K results prove how messed up things can be at times. Do I dock half a star for what will inevitably be a relatively fleeting problem? Again, I don't think that's warranted.

(Intel's driver problems are a different story, because we've seen the same things again and again for over two years. New games come out, we get some oddities. But the 4090 and 4080 Super worked fine with these drivers and it was only Blackwell having issues, and not even consistently so. I anticipate fixes will come in the next month or so.)

As you rightly point out, value on this sort of hardware is incredibly subjective. If you ONLY care about games, with zero interest in AI? Sure, it's probably a 3.5-star card, because it's very expensive for relatively minor performance gains. Heavier RT testing will show larger gains, though, and that's what I'm working on right now.

Framegen and MFG are going to be very subjective as well. My personal experience with framegen is that you need a base framerate of 40~45 for it to feel "good" — meaning 80~90 FPS after framegen (and without framegen you'd probably be getting 55~65 FPS because FG only adds ~50% on Nvidia). If that same rule of thumb applies, we'll need MFG to provide 160~180 FPS, which means you'll want a 240 Hz display for it to be "useful," even on lower tier 50-series GPUs. I don't think a 5070 is going to do 4K at 160+ FPS without performance mode upscaling, though... but DLSS Transformers maybe makes that less of a concern.
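For anyone who wants to sanity-check that rule of thumb, here's the back-of-the-envelope math as a quick sketch (the ~25% frame generation render overhead is an assumed figure picked so that 55~65 FPS native lines up with a 40~45 FPS rendered base; it's not something I've measured):

```python
# Rough frame generation math. The 25% FG overhead is an assumption chosen to
# make ~55-65 FPS native correspond to a ~40-45 FPS rendered base; real
# overhead varies by game and GPU.

def displayed_fps(native_fps: float, multiplier: int, fg_overhead: float = 0.25) -> float:
    """Estimate the displayed framerate with frame generation enabled.

    native_fps: framerate with frame generation off
    multiplier: 2 for regular framegen, 4 for MFG
    fg_overhead: assumed fraction of render throughput lost to running FG
    """
    base_fps = native_fps * (1.0 - fg_overhead)  # actually rendered frames per second
    return base_fps * multiplier                 # rendered + generated frames presented

for native in (55, 65):
    print(f"{native} FPS native -> ~{displayed_fps(native, 2):.0f} FPS with 2x FG, "
          f"~{displayed_fps(native, 4):.0f} FPS with 4x MFG")
# Roughly 82/165 FPS and 98/195 FPS under these assumptions, which is why MFG
# only really starts to make sense with a 240 Hz display.
```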

Anyway, initial testing out of the way, I'm now poking at MFG and DLSS 4 stuff to try to determine what is and isn't a good experience. Stay tuned... that page six needs a lot of additional testing and data! 🙃
Excited to see the results, if only so one can roughly extrapolate to the lower tier cards. As for the star ratings: I never like overall scores like this; I prefer every category rated separately, as it gives a more nuanced conclusion at a glance. For what it's worth, something can still be the best and yet suck in more than a few ways. Cost and value will always be subjective, which is another reason I prefer category-style rating systems. Yes, I understand everything I want is in TFA. Anyways..

As a gamer, I'm most impressed with the cooler, even though it brings some caveats with overall system cooling. I do believe this may have more to do with the overall TGP getting above a critical point rather than the cooler itself; we'll know once partner cards are reviewed. The graphics performance is, well, kind of expected considering it's on the same process as the 40 series. Still, it's a bit of a letdown. The price is disappointing, considering the overall increase in performance. Unfortunately, if one NEEDS that little uplift, it's the only game in town.

As for non-gaming workloads, I feel this is where all of Blackwell will really shine compared to the prior gen. We are in a market now where gaming is not the priority. These GPUs are a lot like AMD's X3D processors: designed first and foremost for data center (and AI) workloads, and they just happen to also be decent at 3D rendering for gaming. In short, we're kinda getting the scraps. It used to be the other way around. I liked that better.

On MFG and DLSS in general? I find it very VERY hit and miss still, both in quality and performance. It varies by title (which is the real problem: you never know what any driver or game update may break). The issues I see span multiple systems and are consistent across those systems. I'm not optimistic, but I will be happy to be wrong.
 
I don't know if I'm right, but, as a fellow forum poster once wrote, I'm getting the sense that this push for better graphics is hitting a wall of diminishing returns.

Whoever he was... he was right.

After reading this thread I am definitely not going to be rushing out to get a 5090.

27% uplift? I'm good with the 4090 for now. I will upgrade to the 5090 later this year when they are more readily available but don't see myself getting one before summer.

Do you think people won't buy this on "0% interest if paid in full" promotions???

Newegg is offering $1350 in trade for my 4090. That would honestly be the only reason I'd upgrade. Not having to deal with CL or eBay and just paying the ~$700 difference is a win-win to move up to the next gen, regardless of the small increase in performance.
 
I miss the days when the 80-class card would be around $700 and the 80 Ti/90 around $1,000, and the gen-to-gen increase would be around 70%. Here the performance per dollar is about constant.

The ray tracing improvement is good though; bandwidth, memory, and RT all seem solid.

I can't stop remarking how good the AMD cards are, and how much better than Nvidia's for CAD.
 
As stated, the score is tentative based on other testing that I’m still working on completing. A 4.5 star product means it delivers some impressive gains in performance as well as features and technology, not just that it’s a great value. It’s not always that much faster than a 4090 right now, due in part to the early nature of the drivers. IMO I suppose — we need to see how things develop.

Obviously, at $2000 this is going to be way out of reach of most gamers. And I really don’t think Nvidia cares that much. There will be plenty of businesses and professionals and AI researchers that will pay $2000 or more and consider themselves lucky.

Full ray tracing is definitely viable on the 5090. Coupled with the improved quality of DLSS transformers, and maybe (?) MFG, you will get more than a 30% increase in performance compared to the 4090. There will be scenarios, legitimate scenarios, where a 5090 is twice as fast as a 4090. Mostly in AI, but probably close in some full RT games as well. It’s okay by me if most of those scenarios won’t be in games.

I’m definitely not buying the MFG hype, though. It’s fine that it exists, but just because you can interpolate three frames instead of one doesn’t mean you’ve actually doubled your performance. Now, if we were combining something like Reflex 2 in-painting and projection and time warp with frame generation? That’s what we actually need to see.

Render frame one, then sample user input while projecting frames two, three, and four. Render frame five, again sampling input and projecting the next three frames… That would actually make a game look and feel responsive, rather than just smoothing frames. And it still wouldn’t be exactly the same as fully rendering every frame.
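In rough pseudocode, the cadence I'm imagining looks something like this. To be clear, it's purely a hypothetical sketch of the idea above, not how DLSS frame generation or Reflex 2 actually work today, and every function in it is a made-up placeholder:

```python
# Hypothetical render-and-project loop: fully render one frame, then present
# three projected frames that are warped toward freshly sampled user input.
# All callbacks are placeholders supplied by a hypothetical engine.

PROJECTED_PER_RENDERED = 3  # render frame 1, project frames 2-4, render frame 5, ...

def present_loop(render_frame, sample_input, project_frame, present, rendered_frames=4):
    for _ in range(rendered_frames):
        frame = render_frame()                # full render (frames 1, 5, 9, ...)
        present(frame)
        for _ in range(PROJECTED_PER_RENDERED):
            pose = sample_input()             # keep polling input between real renders
            # Warp/in-paint the last rendered frame toward the new camera pose,
            # so projected frames respond to input instead of only smoothing motion.
            present(project_frame(frame, pose))

# Toy run with dummy callbacks, just to show the cadence: 4 rendered frames
# produce 16 presented frames.
shown = []
present_loop(lambda: "rendered", lambda: "pose", lambda f, p: "projected", shown.append)
print(len(shown))  # 16
```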

I suspect research into doing exactly that sort of frame generation is already happening and could arrive sooner rather than later. Because DLSS4 and Reflex 2 already get us pretty close to doing exactly that.

Overall, 78% more bandwidth, 33% more VRAM capacity, 30% more raw compute, FP4 support, an impressive cooling solution, twice the ray/triangle intersections per cycle, and some of the other neural rendering techniques? Yeah, that’s impressive to me and worthy of a 4.5 star score.
Impressive, yes; useful in any meaningful way? Not so much. AI is junk: fabricated fake images and fake porn, plagiarized text for the intellectually lazy, utterly pointless.
Frame rates over 60 Hz are simply dick-waving exercises, so what's the point of this space heater?
 
I miss the days when the 80-class card would be around $700 and the 80 Ti/90 around $1,000, and the gen-to-gen increase would be around 70%. Here the performance per dollar is about constant.

The ray tracing improvement is good though; bandwidth, memory, and RT all seem solid.

I can't stop remarking how good the AMD cards are, and how much better than Nvidia's for CAD.
Can you elaborate on that CAD subject? I know it's off-topic, but please: why do you think they're better for such tasks? It's so rarely tested.
 
And secondly, how come a 500W GPU can be air cooled, but nerds on forums will claim you absolutely NEED water cooling for a 125W Ryzen, 'cause "high-end"? Yeah, I know 125W means more actual draw.
Thermal density. CPU dies are much smaller than large GPU dies, so there's less area for thermal transfer. And even within those smaller dies, the heat is generated disproportionately in certain small regions. So for a given amount of power/heat, your CPU hotspot is going to get hotter, faster, compared to a GPU.

Plus some other stuff, like GPUs having direct-die cooling like someone else mentioned.
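To put some very rough numbers on that (the die sizes and power figures below are just ballpark assumptions for illustration, not measured values):

```python
# Approximate heat flux: watts divided by die area. Numbers are ballpark
# figures for illustration only.

parts = {
    "CPU compute chiplet (~70 mm^2, ~150 W)": (150, 70),
    "RTX 5090 GPU die (~750 mm^2, ~575 W)": (575, 750),
}

for name, (watts, area_mm2) in parts.items():
    print(f"{name}: {watts / area_mm2:.2f} W/mm^2")

# Roughly 2.1 W/mm^2 for the CPU chiplet versus about 0.8 W/mm^2 for the GPU,
# and the CPU concentrates its heat further into a few core hotspots on top of that.
```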
 
True, it is a barrier, but it does not matter: the IHS has no idea what's taking the heat off it.

The IHS is the limiting factor; that's why some people delid. You can transfer heat more effectively with the cooler directly on the CPU die, but the issue is you may crush the CPU.

The other issue is shipping a delicate part like that to the end user and hoping they don't crush it.

In a nutshell, it would have to be assembled to the motherboard at the factory for most end users to get the same benefit as a GPU, and that would take away a lot of the ability to upgrade the CPU while getting similar cooling.
 
Thermal density. CPU dies are much smaller than large GPU dies, so there's less area for thermal transfer. And even within those smaller dies, the heat is generated disproportionately in certain small regions. So for a given amount of power/heat, your CPU hotspot is going to get hotter, faster, compared to a GPU.

Plus some other stuff, like GPUs having direct-die cooling like someone else mentioned.

This as well, explained much better than I could lol.
 
Thanks for taking the time to reply.

So, let me get this straight: the node is pretty much the reason we didn't see a generational leap in performance, similar to the one from 3090 to 4090?
A better node is typically where we'd see most of a generational uplift happen, allowing for greater transistor density and thus higher core counts/clocks. But, as stated, CPU bottlenecks are also part of the issue. Hope that helps.
 
The user base for that sort of thing is still very small, which was the point.

A regular home user is not dropping $2000 to "mess with AI". Someone with a large amount of disposable income might do this as a project, but that is the very definition of niche. This is going to be purchased by PC gamers with large amounts of disposable income, or those willing to make very foolish financial decisions.


For the big-time AI stuff, you don't need to buy server equipment; in fact, most don't. You "rent" it via a subscription system from a vendor where you pay per hour / unit of utilization. That cost is then rolled into your yearly OpEx and used like any business expense, to reduce tax liability.
Depends on what you are trying to do. You wouldn’t use this card for big-time anything in AI… it would make no sense to do so because it’s woefully lacking in power for that.

But imagine you are trying to train a small but highly accurate LLM to prove a concept with lower requirements, or even local execution for hosting an LLM, or doing something novel… While you could rent it, you could likely use local hardware, as you say, to mess around. There are a million reasons, or ways, I could think of to use it. If you are a startup or a corporation, sure, rent the equipment to avoid the capex of setting up a big-budget project, but that’s not the use case here, and if you think renting is cheaper than buying hardware in this use case, you are wrong. No one is talking about building out or running a multi-tier data center or a large-scale project, or comparing it to that use case. Either way, have you done it? Cloud/remote hosting is not cheaper… it’s just more convenient for scale, operational maintenance, etc., because you are essentially being provided hardware as a service. Cheaper it is not. It may have a lower upfront cost depending on circumstances.
 
Thermal density. CPU dies are much smaller than large GPU dies, so there's less area for thermal transfer. And even within those smaller dies, the heat is generated disproportionately in certain small regions. So for a given amount of power/heat, your CPU hotspot is going to get hotter, faster, compared to a GPU.

Plus some other stuff, like GPUs having direct-die cooling like someone else mentioned.
Only true with 250+ watt CPUs... up to 200W, a CPU can be air cooled.
 
The Nvidia GeForce RTX 5090 Founders Edition begins its reign as the fastest GPU of this generation, but there are a few mishaps along the way. We're also still working on additional testing, so this is currently a work in progress.

Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles : Read more
OK, now the big question... when will sales start? I.e., when should we start hitting refresh like mad on the Nvidia, Best Buy, and Newegg sites, while cursing all of the scalpers?
 
"The RTX 5090 is the sort of GPU that every gamer would love to have"
Hah, not 1440p/1080p gamers, and also not non-X3D gamers at any resolution, and not FSR/DLSS/XeSS gamers, and not RTX 40-series gamers, and not all the other gamers?)

"new architecture, which requires different driver tuning to extract maximum performance. Blackwell isn't just Ada Lovelace with FP4 support, in other words, and the drivers at times feel a bit raw."
Okay, we need a deep dive into the architecture.

4.5 stars, really? For a 50% chance of regression at any resolution, if you don't have a 9800X3D, and for over $2,000?
This is definitely not a gaming GPU. And even for AI, it comes with a limited memory pool.
"The RTX 5090 is the sort of GPU that every gamer would love to have"
Yeah... for those with a rich dad, an open bench setup, and a big A/C.
 
You're going to cherry-pick one test? lol no. It is unequivocally faster. Giving it at best 3 stars, and probably more like 2.5 in your view, is stupid. And I say that as someone who has zero intention of buying this card.
Hence why that entire comment was prefaced with the word "personally". You can rate it however you would like, and you also glossed over the other reasons, but to each their own.
 
I think there are way more companies using consumer hardware to do AI related work than most people believe.
No, not really. Consumer products used for AI are just home users or self-employed contractors doing GenAI: artists or other content creators.

Companies that want to do that, or more, just rent it from a provider. Basically, AIaaS is in full swing now; I'm getting pelted dozens of times a day with vendors trying to sell us on their services. While our executive team and business units are chomping at the bit, our CIO has opted for a slow and steady approach.

Nobody, and I mean nobody, is running a business on a bunch of thrown-together 4090s. The liability alone would have the lawyers screaming at us, not to mention CAPEX vs. OPEX. Medium and larger businesses just do not operate that way. Taking on major liability to save a measly 20 or 30k is just really bad business. Instead you contract it out to a service provider, complete with SLAs and legal obligations. That service provider will own and operate the physical infrastructure, and they aren't using 4090s.
 
MFG matters most for lower-tier GPUs, starting with the 5080, so exploring that in your 5080 review next week would, I feel, have more impact for regular gamers, even though the 5090 garners more attention as a halo product.
MFG will inherently be worse on lower tier products due to needing more sacrifices to get an acceptable minimum frame rate. The biggest problem with frame generation in general is how different the experience can be from game to game. A reviewer can't really make any general statements regarding frame generation due to that fact. It's certainly a valuable technology to investigate, but it doesn't really fit in well with video card reviews.
 
I don't see this as a plus until you test the card on Gen4 PCIe and compare.
TechPowerUp already has a review of this up. PCIe 5.0 makes zero difference. While we're at it, PCIe 4.0 made zero difference as well at 4K, with PCIe 3.0 only 2% slower than PCIe 5.0. PCIe 2.0 barely slows things down, with a 6% loss at 4K. Just don't use a PCIe 1.1 slot with this card.
 
No, not really. Consumer products used for AI are just home users or self-employed contractors doing GenAI: artists or other content creators.

Companies that want to do that, or more, just rent it from a provider. Basically, AIaaS is in full swing now; I'm getting pelted dozens of times a day with vendors trying to sell us on their services. While our executive team and business units are chomping at the bit, our CIO has opted for a slow and steady approach.

Nobody, and I mean nobody, is running a business on a bunch of thrown-together 4090s. The liability alone would have the lawyers screaming at us, not to mention CAPEX vs. OPEX. Medium and larger businesses just do not operate that way. Taking on major liability to save a measly 20 or 30k is just really bad business. Instead you contract it out to a service provider, complete with SLAs and legal obligations. That service provider will own and operate the physical infrastructure, and they aren't using 4090s.
China wasn't stockpiling 4090s before the US ban because they were concerned Chinese gamers weren't going to be able to buy 4090s.
https://www.tomshardware.com/news/chinese-factories-add-blowers-to-old-rtx-4090-cards
 
But research and development people trying to do inference workloads are for sure buying 4090/4080, and will also buy 5090.
This is a big misconception: you don't need a super fast GPU for inference if you aren't selling that inference. And even in that case, the first and only thing that matters for inference is VRAM capacity, and that's it. There is no point in a super fast machine if it can only go a couple of miles on a full tank.
Yes, these people will buy them by the pallet and put 4-8 of them in a chassis; the dual-slot design will help a lot with that. This is analogous to the mining fever. But it has nothing to do with ordinary people, and it doesn't mean the GPU is worthy of 5 stars. You forget that AI has not yet brought profit to anyone, including the buyers of H100/H200 cards who are now renting out their capacity at a loss; you can rent an H100 for a year at the price of $2000. For home inference use this is the stupidest purchase: you really just need a GPU with at least 20GB of VRAM, and that's it. They are all supported by llama.cpp and work at about the same speed.
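As a rough sketch of why VRAM capacity is the gating factor for local inference (the bytes-per-parameter figures are approximate, and the KV cache and runtime overhead are ignored, so real usage is higher):

```python
# Approximate weight memory for local LLM inference at different quantization
# levels (llama.cpp-style). KV cache and runtime overhead are not counted.

BYTES_PER_PARAM = {"FP16": 2.0, "Q8": 1.0, "Q4": 0.55}  # approximate values

def weights_gb(params_billions: float, quant: str) -> float:
    # billions of params * bytes per param = gigabytes of weights
    return params_billions * BYTES_PER_PARAM[quant]

for label, size_b in [("8B", 8), ("32B", 32), ("70B", 70)]:
    summary = ", ".join(f"{q}: ~{weights_gb(size_b, q):.0f} GB" for q in BYTES_PER_PARAM)
    print(f"{label} model -> {summary}")

# A ~32B model at 4-bit quantization already needs on the order of 18 GB just
# for weights, which is why 20GB+ of VRAM matters far more than raw speed here.
```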
 