Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

We're also hitting CPU limits very hard. There's little performance scaling from the 4090 to the 5090 at anything below 4K, and even at 4K, pure rasterization titles aren't seeing big gains. That's partly why Nvidia is pushing RT, I think: it's probably easier to double RT performance than to double rasterization performance.

I don't know if I'm right, but, as a fellow forum poster once wrote, I'm getting the sense that this push for better graphics is hitting a wall of diminishing returns.
 
If you look, I do a lot of non-gaming tests as well. Yes, gaming is a big focus, but it's not the only focus. I also have to consider that, barring a 5090 Ti or Blackwell Titan, this will undoubtedly be the fastest GPU around for the next two years. How can the fastest card around, one with new features that will probably be sold out, warrant a 3-star score (as someone else suggested)? That's ludicrous to me.

This is an amazing piece of hardware, even if it's not 50% faster than the 4090 in every scenario. And that performance right now is clearly a product of early drivers. These are literally the first public (ish) drivers for 5090. Nvidia internally can't test everything, and bugs and issues will slip through. The 9800X3D vs 13900K results prove how messed up things can be at times. Do I dock half a star for what will inevitably be a relatively fleeting problem? Again, I don't think that's warranted.

(Intel's driver problems are a different story, because we've seen the same things again and again for over two years. New games come out, we get some oddities. But the 4090 and 4080 Super worked fine with these drivers and it was only Blackwell having issues, and not even consistently so. I anticipate fixes will come in the next month or so.)

As you rightly point out, value is incredibly subjective when you're looking at this sort of hardware. If you ONLY care about games, with zero interest in AI? Sure, it's probably a 3.5-star card, because it's very expensive for relatively minor performance gains. Heavier RT testing will show larger gains, though, and that's what I'm currently working on.

Framegen and MFG are going to be very subjective as well. My personal experience with framegen is that you need a base framerate of 40~45 for it to feel "good" — meaning 80~90 FPS after framegen (and without framegen you'd probably be getting 55~65 FPS, because FG only adds ~50% on Nvidia). If that same rule of thumb applies, we'll need MFG to provide 160~180 FPS, which means you'll want a 240 Hz display for it to be "useful," even on lower-tier 50-series GPUs. I don't think a 5070 is going to do 4K at 160+ FPS without performance-mode upscaling, though... but the DLSS transformer model maybe makes that less of a concern.
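To put that rule of thumb into numbers, here's a quick Python sanity check. The ~30% framerate cost of running the frame-gen pass is my own assumption, backed out from the 55~65 native vs. 80~90 generated figures above; it's not an official number:

```python
# Rough frame generation calculator. The 30% overhead for running the
# FG pass is an assumption inferred from the numbers above, not a
# measured or official figure.

def fg_output_fps(native_fps: float, gen_factor: int, overhead: float = 0.30) -> float:
    """Estimate displayed FPS with frame generation enabled.

    native_fps -- framerate with frame gen disabled
    gen_factor -- 2 for FG (one generated frame per rendered frame), 4 for MFG
    overhead   -- fraction of native framerate lost to the FG pass (assumed)
    """
    rendered_fps = native_fps * (1 - overhead)  # real frames per second
    return rendered_fps * gen_factor            # displayed frames per second

for native in (55, 65):
    # 55 -> ~77 and 65 -> ~91 displayed: right around the 80~90 "feels good" window
    print(f"{native} FPS native -> ~{fg_output_fps(native, 2):.0f} FPS with 2x FG")

for native in (57, 64):
    # These natives give a 40~45 rendered base, so ~160~180 FPS displayed
    # with 4x MFG -- hence wanting a 240 Hz display
    print(f"{native} FPS native -> ~{fg_output_fps(native, 4):.0f} FPS with 4x MFG")
```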

Anyway, initial testing out of the way, I'm now poking at MFG and DLSS 4 stuff to try to determine what is and isn't a good experience. Stay tuned... that page six needs a lot of additional testing and data! 🙃
Excited to see the results, if only so one can extrapolate to the lower-tier cards, roughly anyway. As for the star ratings: I never like overall scores like this. I prefer every category rated separately, as it gives a more nuanced conclusion at a glance. For what it's worth, something can still be the best and yet suck in more than a few ways. Cost and value will always be subjective, which is another reason I prefer category-style rating systems. Yes, I understand everything I want is in TFA. Anyway...

As a gamer, I'm most impressed with the cooler, even though it brings some caveats for overall system cooling. I believe this may have more to do with the overall TGP getting above a critical point than with the cooler itself; once partner cards are reviewed, we'll know. The graphics performance is, well, kind of expected considering it's on the same process as the 40 series. Still, it's a bit of a letdown. The price is disappointing, considering the overall increase in performance. Unfortunately, if one NEEDS that little uplift, it's the only game in town.

As for non-gaming workloads, I feel this is where all of Blackwell will really shine compared to the prior gen. We're in a market now where gaming is not the priority. These GPUs are very similar to AMD's X3D processors: designed first and foremost for data center (and AI) workloads, they just happen to also be decent at 3D rendering for gaming. In short, we're kinda getting the scraps. It used to be the other way around. I liked that better.

On MFG and DLSS in general? I find it very, VERY hit and miss still, both in quality and performance. It varies by title, which is the real problem: you never know what any driver or game update may break. The issues I see are consistent across multiple systems. I'm not optimistic, but I'll be happy to be wrong.
 
I don't know if I'm right, but, as a fellow forum poster once wrote, I'm getting the sense that this push for better graphics is hitting a wall of diminishing returns.

Whoever he was... he was right.

After reading this thread I am definitely not going to be rushing out to get a 5090.

27% uplift? I'm good with the 4090 for now. I will upgrade to the 5090 later this year when they are more readily available but don't see myself getting one before summer.

Do you think people won't buy this on "0% interest if paid in full" promotions???

Newegg is offering $1350 in trade for my 4090. Honestly, that would be the only reason I'd upgrade. Not having to deal with CL or eBay and just paying the $700 difference is a win-win to move up to the next gen, regardless of the low increase in performance.
 
I miss the days when the 80-class card would be around $700 and the 80 Ti/90 around $1,000, and the gen-to-gen increase would be around 70%. Here, performance per dollar is about constant.

The ray tracing improvement is good, though; bandwidth, memory, and RT all seem good.

I can't stop remarking how good the AMD cards are, and how much better they are than Nvidia's for CAD.
 
As stated, the score is tentative based on other testing that I’m still working on completing. A 4.5 star product means it delivers some impressive gains in performance as well as features and technology, not just that it’s a great value. It’s not always that much faster than a 4090 right now, due in part to the early nature of the drivers. IMO I suppose — we need to see how things develop.

Obviously, at $2000 this is going to be way out of reach of most gamers. And I really don’t think Nvidia cares that much. There will be plenty of businesses and professionals and AI researchers that will pay $2000 or more and consider themselves lucky.

Full ray tracing is definitely viable on the 5090. Coupled with the improved quality of DLSS transformers, and maybe (?) MFG, you will get more than a 30% increase in performance compared to the 4090. There will be scenarios, legitimate scenarios, where a 5090 is twice as fast as a 4090. Mostly in AI, but probably close in some full RT games as well. It’s okay by me if most of those scenarios won’t be in games.

I'm definitely not buying the MFG hype, though. It's fine that it exists, but just because you can interpolate three frames instead of one doesn't mean you've actually doubled your performance. Now, if we were combining something like Reflex 2 in-painting, projection, and time warp with frame generation? That's what we actually need to see.

Render frame one, then sample user input while projecting frames two, three, and four. Render frame five, again sampling user input while projecting the next three frames... That would actually make a game look and feel responsive, rather than just smoothing frames. And it still wouldn't be exactly the same as fully rendering every frame.
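Something like this, as a minimal Python sketch (entirely my interpretation of the idea, not Nvidia's actual pipeline, and every function here is a hypothetical stub):

```python
# Extrapolated frame generation with per-frame input sampling. The key
# difference from interpolation: every generated frame consumes fresh user
# input, so it tracks the player's aim instead of just smoothing between
# two already-rendered frames. All stubs below are illustrative only.

def render_frame(camera: float) -> dict:
    """Full (expensive) scene render. Stub."""
    return {"kind": "real", "camera": camera}

def sample_input() -> float:
    """Poll input devices at display rate. Stub: constant mouse delta."""
    return 0.1

def project_frame(last_real: dict, camera: float) -> dict:
    """Reproject the last real frame to the newest camera pose and
    in-paint disocclusions (the Reflex 2 style warp). Stub."""
    return {"kind": "projected", "camera": camera}

def present(frame: dict) -> None:
    print(f"{frame['kind']:9s} frame, camera={frame['camera']:.1f}")

def frame_loop(real_frames: int = 2, gen_factor: int = 4) -> None:
    camera = 0.0
    for _ in range(real_frames):
        real = render_frame(camera)      # frame 1 (and 5, 9, ...): fully rendered
        present(real)
        for _ in range(gen_factor - 1):  # frames 2-4: projected, not rendered
            camera += sample_input()     # fresh input for every projected frame
            present(project_frame(real, camera))

frame_loop()
```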

I suspect research into doing exactly that sort of frame generation is already happening and could arrive sooner rather than later, because DLSS 4 and Reflex 2 already get us pretty close to doing exactly that.

Overall, 78% more bandwidth, 33% more VRAM capacity, 30% more raw compute, FP4 support, an impressive cooling solution, twice the ray/triangle intersections per cycle, and some of the other neural rendering techniques? Yeah, that’s impressive to me and worthy of a 4.5 star score.
Impressive, yes; useful in any meaningful way? Not so much. AI is junk: fabricated fake images and fabricated fake porn, plagiarized text for the intellectually lazy, utterly pointless.
Frame rates over 60 Hz are simply dick-waving exercises, so what's the point of this space heater?
 
I miss the days when the 80-class card would be around $700 and the 80 Ti/90 around $1,000, and the gen-to-gen increase would be around 70%. Here, performance per dollar is about constant.

The ray tracing improvement is good, though; bandwidth, memory, and RT all seem good.

I can't stop remarking how good the AMD cards are, and how much better they are than Nvidia's for CAD.
Can you elaborate on that CAD subject? I know it's off-topic, but please: why do you think they're better for such tasks? It's so rarely tested.
 
An all-around expensive disappointment. While it performed well in a lot of the games I saw, it's not enough to warrant the price tag, especially when we know it 100% will not stay at $2,000 due to shortages and will be the 4090 all over again.
 
And secondly, how come a 500W GPU can be air-cooled, but nerds on forums will claim you absolutely NEED water cooling for a 125W Ryzen because it's "high-end"? Yeah, I know 125W means more actual draw.
Thermal density. CPU dies are much smaller than large GPU dies, so there's less area for thermal transfer. And even within those smaller dies, the heat is generated disproportionately in certain small regions. So for a given amount of power/heat, your CPU hotspot is going to get hotter, faster, compared to a GPU.

Plus some other stuff, like GPUs having direct-die cooling like someone else mentioned.
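Some quick back-of-the-envelope numbers to illustrate. The die areas are rough assumptions on my part (a big GPU die in the ~750 mm² neighborhood, a Ryzen CCD around 70 mm²), not measured figures:

```python
# Power density comparison: total watts over die area. Die areas below
# are assumed round numbers for illustration, not official specs.

gpu_watts, gpu_die_mm2 = 500, 750   # large GPU die (assumed area)
cpu_watts, cpu_die_mm2 = 125, 70    # CPU compute die / CCD (assumed area)

gpu_density = gpu_watts / gpu_die_mm2   # ~0.67 W/mm^2
cpu_density = cpu_watts / cpu_die_mm2   # ~1.79 W/mm^2

print(f"GPU: {gpu_density:.2f} W/mm^2")
print(f"CPU: {cpu_density:.2f} W/mm^2 (~{cpu_density / gpu_density:.1f}x denser)")
# And that's before accounting for hotspots within the CPU die itself.
```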
 
True, it is a barrier. But it doesn't really matter; the IHS has no idea what's taking the heat.

The IHS is the limiting factor; that's why some people delid. You can transfer heat more effectively with the cooler directly on the CPU die, but the issue is you may crush the CPU.

The other issue is transporting a delicate part like that to the end user and hoping they don't crush it.

In a nutshell, to get the same benefit as a GPU, it would have to be assembled onto the motherboard at the factory for most end users, and that would take away a lot of the ability to upgrade the CPU.
 
Thermal density. CPU dies are much smaller than large GPU dies, so there's less area for thermal transfer. And even within those smaller dies, the heat is generated disproportionately in certain small regions. So for a given amount of power/heat, your CPU hotspot is going to get hotter, faster, compared to a GPU.

Plus some other stuff, like GPUs having direct-die cooling like someone else mentioned.

This as well; explained much better than I could, lol.
 
Thanks for taking the time to reply.

So, let me get this straight: the node is pretty much the reason we didn't see a generational leap in performance, similar to the one from 3090 to 4090?
A better node is typically where we'd see most of a generational uplift happen, allowing for greater transistor density and thus higher core counts/clocks. But, as stated, CPU bottlenecks are also part of the issue. Hope that helps.