Review Nvidia GeForce RTX 5090 Founders Edition review: Blackwell commences its reign with a few stumbles

Don't agree on that, as:
1) there are things that you NEED to focus on in a game, e.g. the aim dot, or the critical flight displays in a flight sim, where number ghosting makes the "smooth" frames completely meaningless.
That's my whole point here - raster render those parts; the rest of the slop can be done by AI in the very same frame.

Easy. And that's how RTX hair actually works in the new version of Indiana Jones.

A lot of your "real" frames are going to be enhanced by AI/tensor and you won't even notice. Do you need to raster render every strand of hair? Not really, it's not important - but it does look good touched up. There is a ton of stuff that can and will be offloaded to AI in the otherwise raster rendered frame and it will only get more common.
 
My view is partly in agreement, but from almost the opposite angle: you can use AI to assist with really fine detail like hair or grass, but not much more; the basic textures, lighting, and shapes all need good old rasterization. Enhancement is fine by me, but generating a whole frame is meh~ not to mention latency: it will always add more latency, so it will always be a less preferable method.
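To put a rough number on the latency point: interpolation-style frame generation has to hold back the newest real frame until the in-between frame is built and shown, so it adds roughly one rendered-frame interval of delay on top of everything else. A back-of-the-envelope sketch (made-up timings, ignoring Reflex, render queues, and the cost of the generation pass itself):

```python
# Very rough arithmetic only - real pipelines shift these numbers around.
base_fps = 60.0
frame_time_ms = 1000.0 / base_fps   # ~16.7 ms per real rendered frame

# 2x interpolation displays: real N, generated (between N and N+1), real N+1.
# Real frame N+1 must already exist before the in-between frame can be built,
# so presentation lags by roughly one extra rendered-frame interval.
added_latency_ms = frame_time_ms    # ~16.7 ms of extra delay (approximate)
shown_fps = base_fps * 2            # ~120 frames shown per second

print(f"~{shown_fps:.0f} fps shown, but roughly {added_latency_ms:.1f} ms "
      f"more input lag than native {base_fps:.0f} fps")
```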
 
Lighting is better done with RT, so that eventually goes away too. That's why Indiana Jones, and soon the new Doom, require a ray-tracing-capable card.

A lot of things are going to be offloaded from shaders to tensor cores - that's where the effort is clearly going. IMO, it will explode as soon as there is a new generation of consoles, which are rumored to be using the upcoming AMD UDNA architecture that heads the same way.
 
Let's start with the fact that you don't understand what RT is, how it is implemented, and how it works. You also don't seem to know that dedicated hardware is not required to compute RT - look at Lumen in UE5: that is real RT, done in software on good old SMs/CUs, and it does not require RT hardware. If engine developers believe, for whatever reason, that they need hw-RT for their calculations, that's their problem) Think of sponsored games like CP2077 / the id Tech 7 titles. These are paid technology demonstrators, and nothing more. RED Engine is used in one game, and the next one will be released on UE5. On id Tech 7 there are only two games right now, and that's a drop in the ocean) Meanwhile, you just wave these shiny things around and talk about some kind of future of technology, LoL
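Setting the tone aside, the narrow technical point is sound: at its core, ray tracing is just intersection math, and nothing stops it from running on ordinary compute units. Below is a minimal, illustrative ray-sphere test in plain Python - not Lumen's actual approach (Lumen's software path traces signed distance fields and a surface cache), just a reminder that no dedicated RT hardware is required to trace a ray:

```python
# Minimal ray-sphere intersection - the core arithmetic of ray tracing,
# runnable on any general-purpose compute, no dedicated RT units involved.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return distance along a normalized ray to the nearest hit, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # quadratic discriminant (a = 1 for a normalized direction)
    if disc < 0.0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0 (hit at z = 4)
```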
 
TBF, having ideas or dreams about future tech is fine, but the issue with these marketing pushes is that they usually come with major drawbacks and are either tech demos or some sort of band-aid to fix something broken (unplayable raster performance), and then we get upcharged for that still-immature stuff.

Personally, I was kind of excited about the new tech when the RTX 20x0 series launched, and again with the 30x0 release, but after the scalping, then the official scalper pricing, and the ongoing trend toward DLSS being the only way out, it's... not comforting, to say the least.
 
Every person's sensitivity is different.

I use 120Hz for work, but don't mind lower resolutions.

Many others prefer high resolutions, but don't mind lower refresh rates.
Of course. That’s my whole point. Just because high refresh rate is great for some doesn’t mean everyone needs it. Same with high resolutions.
 
If we start talking about the subjective "needs" of people, then the lowest common denominator is what people will end up "needing." I mean, sure, 60 Hz is nice for work, but you don't really "need" 60 Hz for work; you could probably use a 15 Hz 10-inch monitor for work. Then the next guy will say, "Well, I can do all my work on an 8-inch 5 Hz screen that is monochrome," and so on, and so forth.

Just because XXX Hz does not do anything for you or any other specific person does not mean it doesn't do anything for others. In the same vein, just because XXX Hz does do something for you does not mean it does for everyone and thus becomes a requirement.
 
No, I'm not talking about needs vs. wants. I'm just saying there's a large segment of the population that doesn't easily notice higher refresh rates, just like many don't easily recognize higher resolutions. For my vision in particular, I notice gains from resolution, but gains in smoothness absolutely die off for me after 90Hz. Nothing above that feels any better to me. If we constantly raise the baseline, we get to a point where we're forcing people to buy stuff they don't want or need.
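For what it's worth, the diminishing-returns feeling has a simple arithmetic side: the absolute frame-time improvement shrinks the higher you go. A quick illustration (just the math, it says nothing about any individual's perception):

```python
# Frame time vs refresh rate: each doubling buys a smaller absolute improvement than the last.
rates_hz = [60, 120, 240, 480]
frame_ms = [1000.0 / hz for hz in rates_hz]

for i in range(1, len(rates_hz)):
    saved = frame_ms[i - 1] - frame_ms[i]
    print(f"{rates_hz[i - 1]:>3} -> {rates_hz[i]:>3} Hz: "
          f"{frame_ms[i - 1]:.1f} ms -> {frame_ms[i]:.1f} ms per frame (saves {saved:.1f} ms)")
```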
 
That assumes products with fewer features cost less money to make. If the entire world's manufacturers switch to 50/60 Hz minimum refresh rates for monitors, making a native 30 Hz-only screen would be more expensive, not less. If all the manufacturers switch to 120 Hz at some point, that will be the new floor for the cheapest products you can get. This is like complaining about the following: "I only need a 1-core 1 GHz processor, so why is the cheapest CPU I can get for my desktop a 2-core Celeron at 3 GHz? Wouldn't it be cheaper to have a 1-core 1 GHz CPU available?" The answer is no; it would cost more to make those kinds of products than they could sensibly sell for, so it isn't worth allocating silicon to them.
 
There really are no other real reasons. Saying the 5090 is not the fastest card is about as dumb as it gets.
I didn't say it wasn't the fastest in general; I just said it depends on the workload. Other than one or two regressions that will probably be fixed with a future driver update, I'm sure it's faster just about all around. I also said it's personally not worth it. Yes, it's better, it's currently the best, but it's still not a very good upgrade if you look at what Nvidia has launched historically. If you want to buy it, good on you, but I don't personally see how it should be praised as highly as it is; it's evolutionary at best, not revolutionary. It's no Voodoo, TNT2, GeForce 256, Radeon 9700 Pro, 8800 GTX, Radeon HD 5000, Radeon HD 7000, GTX 600, or GTX 1000. Heck, it's not even an RTX 2000; it's just meh at best.
 
Well, the 5090 is a victim of staying on the same process node; it's kind of impressive they even managed to get what they got, even if much of it is simply brute-force stuffing in more transistors and going balls to the wall with new memory and a big 512-bit bus.

I do think people are a tad dismissive of the so-called "neural shaders," but I also think it will take a good few years before the underlying tech gets traction. It might be Turing all over again: when the 20 series came out with tensor cores enabling DLSS, RT and so on, it took 3-6 years for those to mature, at which point the 20 series was outdated and its initial tensor capacity was insufficient for much of anything beyond DLSS and light RT.

IMO, the 60 series is where the real fun will be: a die shrink and a second generation of these "neural shaders" with lessons learned on one hand, and more games that actually use that advantage on the other.
 
Blackwell and Ada Lovelace are on different nodes... 4NP vs 4N.

*Edit: Only Blackwell's GB100 is on 4NP, all other Blackwell chips are manufactured on the 4N process.
 
They are effectively the same node. The underlying process is the same; it's just a minor optimization.

Until this generation you had full-blown process-node shrinks. That is not the case here.
Yet the nodes are still different, and much more significantly different than you let on. I have already linked the detailed differences in this section. Refer to post #164.

*Edit: Only Blackwell's GB100 is on the 4NP node, the rest of the Blackwell chips are manufactured on the 4N node.

This following line on wikipedia led me astray:

"Blackwell is fabricated on the custom 4NP node from TSMC. 4NP is an enhancement of the 4N node used for the Hopper and Ada Lovelace architectures. The Nvidia-specific 4NP process likely adds metal layers to the standard TSMC N4P technology."

Usually when an architecture is said to be fabricated on a node, that is understood to mean the whole architecture is, not just one chip...
 
That assumes products with fewer features cost less money to make. If the entire world's manufacturers switch to 50/60 Hz minimum refresh rates for monitors, making a native 30 Hz-only screen would be more expensive, not less. If all the manufacturers switch to 120 Hz at some point, that will be the new floor for the cheapest products you can get. This is like complaining about the following: "I only need a 1-core 1 GHz processor, so why is the cheapest CPU I can get for my desktop a 2-core Celeron at 3 GHz? Wouldn't it be cheaper to have a 1-core 1 GHz CPU available?" The answer is no; it would cost more to make those kinds of products than they could sensibly sell for, so it isn't worth allocating silicon to them.
Right, and it automatically costs more to build a 120Hz monitor than it does to make a 60Hz monitor. Therefore people are paying for something they don't want or need. And you're wrong about your CPU example too: it is NOT more expensive to build a 1 GHz single-core CPU than a dual-core 3 GHz chip.
 
This following line on wikipedia led me astray:

"Blackwell is fabricated on the custom 4NP node from TSMC. 4NP is an enhancement of the 4N node used for the Hopper and Ada Lovelace architectures. The Nvidia-specific 4NP process likely adds metal layers to the standard TSMC N4P technology."
What's hilarious is that I fixed that Wiki page at one point! I said, "I have the Blackwell whitepaper and confirmation from Nvidia that Blackwell RTX 50-series is still TSMC 4N." And then some moron undid my change! I wonder if we can see the edit history? Oh, we can. It was MrSwedishMeatballs:

[Attachment: screenshot of the Wikipedia edit history]
 
That's actually a full-circle moment... You fixed a line that someone erroneously unfixed afterward, which led me to post misinformation, only for you to discover that the source was something you had already fixed... I will try and do better next time, but that is funny lol.
 
I wonder if they will have the guts to release the full GB202 for gamers. That'd be like a 750W+ GPU.

I'll probably bite on the 5090 after all, for fun, and then I'll be mad in 1-2 years at the 5090 Ti/6090. But I always do this 4-6 year cycle on (almost) top-tier GPUs.

It's a shame there is no 5080 Ti; that'd be the sweet spot.
 
Nvidia can define the power limit however it wants. Just because you have more SMs enabled doesn't mean you have to have a higher power limit. That's the essence of notebook GPUs. There's certainly room for a 5080 Ti / 5080 Super with a down-binned GB202, though, and also a 5090 Ti / Super with a full GB202. (Or a Titan, but I suspect that won't happen for a variety of reasons.)
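To illustrate the power-limit point with a toy model (the constants below are invented for the sketch, not Nvidia's actual numbers): board power is a knob set at product definition, so a fully enabled GB202 could sit under the same 575W cap as the 5090 simply by sustaining lower clocks.

```python
# Toy model only: dynamic power scales roughly with active_units * frequency * voltage^2.
# It just shows that a bigger chip can live under the same power cap by clocking lower.
def clock_under_cap(power_cap_w: float, sm_count: int, volts: float, k: float = 0.0012) -> float:
    """Solve P = k * sm_count * f_mhz * volts^2 for f_mhz (k is a made-up fitting constant)."""
    return power_cap_w / (k * sm_count * volts ** 2)

cap_w = 575  # same hypothetical board power limit for both configurations
print(f"170 SMs (5090-like): ~{clock_under_cap(cap_w, 170, 1.05):.0f} MHz sustained")
print(f"192 SMs (full GB202): ~{clock_under_cap(cap_w, 192, 1.05):.0f} MHz sustained")
```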