thestryker
Judicious
Quoting the earlier exchange:

"Non insignificant parts of the rendering pipeline are series of crutches. That's not the insult you seem to think it is. The reality is that if anyone expects high fidelity, high frame rates, and high resolution at the same time, FG of some sort is going to be necessary. While I wouldn't touch FG for a competitive game, it can be a solid option for single player. Even a 5090 isn't going to get anywhere near 240 fps in most new games, but 120 fps shouldn't be as much of an issue. That's the type of situation where FG can improve the experience."

"Not really, it's a crutch, as is DLSS. As I said above, it's nvidia's way of giving us the performance we used to get without using tricks. See my reply just above on this post."
FG and DLSS are not acceptable solutions for low performance, but just because nvidia likes to use deceptive marketing doesn't invalidate what they bring to the table.
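To put rough numbers on the frame-rate point in the quoted exchange, here's a minimal sketch of the arithmetic, assuming idealized 2x/4x frame generation multipliers and ignoring FG's own overhead, latency, and pacing costs:

```python
# Frame-time budgets and idealized frame generation math.
# Assumes every rendered frame yields (multiplier - 1) generated frames
# and ignores FG overhead/latency -- a sketch, not a benchmark.

def frame_time_ms(fps: float) -> float:
    """Frame-time budget in milliseconds for a given frame rate."""
    return 1000.0 / fps

def displayed_fps(rendered_fps: float, fg_multiplier: int) -> float:
    """Displayed frame rate with an idealized FG multiplier."""
    return rendered_fps * fg_multiplier

for target in (120, 240, 480):
    print(f"{target:>3} fps target -> {frame_time_ms(target):.2f} ms per frame")

# A game rendering ~120 fps natively only reaches a 240 Hz or 480 Hz
# display's refresh rate with frame generation of some kind.
base = 120
for mult in (1, 2, 4):
    print(f"{base} fps rendered x{mult} FG -> {displayed_fps(base, mult):.0f} fps displayed")
```

Hitting 240 fps natively means cutting an 8.33 ms frame time down to 4.17 ms at the same settings, which is the part the quoted objection treats as free.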
From the same exchange:

"This obviously isn't viable with the refresh rates monitors are capable of running. The only thing that could have realistically made the 5090 faster is using a more advanced node (there are also diminishing returns on node shrinks compared to the DUV days). If you think throwing more silicon at the problem is the solution, then expect the already awful pricing to get worse. If a 5090 isn't capable, then nothing below it will be."

"Wouldn't a better way be to make a card that can do it without fake frames and DLSS?"
This is not to say I think nvidia is doing right by consumers with regard to performance increases down the stack, because they're not. There's plenty of room for improvement below the halo parts, but in terms of absolute top-end performance the leap being asked for just isn't possible.
Here's an example:
I went from a GTX 970 ($340 my cost) to a GTX 1660 Ti ($250 my cost), which was a node-and-a-half difference in manufacturing process, plus a node refinement. That led to about a 100% performance increase with slightly less power consumption while using a smaller die, helped along by a big clock speed gain.
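As a rough check on that example, here's the ratio math; the prices are the "my cost" figures above, the 2.0x performance number is the ~100% uplift claimed, and the TDP figures (roughly 145 W for the GTX 970, 120 W for the GTX 1660 Ti) are approximate reference-board numbers I'm assuming for "slightly less power":

```python
# Back-of-the-envelope ratios for the GTX 970 -> GTX 1660 Ti upgrade.
# Prices are the "my cost" figures from the post; TDPs are approximate
# reference-board numbers; 2.0x performance is the claimed ~100% uplift.

old = {"name": "GTX 970", "price": 340, "tdp_w": 145, "rel_perf": 1.0}
new = {"name": "GTX 1660 Ti", "price": 250, "tdp_w": 120, "rel_perf": 2.0}

perf_gain = new["rel_perf"] / old["rel_perf"]
perf_per_dollar = perf_gain / (new["price"] / old["price"])
perf_per_watt = perf_gain / (new["tdp_w"] / old["tdp_w"])

print(f"Performance:            {perf_gain:.2f}x")
print(f"Performance per dollar: {perf_per_dollar:.2f}x")  # ~2.7x
print(f"Performance per watt:   {perf_per_watt:.2f}x")    # ~2.4x
```

That's the kind of across-the-board gain a real node jump used to buy.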
Going from Samsung's 8nm node to TSMC 4N gave nvidia a similar clock speed gain and smaller die sizes, which could have delivered good across-the-board generational performance increases (instead of just the 4090) if nvidia hadn't shifted their stack.
There was no meaningful node difference between the 40 and 50 series, which caps the overall improvement possibilities. Doubly so since nvidia clearly wasn't willing to sacrifice margins to improve down-stack performance. To get double the performance of a 3080 ($700 MSRP) it takes a 5090 ($2000 MSRP), whose die is larger than the 3080's (had the 3080 not shared a die with the 3090, the difference would be even bigger) and which uses a lot more power. That is a technology limitation, not just nvidia being awful.
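Running the same kind of ratio on the 3080 -> 5090 comparison, using only the MSRPs and the "double the performance" figure from the paragraph above, shows performance per dollar actually going backwards:

```python
# 3080 ($700 MSRP) vs 5090 ($2000 MSRP), assuming the post's
# "double the performance" figure for the 5090.

perf_ratio = 2.0               # 5090 ~= 2x a 3080, per the post
price_ratio = 2000 / 700       # ~2.86x the MSRP
perf_per_dollar = perf_ratio / price_ratio

print(f"Performance:            {perf_ratio:.2f}x")
print(f"Price:                  {price_ratio:.2f}x")
print(f"Performance per dollar: {perf_per_dollar:.2f}x")  # ~0.70x, worse than the 3080
```

Which is what you would expect when a new node isn't doing the heavy lifting: more performance has to come from more silicon and more power, and that cost lands on the price tag.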