News: Nvidia RTX 5060 is up to 25% faster than RTX 4060 with frame generation in new GPU preview

Not really, it's a crutch, as is DLSS. As I said above, it's Nvidia's way of giving us the performance we used to get without using tricks. See my reply just above on this post.
Not-insignificant parts of the rendering pipeline are a series of crutches. That's not the insult you seem to think it is. The reality is that if anyone expects high fidelity, frame rates, and resolution, FG of some sort is going to be necessary. While I wouldn't touch FG for a competitive game, it can be a solid option for single player. Even a 5090 isn't going to be able to get anywhere near 240 fps in most new games, but 120 fps shouldn't be as much of an issue. That's the type of situation where FG can improve the experience.

FG and DLSS are not acceptable solutions for low performance, but just because nvidia likes to use deceptive marketing doesn't invalidate what they bring to the table.
Wouldn't a better way be to make a card that can do it without fake frames and DLSS?
This obviously isn't viable with the refresh rates monitors are capable of running. The only thing that could have realistically made the 5090 faster is using a more advanced node (there's also diminishing returns on node shrinks compared to the DUV days). If you think throwing more silicon at the problem is the solution then expect the already awful pricing to get worse. If a 5090 isn't capable then nothing below it will be.

This is not to say I think nvidia is doing right by consumers with regards to performance increases down the stack, because they're not. There's plenty of room for improvement below the halo parts, but in terms of absolute performance it's just not possible.

Here's an example:
I went from a GTX 970 ($340 my cost) to a GTX 1660 Ti ($250 my cost), which was about a node and a half, plus a node refinement, of difference in manufacturing process. That led to about a 100% performance increase with slightly less power consumption on a smaller die, helped by a big clock speed gain.

Going from Samsung's 8nm node to TSMC 4N gave nvidia a similar clock speed gain and led to smaller die sizes, which could have delivered good across-the-board generational performance increases (instead of just the 4090) if nvidia hadn't shifted their stack.

There was no meaningful node difference between the 40 and 50 series, which caps the overall improvement possibilities. Doubly so since nvidia clearly wasn't willing to sacrifice margins to improve down-stack performance. To get double the performance of a 3080 ($700 MSRP) it takes a 5090 ($2000 MSRP); the die is larger than the 3080's (had the 3080 not shared a die with the 3090 the difference would be even bigger) and it uses a lot more power. That is a technology limitation, not just nvidia being awful.
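To put rough numbers on it (the wattages are the commonly cited board power ratings, the prices are the ones quoted above, and the performance ratios are the approximate "about 2x" claims from this post, so it's all back-of-the-envelope):

```python
# Back-of-the-envelope generational comparison. Performance ratios are the
# approximate "about 2x" figures from the post; watts are the commonly cited
# board power ratings; prices are the costs/MSRPs quoted above.
comparisons = [
    # (old card, new card, perf ratio new/old, old W, new W, old $, new $)
    ("GTX 970",  "GTX 1660 Ti", 2.0, 145, 120,  340,  250),
    ("RTX 3080", "RTX 5090",    2.0, 320, 575,  700, 2000),
]

for old, new, perf, w_old, w_new, usd_old, usd_new in comparisons:
    perf_per_watt_gain = perf / (w_new / w_old)        # generational perf/W gain
    perf_per_dollar_gain = perf / (usd_new / usd_old)  # generational perf/$ gain
    print(f"{old} -> {new}: ~{perf_per_watt_gain:.2f}x perf/W, "
          f"~{perf_per_dollar_gain:.2f}x perf/$")
```

The first jump improves both perf/W and perf/$ by well over 2x; the second delivers the raw performance but barely moves perf/W and actually goes backwards on perf/$.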
 
That's not the insult you seem to think it is.
That's funny, I didn't say it was...

Even a 5090 isn't going to be able to get anywhere near 240 fps in most new games
And who even needs that many FPS, and why would most people need them?

We seemed to get by OK before all these high-refresh-rate monitors came out 🙂 100 fps was just fine... even with my 3060 I'm still getting at least 75 fps @ 1440p in most everything I play... at full eye candy...

The only thing that could have realistically made the 5090 faster is using a more advanced node
If you think throwing more silicon at the problem is the solution
For both of these: then maybe it's time Nvidia went back to the start and came up with a ground-up new architecture? Oh wait... they won't, they're still too busy chasing the cash-cow AI market to do much with gaming cards...

expect the already awful pricing to get worse
Most of us know why Nvidia charges what they do: because they CAN. And sadly, AMD follows suit. It's why most people I know are still running video cards that are at least two gens old... I'm still on a 3060.

but in terms of absolute performance it's just not possible.
That tells me their current architecture is just out of gas.


($2000 MSRP
Too bad that price is BS... there haven't been consistent MSRP prices since, what, COVID? That $2K MSRP is $2,500 here... there are very few people I know who can afford that for a video card, and if they can, they don't want to risk the power connector melting, among the other 50-series issues... as for the rest of the stack, they all say the same: too overpriced... especially when you factor in the cards being a tier lower hardware-wise vs. what they're called...
 
And who even needs that many FPS, and why would most people need them?

We seemed to get by OK before all these high-refresh-rate monitors came out 🙂 100 fps was just fine...
1440p 240Hz displays start at about $230 USD and generally when people buy something they want to use it. Outside of extremely poorly optimized games (and heavy RT) it's been possible to get 100 fps for years.
even with my 3060 I'm still getting at least 75 fps @ 1440p in most everything I play... at full eye candy...
Then you don't play anything new and/or games with heavy graphics.
For both of these: then maybe it's time Nvidia went back to the start and came up with a ground-up new architecture? Oh wait... they won't, they're still too busy chasing the cash-cow AI market to do much with gaming cards...

...

That tells me their current architecture is just out of gas.
I'm curious why you think that nvidia can magically squeeze way more performance out than they have been?

As far as I'm aware the only way they could massively increase raster performance without increasing die size/density would be by ditching things like tensor cores and RT cores which isn't going to happen for obvious reasons. It isn't like nvidia is sitting on a bad design that has lots of easy changes they can make.
Too bad that price is BS... there haven't been consistent MSRP prices since, what, COVID? That $2K MSRP is $2,500 here...
You missed the point entirely if you're fixating on my usage of MSRP. I didn't use real world pricing for that one because neither card was released into logical pricing. The real world pricing for both varied to the extreme and isn't necessary for the comparison I'm making.
 
1440p 240Hz displays start at about $230 USD
Those start @ $400 here, and with the way prices are going up due to tariffs on pretty much everything, it's possible most people won't have the funds to get a new monitor, or the one they have is just fine. That's beside the point that video cards are overpriced, so, in a way, as you put it, "generally when people buy something they want to use it." Spending $400+ on a 1440p 240Hz monitor when they may not have a video card to use it with... may not be a viable option.

Then you don't play anything new and/or games with heavy graphics.
Not really, mainly because most of what's being made now just doesn't interest me. I'm sure I'm not the only one either; the games coming out now just aren't all that good, to be honest...
I'm curious why you think that nvidia can magically squeeze way more performance out than they have been?

As far as I'm aware the only way they could massively increase raster performance without increasing die size/density would be by ditching things like tensor cores and RT cores which isn't going to happen for obvious reasons. It isn't like nvidia is sitting on a bad design that has lots of easy changes they can make.
I didn't say that, did I? I said:
For both of these: then maybe it's time Nvidia went back to the start and came up with a ground-up new architecture? Oh wait... they won't, they're still too busy chasing the cash-cow AI market to do much with gaming cards...
That tells me their current architecture is just out of gas.
You claim I missed your point entirely, and yet you missed mine...

Nvidia needs a new architecture, if they have to use hacks, tricks, and crutches to increase performance...

As for this:
if you're fixating on my usage of MSRP. I didn't use real world pricing for that one because neither card was released into logical pricing. The real world pricing for both varied to the extreme and isn't necessary for the comparison I'm making.
For all intents and purposes, I agreed with you.
 
You claim I missed your point entirely, and yet you missed mine...

Nvidia needs a new architecture, if they have to use hacks, tricks, and crutches to increase performance...
I didn't miss your point at all. You're assuming this is possible when it very much does not seem to be. Nobody has been able to squeeze out more performance versus die area than nvidia. Perhaps someone will come up with a fundamentally new way of doing things, but there's zero evidence to support that being the case right now.

This is somewhat of a problem across the board for everything using advanced nodes. Everything keeps getting bigger, but in the case of GPUs they're reaching the limits, which is likely why AMD tried multi-chip with the higher-end 7000 series. If someone is able to crack that at a reasonable cost we might see gen-on-gen scaling similar to how it used to be.
 
They choose to do hacks. They don't need to. Cheap on VRAM, constantly yammering about RT, etc. Now it's mandatory MFG to preview them.
But as far as a new node? No. I'm fairly sure 5090s are on the cutting edge of the newest stuff. I'm certain nvidia can get the best performance out of a GPU die in general.
If I get a 10 million dollar lathe and only use it to make wood bowling pins, that's my fault for not harnessing that power. Nvidia has invested so much in RT that it sucks a lot of the power of the node.
IMO of course. Nvidia is simply baffling sometimes.
 
You're assuming this is possible when it very much does not seem to be
Oh, why wouldn't it be?

Nobody has been able to squeeze out more performance versus die area than nvidia
And yet here they are using hacks, tricks, and crutches to increase performance... if Nvidia can't design a new GPU architecture to keep increasing performance, then there is something wrong.

Oh wait, why would they? It seems to be known, and assumed, that Nvidia doesn't care about gamers anymore, because their cash cow now is AI. So why spend R&D on making better video cards that could use less power and perform better, when they can just give the market that helped them get where they are barely any performance increases? Quite frankly, this sounds just like what Intel did before Zen came out...
 
And yet here they are using hacks, tricks, and crutches to increase performance... if Nvidia can't design a new GPU architecture to keep increasing performance, then there is something wrong.
So everyone is doing it wrong then?

If nvidia is providing the highest performance per area of everyone that means they have the highest performing architecture. So if something is wrong with the way nvidia is doing it then everyone else is doing it worse.

Is it more likely to be some sort of conspiracy that everyone is making bad architectures or that they're reaching diminishing returns?
 
If nvidia is providing the highest performance per area of everyone that means they have the highest performing architecture
And look at the power they need to use to get that performance... How much more power will the 60 series need to increase performance? The 50 series, for the 5090, uses almost 600 watts!!!! They HAD to come up with a new power connector just to provide that power, and look what it is, a joke: melting connectors... Will the 60 series use what, 650 watts??? In contrast, the 4090 needed 450 watts... it still needed a new power connector, and it also melted...

This sure reminds me of Intel, and what they needed to do to increase performance...

My point is: like Intel, Nvidia also seems to need to come up with a new architecture. Because of the melting connectors, how many people who would have bought a 4090 or a 5090 haven't, because of those? My boss at work was going to spend a minimum of $3,500 when the 5090 came out; then the posts and stories of more melting connectors came out, and he scrapped that idea... sticking with his 3080 Ti, I think it is...

So if something is wrong with the way nvidia is doing it then everyone else is doing it worse.

Is it more likely to be some sort of conspiracy that everyone is making bad architectures or that they're reaching diminishing returns?
I didn't say that, did I? This is the 3rd time (I think) you've put words in my mouth. All I said was maybe it's time for Nvidia to, if they haven't already, start work on a new architecture for their gaming GPUs... but I doubt they will, since, as others on here have said in various posts, it seems they don't care about the gaming market anymore, because they're making what, 4x that from AI.

I think it's safe to agree to disagree at this point...
 
I feel like frame gen is the new PhysX. It can be cool when it works well, but it's otherwise a waste of performance and GPU resources. At least DLSS is actually useful. The RTX 5060 losing to a 3070 Ti from 2021 can't look good for nvidia. The gen-on-gen gains keep declining.
 
And look at the power they need to use to get that performance...
The 5070 Ti is a bit faster than the 9070 XT and they have basically the same power draw rating, but in the real world it uses less. So the "problem" you're describing is hardly limited to nvidia.
How much more power will the 60 series need to increase performance? The 50 series, for the 5090, uses almost 600 watts!!!! They HAD to come up with a new power connector just to provide that power, and look what it is, a joke: melting connectors... Will the 60 series use what, 650 watts??? In contrast, the 4090 needed 450 watts... it still needed a new power connector, and it also melted...
There was no node change from the 40 to the 50 series; the 5090 increased die size and has around a third more cores, which ended up with around a 28% higher maximum power target.
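Quick arithmetic behind those two figures (published spec-sheet core counts and power targets, so treat as approximate):

```python
# Sanity check on "around a third more cores" and "~28% higher power target".
# Core counts and power targets are the published spec-sheet values.
rtx_4090 = {"cuda_cores": 16384, "max_power_w": 450}
rtx_5090 = {"cuda_cores": 21760, "max_power_w": 575}

core_gain = rtx_5090["cuda_cores"] / rtx_4090["cuda_cores"] - 1      # ~0.33
power_gain = rtx_5090["max_power_w"] / rtx_4090["max_power_w"] - 1   # ~0.28

print(f"Cores: +{core_gain:.0%}, maximum power target: +{power_gain:.0%}")
```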
I didn't say that, did I? This is the 3rd time (I think) you've put words in my mouth. All I said was maybe it's time for Nvidia to, if they haven't already, start work on a new architecture for their gaming GPUs...
So if you're not saying that nvidia making a new architecture will increase performance what are you saying?

They make architectural changes each generation, but this is the first since the 900 series that wasn't coupled with a node shrink. There's a limited amount they can do to increase performance without one, and most of what they can do requires more die space.

Obviously they could have eaten into their margins on SKUs below the 90 series by using larger dies. This would have allowed them to offer higher performance with lower power draw, depending on how large they went. I'm sure we agree they should have done so, but they're pretty much a monopoly and took the cheapest way out.
 
FG increases latency slightly when raising the frame rate, whereas an actual increase in frame rate lowers latency. That means that without a good starting frame rate it's pretty worthless, unless you're a fan of poor input latency. There are also problems with dynamic scenes and rendering, but those aren't universal.

FG is a great technology, and as it gets better I have no doubt it'll see common usage to drive very high refresh rates, especially at higher resolutions, but it's not the universal improvement it's billed as. It also increases VRAM usage, so nvidia's current cards with 8GB of VRAM can be a problem.
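As a toy model of the latency point, assuming 2x interpolation that has to hold back roughly one rendered frame (real implementations and Reflex-style mitigations will shift these numbers, so they're illustrative only):

```python
# Toy comparison: 2x frame generation vs. genuinely rendering at double the rate.
# Assumes interpolation buffers ~one rendered frame; purely illustrative numbers.

def frame_gen(rendered_fps: float):
    frame_ms = 1000.0 / rendered_fps
    displayed_fps = rendered_fps * 2   # one generated frame per rendered frame
    latency_ms = frame_ms * 2          # rendered frame time + held-back frame
    return displayed_fps, latency_ms

def native(fps: float):
    return fps, 1000.0 / fps           # more real frames = lower per-frame latency

for base in (30, 60, 120):
    fg_fps, fg_lat = frame_gen(base)
    nat_fps, nat_lat = native(base * 2)
    print(f"render {base:>3} fps: FG shows {fg_fps:.0f} fps at ~{fg_lat:.1f} ms/frame, "
          f"real {nat_fps:.0f} fps would be ~{nat_lat:.1f} ms/frame")
```

The fps counter doubles either way, but only real frames bring the per-frame latency down, which is why a decent starting frame rate matters so much.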
All of these myths are false.
 
Resorting to insults suggests you don't have much to back your refutation of the point, which was that Nvidia is deliberately tying the hands of the reviewer to highlight a single feature improvement and present the improvement of that one feature as if it was an overall improvement.


Yes. ONE of the features. Pure rasterization performance is still an extremely important metric, and Nvidia is trying to hide that.



But that's not what the test is demonstrating. It is demonstrating a SINGLE feature to the exclusion of all others, with the intent that people will believe that the improvements of the rest of the features (such as pure rasterization, as well as ray-tracing) are equal in degree.



No, it would be like buying a car that can go 280 miles an hour, but that can hold cruise control as low as 15 MPH, when all other cars have the cruise control minimum at 25 MPH, and insisting that the testers do NOT test the top speed of the car, nor of the other cars being tested.



At this point, you're engaging in a straw-man argument.
Nonsense, your assertions are wrong or you have been misled.