Nvidia DLSS4, MFG, and full ray tracing tested on RTX 5090 and RTX 5080

I agree overall with the gist of your piece. But some quibbles follow.

>As with the original framegen, it's more about smoothing out the visuals than providing a true boost to performance.

I disagree with this statement, twice. Characterizing FG as "motion smoothing," while correct at face value, glosses over the point that FG provides more (temporal) visual content. Aside from the "better feel" (aesthetic) aspect, FG has a tangible benefit beyond just "smoothing," e.g. better target tracking for shooters.

Granted, this is a bit of hair-splitting, but proper characterization is important to understanding the different facets of a concept. Calling FG "motion smoothing" only differs from calling it "fake frames" in degree, not type. You're reaching for a shortcut at the expense of discarding nuance.

My second disagreement is with your use of "true boost to performance." Reaching for "true perf" is like reaching for "fake frames" as an argument, i.e. if I like something, it's "true"; if I dislike it, it's "fake." It's a mental shortcut. Don't reach for shortcuts.

I take issue with this because "performance"--more precisely, responsiveness--can't be measured in "input samples per second" (IPS for short). Responsiveness is just latency.

Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.

https://humanbenchmark.com/tests/reactiontime

I average ~250ms, or about 4 IPS.

https://humanbenchmark.com/tests/aim

This aim/shoot test is more relevant, as you have to move the mouse and click. I average ~600ms, or less than 2 IPS.
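For concreteness, the arithmetic behind those figures, as a quick sketch (the reaction times are the ones quoted above; the FPS values are just illustrative):

```python
# Convert average reaction times (ms) into "input samples per second" (IPS),
# then compare with frame intervals at a few common frame rates.
reaction_times_ms = {"simple click test": 250, "aim/shoot test": 600}

for test, ms in reaction_times_ms.items():
    print(f"{test}: {ms} ms -> ~{1000 / ms:.1f} IPS")

for fps in (60, 120, 240):
    print(f"{fps} FPS -> a new frame every {1000 / fps:.1f} ms")
```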

So, saying that IPS needs to be faster with higher FPS is ridiculous. What you're saying is that latency needs to be lower. You equate this to performance, OK. But it can't be characterized as "true" or "fake." Responsiveness is a sweet spot.

Quibbles aside.

My take on your piece is that it's a personal exploration of FG use, which is good. I think more people should try it and make up their own minds, instead of taking their cue from talking heads. But your piece has fairly limited relevance for readers, as you're taking top-tier GPUs, purposely slowing them down with full RT, then applying FG. Better would be testing the median case, e.g. a 4060/7600 using DLSS/FSR Balanced (no RT), then applying FG. This would be the widest use of FG, and the most relevant to users.

IMO, there's no need to test different GPUs and compare them. Keep the focus on FG and its utility. A single GPU should suffice. KISS.

Secondly, in considering what "feels better," you need to consider acclimatization. Simply put, we feel better about a thing the more we're used to it. I watch YT vids on 1.5x or 1.75x; watching vids now on regular speed feels like slo-mo.

In the context of FG, people are used to FPS and IPS matching. FG's FPS/IPS mismatch will feel weird (read: worse). It's understandable that some people will bail at the first try and proclaim it bad. But outside of these forums, FG will follow upscaling in getting wider adoption, at some point becoming a default setting. People will acclimate to it, and will prefer it over going without.

To see this (over time), make a note of people's sentiment toward FG now, and do it again at the same time next year.
 
>Aside from the "better feel" (aesthetic) aspect, FG has a tangible benefit beyond just "smoothing," e.g. better target tracking for shooters.
Agreed. I'm certain it benefits your eyes' ability to track objects zipping across the screen.

>Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.
Who said this, now?

The way to look at latency is that it stacks. Your reaction time isn't relative to the event, but relative to when the photons hit your retina. If an event happens at some time t, your reaction time is R, and the rendering pipeline + monitor add another term L, then your reaction actually occurs at the time t + L + R. Reduce L and you react sooner. Not much, but it might be enough to make the difference when your skill is at its limit.
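As a minimal sketch of that stacking (the latency numbers here are illustrative assumptions, not measurements):

```python
# Reaction to an event at time t effectively lands at t + L + R:
# L = rendering pipeline + display latency, R = human reaction time.
R = 250.0  # assumed human reaction time, ms
scenarios = {
    "lower system latency (assumed)": 35.0,   # illustrative L, ms
    "higher system latency (assumed)": 50.0,  # illustrative L, ms
}

for label, L in scenarios.items():
    print(f"{label}: reaction lands at t + {L + R:.0f} ms")
```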

>Secondly, in considering what "feels better," you need to consider acclimatization. Simply put, we feel better about a thing the more we're used to it. I watch YT vids on 1.5x or 1.75x; watching vids now on regular speed feels like slo-mo.
I disagree with what I think your gist is, here. I don't want Jarred to get used to playing at high latency before rendering a verdict. I want his initial and most visceral impression. That's actually the best take, here.

People don't care so much "can you get used to it?". They want to know purely how good or bad it is. People can get used to a lot of awful stuff!

>In the context of FG, people are used to FPS and IPS matching. FG's FPS/IPS mismatch will feel weird (read: worse). It's understandable that some people will bail at the first try and proclaim it bad. But outside of these forums, FG will follow upscaling in getting wider adoption, at some point becoming a default setting. People will acclimate to it, and will prefer it over going without.
>
>To see this (over time), make a note of people's sentiment toward FG now, and do it again at the same time next year.
I disagree that anything conferring so much latency will ever become the default and widely accepted. It's not only a "feel" thing, but also the quantitative impact latency has (i.e. t + L + R). Even Reflex 2 probably can't save it, since it still doesn't help with events that are truly surprising (i.e. enemy pops out from behind obstacle).

Also, hi. Even when I don't agree with them, your posts are always thoughtful and I appreciate that.
 
Great piece, it's good to see a reviewer distance himself from Nvidia's guidelines on how to review their tech. I bought a 5080 because I desperately needed a GPU (coming from a 1070) and it was the only option available in the price range I was willing to spend (and it'll remain so, as AMD won't compete there). But I couldn't care less about MFG. It's nice to have, but Nvidia is trying to pretend with their bar charts that a 5070 = 4090 or that a 5090 = 2x 4090, which is egregious and false information. And so many reviewers don't question that at all. MFG, as you said, will take a good experience and turn it into a better one, granted you have the monitor for it. That's a very limited use case and, importantly, it will not help aging GPUs. DLSS is designed to sell cards, not to provide the benefits to gamers that Nvidia pretends it does.
 
>Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.
>
>https://humanbenchmark.com/tests/reactiontime
>
>I average ~250ms, or about 4 IPS.
>
>https://humanbenchmark.com/tests/aim
>
>This aim/shoot test is more relevant, as you have to move the mouse and click. I average ~600ms, or less than 2 IPS.
I don't think anyone is saying that input latency needs to improve in step with the perceived frame rate increase for there to be a benefit.
>My second disagreement is with your use of "true boost to performance." Reaching for "true perf" is like reaching for "fake frames" as an argument, i.e. if I like something, it's "true"; if I dislike it, it's "fake." It's a mental shortcut. Don't reach for shortcuts.
>
>I take issue with this because "performance"--more precisely, responsiveness--can't be measured in "input samples per second" (IPS for short). Responsiveness is just latency.
>
>...
>
>Secondly, in considering what "feels better," you need to consider acclimatization. Simply put, we feel better about a thing the more we're used to it. I watch YT vids on 1.5x or 1.75x; watching vids now on regular speed feels like slo-mo.
Anything that negatively impacts input latency is going to make the experience worse whether or not someone notices it though.

Here are a couple of personal examples:
I'd used 60Hz displays for at least 15 years straight (I had a 75Hz CRT prior, but don't remember when I got rid of it), got completely used to them, and it didn't seem problematic. After getting a 144Hz VA panel it was like night and day, because I still had a 60Hz screen as a secondary. What I didn't notice right away, and not until I started playing some faster-paced games, was the dramatic difference in input latency due to not only the higher refresh rate but also the faster panel. It's been around four years now and it just seems normal to me, and when using a 60Hz panel I can tell right away when doing anything with movement.

I tried the tests you linked several times for fun on both of my screens (CU34G2X and U2724D) running at their native refresh rates, and the 144Hz display consistently scored 5-7% better than the 120Hz one. Then I set both to 120Hz and the results were slightly closer, at 4-5%, but the faster panel still won out because the panel itself has lower latency. While neither display nor refresh rate felt any different to me during the test, the end results certainly were, and while it's not a significant difference, if I were playing something that required fast reactions I'd be better off with the faster panel.
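Part of that gap is just refresh granularity: on average, a finished frame waits about half a refresh interval before it can be shown. A rough sketch of that one factor (it deliberately ignores panel response time, VRR, and scanout):

```python
# Average display-side delay from refresh granularity alone: a frame that is
# ready at a random moment waits roughly half a refresh interval to be shown.
for hz in (60, 120, 144):
    interval_ms = 1000 / hz
    print(f"{hz} Hz: {interval_ms:.1f} ms per refresh, ~{interval_ms / 2:.1f} ms average wait")
```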

This isn't to say that frame generation is a bad technology, because it has plenty to offer, just that it's not a performance increase at all.
>I disagree with this statement, twice. Characterizing FG as "motion smoothing," while correct at face value, glosses over the point that FG provides more (temporal) visual content. Aside from the "better feel" (aesthetic) aspect, FG has a tangible benefit beyond just "smoothing," e.g. better target tracking for shooters.
The benefit from frame generation here isn't going to offset the input latency cost of running it in the first place. The primary advantage of a high frame rate is how soon the enemy becomes visible and how quickly you can react, not how smoothly they move across the screen.
 
@JarredWaltonGPU I think there are primarily two things to look at for AI rendering solutions atm:

Latency: This is easy, as you can measure from when you take an action to when the pixels change on screen. Higher latency means the tech isn't suitable for quick action. It is straightforward to measure.
Image quality: This is critical but harder to measure. The eye is not a good tool for this, as it is subjective. We can use computer graphics metrics that score the similarity between two images or videos, so we can compare native rendering vs. neural rendering. All the ghosting, shimmering, etc. will degrade the image and reduce the score. I believe super resolution AI uses these metrics in its loss function.
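For instance, a rough way to score that similarity offline is with standard full-reference metrics like PSNR and SSIM. A minimal sketch with scikit-image, assuming two hypothetical captures of the same scene saved as native.png and neural.png:

```python
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical captures of the same frame: one native render, one neural render.
native = imread("native.png")
neural = imread("neural.png")

# PSNR: higher is better. SSIM: 1.0 means structurally identical.
# Note: these are per-frame scores; temporal artifacts like ghosting and
# shimmering need the scores tracked across a sequence, or a video metric.
psnr = peak_signal_noise_ratio(native, neural)
ssim = structural_similarity(native, neural, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```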

I think a single card is good enough for running all these tests. We may have a way to automate all these tests with computer-use AI.
 
This sucks. Possibly the worst generation ever. You get roughly 25% more performance for 25% more power and 25% more money. So it's the equivalent of an overclocked 4090. Aaaaand all this after two whole years. Zero progress, zero efficiency increase.
 
>The way to look at latency is that it stacks.

For competitive gaming, sure, I'd defer to the consensus that any added latency is bad. So anything that adds latency (cue FG) is a bad idea...generally speaking.

For non-competitive (single-player) gaming, where perf isn't necessarily a balls-to-the-wall consideration, the notion of adding lag measured in hundredths of a second to a gamer reaction that's an order of magnitude larger, at tenths of a second, doesn't sound like that big a deal. Taking all the latencies into account, the added lag from FG would likely be in the low single digits as a percentage. We can quibble over the numbers, but clearly there exists a notion of "acceptable added lag," if only because there is already lag without FG.
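To put rough numbers on that (all three figures below are illustrative assumptions, not measurements):

```python
# Share of the whole event-to-reaction chain contributed by framegen's added lag.
reaction_ms = 250.0   # assumed human reaction time
system_ms = 40.0      # assumed render + display latency without FG
fg_added_ms = 10.0    # assumed extra latency from enabling frame generation

total_with_fg = reaction_ms + system_ms + fg_added_ms
print(f"FG adds {fg_added_ms / total_with_fg * 100:.1f}% to the total chain")
```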

My response to "FG doesn't enhance gaming performance like Nvidia claimed" is: SO WHAT? Gaming is not all about FPS performance or latency. If it were, RT would never exist. Aesthetics also matter, if not more.

FG is a bit of an odd duck, in that the extra frames add to the visual aesthetic and can also be viewed as a perf enhancer, by virtue of the increased framerate. Nvidia's claim is technically correct, because up to now FPS has been the main metric of gaming perf. In hindsight, that's a myopic take. Nvidia is just exploiting that shortcoming.

Anyway, I admit all this wrangling about what FG is or isn't is fairly pointless. We can't argue about what "feels better." We just need to try it and decide. IMO, the best way to view FG is that it's one more tool in the visual toolkit. People can use it, or not.

>I want his initial and most visceral impression. That's actually the best take, here.

Few would agree that judging things on first impression is a good idea.

>People don't care so much "can you get used to it?". They want to know purely how good or bad it is. People can get used to a lot of awful stuff!

IME, what's good-vs-bad, or fast-vs-slow, can be more of a state of mind. Not only can your perception change with acclimation and familiarization, the degree of change can be huge, far dwarfing the paltry increase in latency (in this instance).

Example: Most of us can recite the alphabet pretty much by reflex. Most of us also can't do it backwards, even though we know the sequence. We need practice to establish the (mental) muscle memory. Typing is another example. There's a vast speed difference between hunt-and-peck and touch-typing.

These are stark examples. I'm not saying that one needs to climb a learning curve to reap the benefits of a feature, but the first time trying anything new is always the hardest, most awkward, and most frustrating. It's a poor basis for judgment.

>I disagree that anything conferring so much latency will ever become the default and widely accepted.

"So much latency" is a fairly squishy objection.

What's evident is that AMD & Intel will follow Nvidia's lead, and FG will become mainstream as did upscaling. It will continue to improve, and the lag will become "acceptable enough."

Per Parkinson's Law, which says "work expands to fill the time available for its completion," as FG becomes more popular, games will rely on it more as a crutch, and it will become the default.
 
>@JarredWaltonGPU I think there are primarily two things to look at for AI rendering solutions atm:
>
>Latency: This is easy, as you can measure from when you take an action to when the pixels change on screen. Higher latency means the tech isn't suitable for quick action. It is straightforward to measure.
>Image quality: This is critical but harder to measure. The eye is not a good tool for this, as it is subjective. We can use computer graphics metrics that score the similarity between two images or videos, so we can compare native rendering vs. neural rendering. All the ghosting, shimmering, etc. will degrade the image and reduce the score. I believe super resolution AI uses these metrics in its loss function.
>
>I think a single card is good enough for running all these tests. We may have a way to automate all these tests with computer-use AI.
You can measure latency, image quality, and framerate. Those are distinct values. But what they mean isn't always clear cut. All three blend together and the whole isn't just a straight sum of the parts. It ends up being very subjective, which is what I hopefully managed to convey with this article.

Lower latency is better
Higher FPS is better
Higher image quality is better

But if I slightly increase (make worse) the latency, while improving the FPS, with a slight reduction in image fidelity... is that better, worse, or the same? It will ultimately depend on individual preferences as well as the degrees to which those things are increased or decreased, and there's no simple answer or metric that will definitively account for all three.
 
>Yes, it's possible. Will I do it soon? Meh. I'm 99% sure PCIe 4.0 won't matter. PCIe 3.0 (same bandwidth as PCIe 4.0 x8) probably won't matter either, because all these frame generation things happen on the GPU and so PCIe isn't a factor.
This is for DLSS4. Many say that the image quality is so good that you can switch to the Performance profile instead of the Quality profile. But for many motherboards running the GPU at PCIe 4.0 x8 - and that's three generations of Intel 600/700/800 series chipsets, plus AMD 800 series when an NVMe SSD is installed in the CPU-attached slot - this will be a big problem, and you may not get much benefit.
And you don't need to run the game with path tracing for this. You can see for yourself the lack of scaling in some cases at low resolution, while the GPU is still loaded at 98-100% and the CPU is not overloaded.
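For reference, the "same bandwidth" point above is simple arithmetic (the per-lane rates are the theoretical figures after 128b/130b encoding overhead):

```python
# Theoretical per-lane throughput (GB/s) after 128b/130b encoding overhead.
per_lane_gbps = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, lanes in (("PCIe 3.0", 16), ("PCIe 4.0", 8), ("PCIe 4.0", 16)):
    print(f"{gen} x{lanes}: ~{per_lane_gbps[gen] * lanes:.1f} GB/s")
```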
 
>Saying IPS has to keep pace with FPS is ludicrous, because gamers can only react a few times per second. Here's a quick test of reaction time.
>
>I average ~250ms, or about 4 IPS.
If you've ever played an electric guitar through amp and effects emulation on your PC, you will know 20 ms is a lifetime! Can't play guitar? Here's an even more challenging test for you: speak into a mic connected to your PC and listen to your own voice live (at the same time) through headphones. Even most audio interfaces using ASIO will be too slow, and you will definitely recognize THE LAG, every millisecond of it.
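That analogy maps cleanly onto buffer math. A sketch (the buffer sizes and sample rate are typical values, and the converter/driver overhead is a rough assumption):

```python
# Round-trip monitoring latency ~ input buffer + output buffer + converter overhead.
sample_rate = 48_000         # Hz, a typical audio interface setting
converter_overhead_ms = 2.0  # rough assumption for ADC/DAC + driver overhead

for buffer_samples in (64, 128, 256, 512):
    one_way_ms = buffer_samples / sample_rate * 1000
    print(f"{buffer_samples}-sample buffer: ~{2 * one_way_ms + converter_overhead_ms:.1f} ms round trip")
```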