Nvidia DLSS4, MFG, and full ray tracing tested on RTX 5090 and RTX 5080

The time crunch for the launch of the RTX 5090 and 5080 cards meant we weren't able to fully investigate MFG, DLSS4, and full RT the way we'd like. We'll be updating those sections of the reviews, but here's the deeper dive into how the new technologies impact the gaming experience. It's not all sunshine and roses and doubled performance.

Nvidia DLSS4, MFG, and full ray tracing tested on RTX 5090 and RTX 5080: Read more
 
@JarredWaltonGPU
Is it possible to do the same tests but with a PCIe 4.0 x8 connection? Maybe as part of another PCIe bandwidth impact test.
Or maybe you could run some short tests privately and tell us something, like PCIe 5.0 x16 vs. PCIe 4.0 x8 with DLSS 4 Ultra Performance/Performance at 1080p/1440p.
 
What we really need for frame generation is a way to link it to new user input. Something like Reflex 2 with its warping and in-painting combined with frame projection — the prediction of future frames. That's more complex, but if that can be done, games could actually feel more responsive rather than merely looking smoother. And that's probably where we're headed with DLSS 5 in the future. When and if that can be made to work effectively remains to be seen.
Doing exactly this has been the state of play for VR for half a decade now; it blows my mind that it hasn't been adopted for flatscreen gaming. It's certainly not a technical issue:
https://www.youtube.com/watch?v=VvFyOFacljg
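For the curious, here's a minimal sketch of the idea, assuming VR-style late reprojection (illustrative only, not Nvidia's actual Reflex 2 frame warp, and every name here is made up): render normally, then right before scanout, sample the newest input and warp the finished frame by the camera delta.

```python
import numpy as np

def late_reproject(frame: np.ndarray, rendered_yaw_deg: float,
                   newest_yaw_deg: float, h_fov_deg: float = 90.0) -> np.ndarray:
    """Crude 'timewarp' sketch: shift the last rendered frame horizontally to
    match the camera yaw sampled just before scanout. Real systems (VR async
    reprojection, Reflex 2 frame warp) do a per-pixel warp and in-paint the
    disoccluded edges; this only models the concept."""
    h, w = frame.shape[:2]
    px_per_deg = w / h_fov_deg                  # rough pixels per degree of yaw
    shift = int(round((newest_yaw_deg - rendered_yaw_deg) * px_per_deg))
    warped = np.roll(frame, -shift, axis=1)     # translate image by the yaw delta
    if shift > 0:
        warped[:, -shift:] = 0                  # revealed edge has no data here,
    elif shift < 0:                             # which is what in-painting fills
        warped[:, :-shift] = 0
    return warped
```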
 
@JarredWaltonGPU
Is it possible to do the same tests but with a PCIe 4.0 x8 connection? Maybe as part of another PCIe bandwidth impact test.
Or maybe you could run some short tests privately and tell us something, like PCIe 5.0 x16 vs. PCIe 4.0 x8 with DLSS 4 Ultra Performance/Performance at 1080p/1440p.
Yes, it's possible. Will I do it soon? Meh. I'm 99% sure PCIe 4.0 won't matter, and PCIe 3.0 (the same bandwidth as PCIe 4.0 x8) probably won't matter either. All of the frame generation work happens on the GPU, so PCIe bandwidth isn't a factor.
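For reference, the back-of-the-envelope bandwidth math (theoretical one-way rates, ignoring everything except line encoding):

```python
# Theoretical one-way PCIe bandwidth: transfer rate x encoding efficiency x lanes.
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding; 4.0 and 5.0 double each step.
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * (128 / 130) / 8 * lanes  # GT/s -> GB/s per lane, times lanes

print(f"PCIe 3.0 x16: {pcie_gb_per_s(8, 16):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x8:  {pcie_gb_per_s(16, 8):.1f} GB/s")   # ~15.8 GB/s -- identical
print(f"PCIe 4.0 x16: {pcie_gb_per_s(16, 16):.1f} GB/s")  # ~31.5 GB/s
print(f"PCIe 5.0 x16: {pcie_gb_per_s(32, 16):.1f} GB/s")  # ~63.0 GB/s
```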
 
Brilliant piece, thank you for the in-depth review; this must have taken a great deal of time and effort.
MFG sounds like it's in its infancy and has a lot of room to improve if it's to be useful; that is, if MFG turns out to be a real innovation and not a short-lived experiment.
 
While the review is professional, what's the point if the product is unavailable and likely never will be? Nvidia deliberately restricted production and supply, with no preorder queue, so what little stock appears will always be bought up by bots and scalpers.
 
Thanks for the down-to-earth view on FrameGen, Jarred. A very nice read for sure.

On a related note: I've been toying around with the Monster Hunter: Wilds benchmark, which is free to download and offers RT. I've been testing it on my laptop, since I disassembled my PC for an upgrade, and I have to say I've noticed some very interesting rendering differences between RT fully off and RT low/medium. My laptop gets ~45 FPS on RT low at 1440p with FSR 3.1.3 on Quality. The visual difference between no-RT native rendering and FSR 3.1 upscaling with RT enabled is very noticeable, and contrary to my own expectations, it favoured the latter, which looks much better. The textures and shadows look more stable, and there's less weirdness in scene composition if you look around for odd shadows, reflections, or overlapping effects.

If you have the time to give it a try, could you please check it out, Jarred? Maybe it's my laptop's drivers causing the difference, but I found it interesting, and a few other friends have also noticed it on both AMD and Nvidia hardware.

Regards.
 
@JarredWaltonGPU
Is it possible to do the same tests but with a PCIe 4.0 x8 connection? Maybe as part of another PCIe bandwidth impact test.
Or maybe you could run some short tests privately and tell us something, like PCIe 5.0 x16 vs. PCIe 4.0 x8 with DLSS 4 Ultra Performance/Performance at 1080p/1440p.
Since Tom's can't be bothered to do it, TechSpot already has.

 
Why isn't MFG 2X tested on the 4090?
It's what "FG" means — it's not Multi Frame Generation 2X, but rather just "the only framegen the 40-series GPUs can do." I've asked Nvidia for clarification on whether games like CP77 are using the same algorithm on 40-series and 50-series or not, but I haven't received a response. I assume it's the same, other than the lack of flip metering hardware in RTX 40-series, but you know what they say about assuming...
 
While the review is professional, what's the point if the product is unavailable and likely never will be? Nvidia deliberately restricted production and supply, with no preorder queue, so what little stock appears will always be bought up by bots and scalpers.
Huh!? It puts things in perspective: if you have an RTX 4xxx GPU, you're not missing out on something you'd need to sell your left kidney for.
 
I just can't seem to care about frame gen/MFG. If they found a way to add user input to the generated frames, I think that would go a long way toward resolving a lot of my issues with the technology. Until then... no thanks, Nvidia.

Also, their marketing is killing off any trust I have in them as a company, MFG included.
 
Finally! Thanks for the DLSS4 review I've been waiting for.
So now I can say this generation is more of a sidegrade than a real upgrade over the last one, apart from the 5090, which 99% of us cannot or will not buy.
Now let's see what RDNA4 brings to the table next month. In fact, I'm even more curious and excited to see the B770 (if there is one).
 
Appreciate this in-depth look at DLSS 4, and I look forward to hopefully seeing some performance/image quality testing with the transformer model across generations. The transformer model certainly seems better, so it'll be interesting to see if the quality preset can be dropped a level to claw back the performance loss while maintaining the visual upgrade.

To me it seems like there are largely two issues with FG as a technology right now:
  • The added latency, which, while it scales linearly, requires a higher native frame rate in many applications.
  • The way dynamic elements like lighting are handled, which can be extremely jarring.
The first one isn't particularly important in practice, but it has been extremely abused in marketing. Generally speaking, as long as you can get a high enough base frame rate to be comfortable input-latency-wise, it should be largely negated. (A rough worked example follows below.) The second is a much bigger problem, one that will require changes that may not all be possible from the video card/software side and may require games themselves to be approached differently.
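To put rough numbers on the first point, a simplified model (the generation overhead figure is an assumption, not a measurement): interpolation has to hold the newest rendered frame back until the in-between frames are shown, so the penalty is roughly one native frame time plus generation cost, and it shrinks as the base frame rate climbs.

```python
def fg_latency_penalty_ms(base_fps: float, gen_overhead_ms: float = 1.0) -> float:
    """Simplified model: interpolation delays the newest rendered frame by about
    one native frame time, plus the cost of generating the in-between frames."""
    return 1000 / base_fps + gen_overhead_ms

for fps in (30, 60, 120):
    print(f"{fps:>3} fps base: ~{fg_latency_penalty_ms(fps):.1f} ms added")
#  30 fps base: ~34.3 ms added  <- why a high base frame rate matters
#  60 fps base: ~17.7 ms added
# 120 fps base: ~9.3 ms added
```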

Where I see frame generation being a big deal is with higher refresh rate displays. Personally, I'm fine with 60-120 fps for gaming, and I'm considering a 240Hz ultrawide display, so utilizing something like frame generation would make sense in a lot of titles. Seeing as there are already 240Hz 4K displays, and refresh rates are only likely to climb higher, the technology itself is certainly here to stay.
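For what it's worth, the arithmetic for feeding such a display is simple (assuming evenly paced generated frames):

```python
def base_fps_needed(display_hz: int, mfg_factor: int) -> float:
    """Native frame rate required to saturate a display at a given MFG factor."""
    return display_hz / mfg_factor

for factor in (2, 3, 4):
    print(f"240 Hz with {factor}X frame gen needs {base_fps_needed(240, factor):.0f} fps native")
# 2X -> 120 fps, 3X -> 80 fps, 4X -> 60 fps native
```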

I think it was GN who tested the old FG vs. the new FG in a couple of the titles that required a driver override for DLSS 4 but already had DLSS 3.0 support. It seemed like the new model was faster, and it was also running on the 40-series (unless there was something else at work).

@JarredWaltonGPU Just to follow up on the FrameView and whatnot from the other thread:

I don't think anything else can interface with PCAT. I suspect trying to get away from software and vendor solutions is what has driven the expansion of Elmor's testing equipment.

MsBetweenDisplayChange has been a part of PresentMon for a long time; it's just that Nvidia made unspecified changes to it and implied that only FrameView works correctly. It seems like PresentMon shifted to CPU-based reference points for its metrics back when 2.0 was released.
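For anyone who wants to sanity-check FrameView against stock PresentMon, the displayed frame times are right there in the capture CSV. A minimal sketch (column names as they appear in PresentMon 1.x captures; 2.x renamed several metrics, and the file name is a placeholder):

```python
import csv
import statistics

# 'MsBetweenDisplayChange' is the time between flips actually reaching the screen,
# vs. 'MsBetweenPresents', which is measured at the Present() call on the CPU side.
with open("presentmon_capture.csv", newline="") as f:
    rows = [r for r in csv.DictReader(f) if r.get("MsBetweenDisplayChange")]

displayed = [float(r["MsBetweenDisplayChange"]) for r in rows]
print(f"avg displayed frame time: {statistics.mean(displayed):.2f} ms")
print(f"p99 displayed frame time: {statistics.quantiles(displayed, n=100)[98]:.2f} ms")
```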
 
Thanks for the testing + analysis, @JarredWaltonGPU !

The article said:
I've used the phrase "frame smoothing" repeatedly throughout this analysis, and for good reason. The AI-powered "generation" of "new" frames is really just a more sophisticated take on interpolation and frame smoothing, something we've seen in TVs for over a decade with varying levels of quality.
The problem I see with the terms "frame smoothing" and "interpolation" is that it's unclear you're talking strictly about the temporal domain. Image scaling (which DLSS also does) involves trying to interpolate individual frames in a way that's not too jaggy or artifact-laden (i.e. what some might refer to as "smooth").

I prefer to say "temporal interpolation" or "motion smoothing" when I'm talking about generating new samples in the time domain (i.e. frames).
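A toy illustration of that distinction (naive blends for clarity; DLSS actually uses motion vectors and learned models rather than simple averaging):

```python
import numpy as np

def spatial_interpolate(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscaling: interpolating new *pixels* within one frame (nearest-neighbor
    here; DLSS upscaling is far more sophisticated)."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def temporal_interpolate(prev: np.ndarray, nxt: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Motion smoothing: interpolating a new *frame* between two points in time.
    A naive blend ghosts on anything that moves, which is why real frame
    generation warps along motion vectors instead."""
    blended = (1 - t) * prev.astype(np.float32) + t * nxt.astype(np.float32)
    return blended.astype(prev.dtype)
```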

The article said:
Putting that technology into real-time games ends up being less beneficial than it is for passively viewing content like TV broadcasts and movies.
Yes, not least because non-interactive video feeds can suffer a frame or two of latency with no real impact on the user experience (so long as HDMI's lip-sync features are working as designed). The further into the future these motion interpolation techniques can look, the better the result usually is.
 
To me it seems like there are largely two issues with FG as a technology right now:
  • The added latency, which, while it scales linearly, requires a higher native frame rate in many applications.
  • ...
The first one isn't particularly important in practice, but it has been extremely abused in marketing. Generally speaking, as long as you can get a high enough base frame rate to be comfortable input-latency-wise, it should be largely negated.
If you enable Reflex 2, on top of MFG, how much does that negate the effect of the additional latency? Granted, Reflex 2 can never predict what enemies/opponents are going to do, but it should at least help in preserving the feeling of zero latency.
 
If you enable Reflex 2, on top of MFG, how much does that negate the effect of the additional latency? Granted, Reflex 2 can never predict what enemies/opponents are going to do, but it should at least help in preserving the feeling of zero latency.
I need to look into this more, because I’m not even sure what games actually support Reflex 2 right now. I know the demos that I saw at CES were mostly lighter esports games, which don’t need frame generation in the first place.

I’d be very curious to see things like Cyberpunk 2077 and Alan Wake 2 using Reflex 2, but I don’t even know if that’s in the works. Feels like a chicken and egg scenario.

Edit: Valorant and The Finals are apparently the only announced games so far with Reflex 2.

 
If you enable Reflex 2, on top of MFG, how much does that negate the effect of the additional latency? Granted, Reflex 2 can never predict what enemies/opponents are going to do, but it should at least help in preserving the feeling of zero latency.
I'm not sure that frame warp can be used with frame generation, and nothing I've seen gives any indication one way or the other. I believe the low-latency portion is a requirement when implementing Nvidia's frame generation.