What Is Nvidia DLSS? A Basic Definition

Out of all the post-processing techniques out there, I think this one is the most over-engineered yet, haha.

I'm not saying that as a bad thing, either. For all the crap I give DLSS, I do find it quite cool, technologically speaking, even if I don't really like all the buzzword marketing spiel around it!

Still, I'm in the camp of "just lower the resolution and push the sliders to the right". Or just do subsampling (if the engine allows it) with regular MSAA. Plenty of lesser-known games have engines that implement it, and it improves things quite decently. I believe there are also some interesting techniques used by VR games now that I can't quite remember, but they're similar to those.

Cheers!
 
Reactions: Phaaze88

escksu

Respectable
Aug 8, 2019
Out of all the post-processing techniques out there, I think this one is the most over-engineered yet, haha.

I'm not saying that as a bad thing, either. For all the crap I give DLSS, I do find it quite cool, technologically speaking, even if I don't really like all the buzzword marketing spiel around it!

Still, I'm in the camp of "just lower the resolution and push the sliders to the right". Or just do subsampling (if the engine allows it) with regular MSAA. Plenty of lesser-known games have engines that implement it, and it improves things quite decently. I believe there are also some interesting techniques used by VR games now that I can't quite remember, but they're similar to those.

Cheers!
Running 1080p with MSAA will not give you the same image quality as 4K with DLSS, and the performance will still be worse, especially with ray tracing.
 
Out of all the post-processing techniques out there, I think this one is the most over-engineered yet, haha.

I'm not saying that as a bad thing, either. For all the crap I give DLSS, I do find it quite cool, technologically speaking, even if I don't really like all the buzzword marketing spiel around it!
The thing that makes DLSS stand out is that it reconstructs what the image would be at the desired output resolution. This is why, in comparison screenshots, DLSS rendering at 1080p and upscaling to, say, 4K can look as good as native 4K. Non-AI-based upscaling algorithms can't recreate the same level of detail, especially when they're working with a quarter of the pixels. I think Digital Foundry's DLSS 2.0 video shows how much DLSS can reconstruct vs. simple upscaling.
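To make that concrete, here's a toy numpy sketch (my own illustration, not DLSS's actual pipeline) of why a naive upscaler can't recover fine detail: it only ever sees a quarter of the pixels, so a 1-pixel checkerboard is simply gone by the time it upscales.

```python
import numpy as np

# Toy sketch (not DLSS itself): make a 1-pixel checkerboard at a
# "native" 8x8 resolution, render the same scene at half resolution
# (every other pixel in each axis), then upscale back with
# nearest-neighbour. The upscaler has only 25% of the pixels and
# cannot recover the fine pattern.
native = np.indices((8, 8)).sum(axis=0) % 2                    # fine detail
low_res = native[::2, ::2]                                     # half-res render
upscaled = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1) # naive upscale

print(np.abs(native - upscaled).mean())  # 0.5 -> half the pixels are wrong
```

The half-res render samples only even coordinates, where the checkerboard is always 0, so the "upscaled" image is flat grey-less-than-grey: all the detail was lost at sampling time, which is the information DLSS tries to win back from previous frames instead.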

Still, I'm in the camp of "just lower the resolution and push the sliders to the right". Or just do subsampling (if the engine allows it) with regular MSAA. Plenty of lesser-known games have engines that implement it, and it improves things quite decently. I believe there are also some interesting techniques used by VR games now that I can't quite remember, but they're similar to those.
The downside is that a huge number of games in the past 10-15 years used deferred shading, which doesn't work well with MSAA. I'm pretty sure even forcing it in the driver control panel doesn't actually do anything if the game uses deferred shading. Hence the push for things like FXAA and MLAA, since they work on the final output, on top of being cheaper than MSAA and "just as good". Also, as mentioned, MSAA is still a pretty major performance hit, so at some point you may as well have rendered at the native resolution.

For VR you might be thinking of a combination of foveated rendering and variable rate shading (VRS). Foveated rendering doesn't really work outside of VR and VRS has been implemented in some games, but VRS can be hit or miss depending on which tier is used and it may not even provide that much of a performance benefit.
 
Reactions: JarredWaltonGPU

coolitic

Distinguished
May 10, 2012
Running 1080p with MSAA will not give you the same image quality as 4K with DLSS, and the performance will still be worse, especially with ray tracing.
DLSS looks worse than no AA, imo. No matter how you slice it, you just can't reliably improve image quality without adding more samples, as with MSAA and SSAA. Despite being meant to improve on it, DLSS must inevitably suffer the same shortfalls as TXAA, as a simple fact of being engineered that way.
 
DLSS looks worse than no AA, imo. No matter how you slice it, you just can't reliably improve image quality without adding more samples, as with MSAA and SSAA. Despite being meant to improve on it, DLSS must inevitably suffer the same shortfalls as TXAA, as a simple fact of being engineered that way.
If we're talking about a single image in time, then sure, DLSS won't get the same results reliably across every possible scenario. But I'd argue that instantaneous image quality isn't really as important as image quality over time. Human vision revolves around taking as many samples per second as it can; the kicker is that we can see in "high resolution" because our eyes constantly make tiny movements, which gives us super-resolution imaging.

I'd argue NVIDIA's aim with DLSS is to mimic this, but using AI to better reconstruct the details rather than a fixed algorithm that will, more likely than not, fail to reconstruct the right thing.

EDIT: Also, I think developers misused what TXAA was supposed to solve. It isn't so much a spatial anti-aliasing method, so it's not exclusively about removing jaggies. It's meant to combat shimmering, which happens because the thing covering a pixel's sample point keeps moving in and out of it. If the image is perfectly still, TXAA does absolutely nothing.
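That "many samples over time" idea can be sketched in a few lines (my own toy illustration; real temporal upscalers also need motion vectors, and DLSS's network is far more involved). A static scene is sampled at half resolution, but the sample grid shifts by one source pixel each frame, so two frames together contain every pixel:

```python
import numpy as np

# Toy super-resolution from jittered samples (the principle behind
# temporal upscalers, not Nvidia's implementation): each half-res
# frame sees a different half of a static scanline, and interleaving
# two consecutive frames recovers the full-resolution signal.
scene = np.arange(8.0)            # stand-in for an 8-pixel scanline
frame_a = scene[0::2]             # even samples: 0, 2, 4, 6
frame_b = scene[1::2]             # odd samples (jittered grid): 1, 3, 5, 7

reconstructed = np.empty(8)
reconstructed[0::2] = frame_a
reconstructed[1::2] = frame_b

print(np.array_equal(reconstructed, scene))  # True for a static scene
```

Motion breaks the static-scene assumption here, which is exactly where the reprojection heuristics (or, in DLSS's case, the trained network) have to earn their keep.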
 
I first started using DLSS 1.0 when Project Cars came out back in 2015. It was a heavy hitter on GPU frame rates with the ultra quality sliders maxed out, especially at 1440p and higher resolutions. Combined with some in-game AA settings, I ran it through the Nvidia control app and found a sweet spot between better graphics and high frame rates. To this day I don't understand why it was, and still is, admonished by many.
 

coolitic

Distinguished
May 10, 2012
If we're talking about a single image in time, then sure, DLSS won't get the same results reliably across every possible scenario. But I'd argue that instantaneous image quality isn't really as important as image quality over time. Human vision revolves around taking as many samples per second as it can; the kicker is that we can see in "high resolution" because our eyes constantly make tiny movements, which gives us super-resolution imaging.

I'd argue NVIDIA's aim with DLSS is to mimic this, but using AI to better reconstruct the details rather than a fixed algorithm that will, more likely than not, fail to reconstruct the right thing.

EDIT: Also, I think developers misused what TXAA was supposed to solve. It isn't so much a spatial anti-aliasing method, so it's not exclusively about removing jaggies. It's meant to combat shimmering, which happens because the thing covering a pixel's sample point keeps moving in and out of it. If the image is perfectly still, TXAA does absolutely nothing.
No, I was specifically talking about over time. TXAA may "solve" shimmering, but it adds an unfixable blur that DLSS tries to reduce but ultimately can never fix, as it's inherent to the technology. So, in other words, TXAA has the same problems and supposed benefits as non-MSAA/SSAA techniques, not just in the spatial dimension but also in the temporal one.
 

husker

Distinguished
Oct 2, 2009
Here is my question: in a racing game, my opponent is so far behind me (okay, usually in front of me) that he is just 2 pixels in size when viewed in 4K. Now I drop to 1080p and the car no longer appears at all, because lowering the resolution means losing information about the visual environment. So now we turn on DLSS 2.0: is that "lost" information somehow restored, so those 2 pixels somehow appear on my screen? The article seems to suggest that is possible, but I'm not so sure. This may seem like a silly question, because who cares about a 2-pixel detail, but it's just an example for clarity.
 
Here is my question: in a racing game, my opponent is so far behind me (okay, usually in front of me) that he is just 2 pixels in size when viewed in 4K. Now I drop to 1080p and the car no longer appears at all, because lowering the resolution means losing information about the visual environment. So now we turn on DLSS 2.0: is that "lost" information somehow restored, so those 2 pixels somehow appear on my screen? The article seems to suggest that is possible, but I'm not so sure. This may seem like a silly question, because who cares about a 2-pixel detail, but it's just an example for clarity.
At the end of the day, DLSS is just upsampling, so it works from the lower resolution, and then the algorithm that makes it tick runs "inference" (the deep-learning term for applying a trained model) across multiple frames to try to reconstruct a new, bigger frame.

So, using your example, if the car never appears in the rear-view mirror in the source frames, then DLSS won't have any information to extrapolate from in order to recreate the new frame with the car in it.

EDIT: This also raises an interesting question for border-case scenarios. What if a few frames have the car and others don't? Will it draw the car in the resulting frame or not? Will it be accurate to the real equivalent of the native resolution at all? For fast-paced action this may be important, but it may happen so fast that it's a non-issue; interesting to think about anyway.

Regards.
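For what it's worth, the "car never gets sampled" case can be sketched with a toy numpy example (my own illustration; real rasterizers sample pixel centers and filter, but the effect is the same). A 2-pixel car sitting on an odd row simply never intersects a half-resolution sample grid:

```python
import numpy as np

# Toy sketch: a "car" covering 2 pixels (1 row x 2 columns) in a
# 16x16 stand-in for a 4K frame.
frame_4k = np.zeros((16, 16))
frame_4k[7, 6:8] = 1.0            # the 2-pixel car lives on row 7

# Halving the resolution by point-sampling every other row and column
# (a crude stand-in for rendering at 1080p) skips all odd rows, so the
# car never makes it into the low-res frame at all.
frame_low = frame_4k[::2, ::2]

print(frame_low.max())  # 0.0 -> the car was never rendered
```

If the sample grid jitters between frames, some frames will land on row 7 and catch the car; those frames are precisely the history that temporal reconstruction has to work with, which is why the border cases above are interesting.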
 
EDIT: This also raises an interesting question for border-case scenarios. What if a few frames have the car and others don't? Will it draw the car in the resulting frame or not? Will it be accurate to the real equivalent of the native resolution at all? For fast-paced action this may be important, but it may happen so fast that it's a non-issue; interesting to think about anyway.
I would say it depends on how much history of previous frames the system keeps. If it's, say, a 5-frame history and the car goes in and out every 5 frames, then it'd probably pop in and out. But if it's popping in and out every other frame, or it's only missing in one random frame, then it'll probably keep it.

However, I would argue that humans work better with more information over time than with a single unit of information in a fraction of a second. I don't think anyone, even at the top tier, is going to notice or care about something appearing for just one frame when they're already running at 240 FPS or whatever. It also depends on the situation. In racing, split-second decisions are made when the cars are right next to each other, not when a car is one pixel in the rear-view mirror.
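A toy history buffer (my own sketch, in the spirit of TAA/DLSS-style accumulation, not Nvidia's actual algorithm) shows why a flickering feature tends to persist rather than pop in and out:

```python
# Each new frame is blended into a running history with weight alpha.
# A 1-pixel feature that flickers on and off every other frame settles
# near a steady in-between value instead of alternating between 0 and 1.
alpha = 0.2                      # weight given to the newest frame
history = 0.0                    # accumulated value of one pixel
for frame in range(50):
    sample = 1.0 if frame % 2 == 0 else 0.0   # flickering feature
    history = alpha * sample + (1 - alpha) * history

print(round(history, 2))  # 0.44 -> partially visible, not flickering
```

A smaller alpha leans harder on history (more stability, more ghosting); a larger one tracks the newest frame (less ghosting, more flicker): that trade-off is roughly what the "how much history" question above is about.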
 
I would say it depends on how much history of previous frames the system keeps. If it's, say, a 5-frame history and the car goes in and out every 5 frames, then it'd probably pop in and out. But if it's popping in and out every other frame, or it's only missing in one random frame, then it'll probably keep it.

However, I would argue that humans work better with more information over time than with a single unit of information in a fraction of a second. I don't think anyone, even at the top tier, is going to notice or care about something appearing for just one frame when they're already running at 240 FPS or whatever. It also depends on the situation. In racing, split-second decisions are made when the cars are right next to each other, not when a car is one pixel in the rear-view mirror.
Think about headshots in Counter-Strike, for example. Jumping targets moving randomly, with no discernible history to make any real, meaningful inference from. Or teleporting enemies in sci-fi games (shooters or platformers). Lightning effects? Any sort of fast strobing light effect, maybe?

Regards.
 
Think about headshots in Counter-Strike, for example. Jumping targets moving randomly, with no discernible history to make any real, meaningful inference from. Or teleporting enemies in sci-fi games (shooters or platformers). Lightning effects? Any sort of fast strobing light effect, maybe?

Regards.
I don't think anyone is going to take pot shots at pixel-sized targets and expect to actually hit anything.
 
You would be amazed XD

This is coming from an avid Q3 player back in the day. If you see a pixel move, you shoot it.

Regards.
I'm pretty sure you were also running at 800x600 or maybe 1024x768. Resolution was kind of lacking back then. Q3 also didn't have this thing called reloading.

If I'm playing something like CS or Modern Warfare, unless I'm a sniper or have a DMR, I'm not going to bother with targets that are a pixel in size. The weapon I have is going to be too inaccurate to score a hit and that's one fewer burst that I could use on something that I could actually hit with reasonable probability before reloading.
 
I'm pretty sure you were also running at 800x600 or maybe 1024x768. Resolution was kind of lacking back then. Q3 also didn't have this thing called reloading.

If I'm playing something like CS or Modern Warfare, unless I'm a sniper or have a DMR, I'm not going to bother with targets that are a pixel in size. The weapon I have is going to be too inaccurate to score a hit and that's one fewer burst that I could use on something that I could actually hit with reasonable probability before reloading.
I wouldn't go too deep into CS, as people exploit some weird quirks in it to pull off some really weird stunts. One I remember is AWPing people through walls from the other side, based on sound alone! Plus most people still play below 1080p (myself included).

Anyway, I digress... The point was whether DLSS 2.0's weak point would be a theoretical loss of accuracy on moving targets on screen, which may be the case, but is mostly irrelevant.

Regards.
 
Anyway, I digress... The point was whether DLSS 2.0's weak point would be a theoretical loss of accuracy on moving targets on screen, which may be the case, but is mostly irrelevant.
If you're playing at a lower resolution anyway, for whatever reason, then there's no point in using DLSS. If, for example, you have a 4K monitor, I really have to question why you're playing at sub-1080p resolutions on it. You should've just gotten a sub-1080p monitor.

DLSS is really just for people who want higher image quality. Competitive gaming players don't really care about that.
 
