What's the meaning of DLSS? Deep learning super sampling and Nvidia graphics cards explained.
What Is Nvidia DLSS? A Basic Definition : Read more
Out of all post-processing techniques out there, I think this one is the most overengineered yet, haha.
I'm not saying that as a bad thing either. For all the crap I give to DLSS, I do find it quite cool technologically speaking. Even if I don't really like all the buzzword marketing spiel around it!
Still, I'm in the camp of "just lower the resolution and push the sliders to the right". Or just do subsampling (if the engine allows it) with regular MSAA. Plenty of lesser-known games' engines implement it and it improves things quite decently. I believe there are also some interesting techniques used by VR games now that I can't quite remember, but they're similar to those.
Cheers!
The thing that DLSS does that makes it stand out is that it reconstructs what the image would be at the desired output resolution. This is why in comparison screenshots DLSS rendered at 1080p then upscaled to, say, 4K can look as good as native 4K. Non-AI upscaling algorithms can't recreate the same level of detail, especially when they're working with a fraction of the pixels. I think Digital Foundry's DLSS 2.0 video shows how much DLSS can reconstruct the image vs. simple upscaling.
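To make the "can't recreate detail" point concrete, here's a toy sketch in plain Python (the function name and 1-D "image" are made up for illustration): a fixed, non-AI upscaler like nearest-neighbour can only repeat or blend the pixels it's given, so the output never contains information that wasn't in the low-res render.

```python
# Nearest-neighbour upscaling: a non-AI upscaler can only repeat or blend
# the input pixels, so a low-res frame upscaled to 4K contains no detail
# that wasn't already in the low-res render. Toy 1-D example for clarity.

def nearest_upscale(row, factor):
    """Repeat each input pixel `factor` times (nearest-neighbour)."""
    return [px for px in row for _ in range(factor)]

low_res = [0, 255, 0, 255]          # a fine alternating pattern, 4 "pixels"
high_res = nearest_upscale(low_res, 2)

# The output is bigger but carries no new information:
print(high_res)  # [0, 0, 255, 255, 0, 0, 255, 255]
assert len(set(high_res)) == len(set(low_res))
```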
The downside to this is that a huge number of games in the past 10-15 years used deferred shading, which doesn't work well with MSAA. I'm pretty sure even forcing it in the driver control panel doesn't actually do anything if the game uses deferred shading. Hence the push for things like FXAA and MLAA, since they work on the final output, on top of being cheaper than MSAA and "just as good". Also, as mentioned, MSAA is still a pretty major performance hit, so at some point you may as well have rendered at the native resolution.
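A rough back-of-the-envelope sketch of why MSAA and deferred shading clash: every G-buffer attachment has to store one value per MSAA sample, so memory (and bandwidth) scales with the sample count. The 16-bytes-per-pixel figure below is an illustrative assumption, not any real engine's G-buffer layout.

```python
# Back-of-the-envelope G-buffer memory cost, illustrating why MSAA is so
# expensive with deferred shading: every G-buffer attachment must store
# one value per MSAA sample, not per pixel. The 16-byte-per-pixel figure
# is an illustrative assumption, not any engine's real layout.

def gbuffer_mib(width, height, bytes_per_pixel, msaa_samples=1):
    return width * height * bytes_per_pixel * msaa_samples / 2**20

no_msaa = gbuffer_mib(3840, 2160, 16)        # 4K, no MSAA
with_msaa = gbuffer_mib(3840, 2160, 16, 4)   # 4K, 4x MSAA

print(f"{no_msaa:.0f} MiB vs {with_msaa:.0f} MiB")  # 127 MiB vs 506 MiB
assert with_msaa == 4 * no_msaa
```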
DLSS looks worse than no AA imo. No matter how you slice it, you just can't improve image quality reliably w/o adding more samples, as w/ MSAA and SSAA. Despite being meant to improve on it, DLSS must inevitably suffer the same shortfalls as TXAA as a simple consequence of being engineered that way.

Running 1080p with MSAA will not give you the same image quality as 4K with DLSS, and the performance will still be worse, esp. with ray tracing.
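For what it's worth, the "more samples" idea in miniature, as a hedged Python sketch (the function name is made up): SSAA renders at a multiple of the output resolution and averages each block down to one output pixel, which is exactly where the quality comes from.

```python
# Supersampling in miniature: render at f x resolution in each axis, then
# average each f x f block down to one output pixel. The extra quality
# comes entirely from the extra samples.

def ssaa_downsample(img, f=2):
    """Average f x f blocks of a 2-D list (rendered at f x resolution)."""
    h, w = len(img) // f, len(img[0]) // f
    return [[sum(img[y*f + dy][x*f + dx]
                 for dy in range(f) for dx in range(f)) / f**2
             for x in range(w)] for y in range(h)]

hi = [[0, 255], [255, 0]]      # 2x2 "render" of one output pixel
print(ssaa_downsample(hi))     # [[127.5]] -- a hard edge resolved to grey
```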
If we're talking about a single image in time, then sure, DLSS won't get the same results reliably across every possible scenario. But I'd argue that instantaneous image quality isn't really as important as image quality over time. Human vision revolves around taking as many samples per second as possible, though the kicker is that we can see "high resolution" because the receptors in our eyes wiggle slightly, which gives us super-resolution imaging.
I'd argue the aim NVIDIA has for DLSS is to mimic this, but using AI to better reconstruct the details rather than having a fixed algorithm that will more likely than not fail to reconstruct the right thing.
EDIT: Also I think developers misused what TXAA was supposed to resolve. It isn't so much a spatial anti-aliasing method, so it's not exclusively about removing jaggies. It's meant to combat shimmering effects because the thing covering a pixel sample point keeps moving in and out of it. If the image is perfectly still, TXAA does absolutely nothing.
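The eye-wiggle analogy above can be sketched in code. This is a toy 1-D model of temporal super-resolution, not NVIDIA's actual algorithm (all names and values are illustrative): each frame samples the scene with a different sub-pixel jitter, and accumulating the jittered frames recovers detail that no single low-res frame has.

```python
# Toy 1-D temporal super-resolution: each frame samples the same scene
# with a different sub-pixel jitter; interleaving the frames recovers
# detail finer than any single frame. Illustrative only, not DLSS itself.

def scene(x):
    """'Ground truth' signal: a thin bright stripe between 0.25 and 0.5."""
    return 255 if 0.25 <= x < 0.5 else 0

def render_frame(n_pixels, jitter):
    """Sample the scene once per low-res pixel, offset by `jitter`."""
    return [scene((i + jitter) / n_pixels) for i in range(n_pixels)]

# One frame at 2 'pixels' misses the stripe entirely with jitter 0.0 ...
assert render_frame(2, 0.0) == [0, 0]
# ... but a second frame with a half-pixel jitter catches it:
assert render_frame(2, 0.5) == [255, 0]
# Interleaving the two jittered frames reconstructs a 4-'pixel' image:
frames = [render_frame(2, 0.0), render_frame(2, 0.5)]
reconstructed = [frames[j][i] for i in range(2) for j in range(2)]
print(reconstructed)  # [0, 255, 0, 0]
```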
At the end of the day, DLSS is just upsampling, so it will work with the lower resolution and then the algorithm that makes it tick will do "inference" (which is the marketing side of "deep learning") based on multiple frames to try and reconstruct a new, bigger frame.

Here is my question: in a racing game my opponent is so far behind me (okay, usually in front of me) that he is just 2 pixels in size when viewed in 4K. Now I drop to 1080p and the car no longer appears at all, because lowering the resolution means lowering (i.e. losing) information about the visual environment. So now we turn on DLSS 2.0: is that "lost" information somehow restored, so that those 2 pixels somehow appear on my screen? The article seems to suggest that that is possible, but I'm not so sure. This may seem like a silly question, because who cares about a 2-pixel detail; it was just an example for clarity.
I would say it depends on how much history of previous frames the system keeps. If it's, say, a 5-frame history and the car goes in and out every 5 frames, then it'd probably pop in and out. But if it's popping in and out every other frame, or it's missing in just one random frame, then it'll probably keep it.

EDIT: This also brings up an interesting question for border-case scenarios. What if a few frames have the car and others don't? Will it draw the car in the resulting frame or not? Will it be accurate to the real equivalent of the native resolution at all? For high-paced action this may be important, but it may happen so fast that it could be a non-issue; interesting to think about anyway.
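A sketch of what "keeping history" usually means in temporal techniques: an exponential blend of the newest frame into an accumulation buffer. The blend weight below is an arbitrary illustrative value; with it, a feature sampled every other frame (like the 2-pixel car) settles at a faint in-between value rather than fully popping in and out.

```python
# Exponential history accumulation, the usual temporal-AA building block:
# each new frame is blended into the running history. Whether a tiny
# feature survives depends on the blend weight and how often it's
# actually sampled. ALPHA is an arbitrary illustrative value.

ALPHA = 0.1  # weight of the newest frame; 1 - ALPHA is kept history

def accumulate(history, frame, alpha=ALPHA):
    return [(1 - alpha) * h + alpha * f for h, f in zip(history, frame)]

history = [0.0]
# The car pixel is present every other frame:
for present in [255, 0, 255, 0, 255, 0]:
    history = accumulate(history, [present])

# The pixel settles at a faint grey rather than popping fully in and out,
# so the car neither disappears entirely nor reaches full brightness:
assert 0 < history[0] < 255
```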
Think about headshots in Counter-Strike, for example. Jumping targets moving randomly with no discernible history to make any real meaningful inference. Or teleporting enemies in sci-fi games (shooters or platformers). Lightning effects? Any sort of fast strobing light effect, maybe?
However, I would argue that humans work better with more information over time than with a single unit of information in a fraction of a second. I don't think anyone, even at the top tier, is going to notice or care about something appearing for just a frame when they're already running at 240 FPS or whatever. It also depends on the situation. For racing, split-second decisions are made when the cars are right next to each other, not when the car is a pixel in the rear-view mirror.
I don't think anyone is going to try to take pot shots at pixel-sized targets and expect to actually hit anything.
Regards.
You would be amazed XD
I'm pretty sure you were also running at 800x600 or maybe 1024x768. Resolution was kind of lacking back then. Q3 also didn't have this thing called reloading.
This is coming from an avid Q3 player back in the day. If you see a pixel move, you shoot it.
Regards.
I wouldn't go too deep into CS, as they have some weird tricks in there to pull off some really weird stunts. Some I remember include AWPing from the other side of walls based off sound! Plus most people still play under 1080p (myself included).
If I'm playing something like CS or Modern Warfare, unless I'm a sniper or have a DMR, I'm not going to bother with targets that are a pixel in size. The weapon I have is going to be too inaccurate to score a hit and that's one fewer burst that I could use on something that I could actually hit with reasonable probability before reloading.
If you're playing at a lower resolution anyway for whatever reason, then there's no point in using DLSS. If, for example, you have a 4K monitor, I really have to question why you're playing at sub-1080p resolutions on it. You should've just gotten a sub-1080p monitor.

Anyway, I digress... The point was whether DLSS 2's weak point would be a theoretical loss of accuracy on moving targets on screen, which may be the case, but is mostly irrelevant.