News: Nvidia DLSS 2.0 Delivers Improved Speed, Quality and Compatibility

DLSS was a breakthrough, DLSS 2.0 takes it further
DLSS was useless, at least on current-gen RTX parts. It provided worse image quality and/or performance than simply using existing upscaling and image sharpening techniques that don't require any special hardware. Even Nvidia ended up releasing an updated sharpening filter last year that, when coupled with upscaling, made DLSS completely redundant.

This means that whereas DLSS 1.0 had to be implemented by the developer and trained specific to each game, DLSS 2.0 will work without additional training across a much wider range of games.
This tells us that there is likely no "deep learning" involved, and Nvidia simply gave up on DLSS, replacing it instead with basic upscaling, followed by the aforementioned updated sharpening filter. The tensor cores may not even be getting utilized for it anymore. DLSS was never "super sampling," which would imply rendering at a higher resolution and scaling down; it was actually the opposite. Now it likely doesn't involve "deep learning" either, so the name is not at all representative of what it actually is.

However, it's good to see Nvidia recognize that DLSS was a failure and replace it with something better. Having a simple-to-use means of upscaling to run demanding games on higher-resolution screens is a good thing. It's also nice to see that they gave it a few quality levels, so one can choose how much image quality to trade away to make a game run better. If this is using traditional techniques, though, there's no reason it couldn't run on AMD or Intel graphics hardware.
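Purely to illustrate that quality-level tradeoff, here's a hypothetical sketch (Python) of how a quality setting could map to an internal render resolution before upscaling. The mode names and scale factors below are assumptions for illustration, not Nvidia's published figures.

```python
# Hypothetical quality-mode table; the scale factors are illustrative assumptions.
QUALITY_MODE_SCALE = {
    "quality": 0.67,      # assumed: render at roughly 2/3 of the output resolution per axis
    "balanced": 0.58,     # assumed
    "performance": 0.50,  # assumed: render at half the output resolution per axis
}

def internal_resolution(output_w, output_h, mode):
    """Resolution the game would render at before the upscaling pass."""
    scale = QUALITY_MODE_SCALE[mode]
    return round(output_w * scale), round(output_h * scale)

print(internal_resolution(3840, 2160, "performance"))  # -> (1920, 1080)
```

The general point holds regardless of the exact numbers: a lower internal resolution means fewer pixels to shade, which is where the performance headroom comes from.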
 
This tells us that there is likely no "deep learning" involved, and Nvidia simply gave up on DLSS, replacing it instead with basic upscaling, followed by the aforementioned updated sharpening filter. The tensor cores may not even be getting utilized for it anymore.
That's not what they're saying.

NVIDIA DLSS 2.0 is a new and improved deep learning neural network that boosts frame rates and generates beautiful, sharp images for your games. It gives you the performance headroom to maximize ray tracing settings and increase output resolution. DLSS is powered by dedicated AI processors on RTX GPUs called Tensor Cores.

https://developer.nvidia.com/dlss

Why are you so quick to write off the potential for AI in upscaling, just because their first attempt wasn't great? Man, if people stopped trying stuff when the first attempt doesn't work too well, humanity would still be limited to digging in the mud with sticks.
 
That's not what they're saying.
It kind of sounds like it to me. Even with a lot of AI training specifically for a given game, DLSS didn't look or perform nearly as well as simply upscaling with traditional methods and applying a good sharpening filter. DLSS 2.0 apparently doesn't require game-specific training, yet it is supposed to look and perform better than the original method. Since the updated sharpening filter that Nvidia recently implemented does just that while using traditional upscaling, it seems a bit convoluted to be using a method involving AI to accomplish the same thing. According to them, they are still apparently using the Tensor cores to perform the upscaling, but if game-specific training isn't involved and performance has significantly improved, it seems logical that they have "dumbed down" the upscaling process to something a lot simpler, and are relying on the improved sharpening filter to do the actual heavy lifting of making that upscaled content look decent.

Why are you so quick to write off the potential for AI in upscaling, just because their first attempt wasn't great?
I get the impression that the first-gen RTX cards simply don't have enough tensor cores to do the task adequately in real-time, or at least not any better than other methods. And sure, maybe they've improved it with this, but they could have just as easily improved it without the Tensor cores. It's possible that the next generation of graphics cards might have the Tensor performance to actually make AI-based upscaling worthwhile, but since it requires dedicated hardware, it needs to look substantially better than other upscaling methods at a given performance level to justify its existence.
 
Wow, that's quite a lot of opinion to base on so little information.

I know you staked out a strong position against DLSS, but how can you be so sure about 2.0? You haven't even seen it!

It kind of sounds like it to me.
What part of the text I quoted from Nvidia's website sounds like it to you?

Even with a lot of AI training specifically for a given game, DLSS didn't look or perform nearly as well as simply upscaling with traditional methods and applying a good sharpening filter.
Deep Learning is complicated and still pretty new. I don't know where you found this confidence in people's ability to do something optimally, on the first try. I haven't seen any basis for it, in my time on this planet.

Since the updated sharpening filter that Nvidia recently implemented does just that while using traditional upscaling, it seems a bit convoluted to be using a method involving AI to accomplish the same thing.
Presumably, they think it looks better than their sharpening filter. Indeed, a simple sharpening filter will always have limitations and artifacts, so it's not hard for me to believe a convolutional neural network can do better.
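To make that concrete, here is a minimal sketch of a learned upscaler, assuming Python and PyTorch. This is not DLSS and says nothing about Nvidia's actual network; it only illustrates that upscaling can be a small convolutional network whose behavior is learned from data, rather than a fixed filter kernel.

```python
# Toy learned upscaler (illustrative only, not Nvidia's architecture).
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=2, channels=3, features=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            # predict scale*scale sub-pixel values per output channel
            nn.Conv2d(features, channels * scale * scale, 3, padding=1),
        )
        # PixelShuffle rearranges those channels into a higher-resolution image
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.body(x))

lowres = torch.rand(1, 3, 720, 1280)   # stand-in for a 720p frame
highres = TinyUpscaler()(lowres)       # shape: (1, 3, 1440, 2560)
```

What such a network produces depends entirely on how it was trained, which is the point: "blurry" is not baked into the approach.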

According to them, they are still apparently using the Tensor cores to perform the upscaling, but if game-specific training isn't involved and performance has significantly improved, it seems logical that they have "dumbed down" the upscaling process to something a lot simpler,
Sometimes, you can find a simpler method that also works better. The same is true of deep learning: you can sometimes find an architecture and a way of using it that improves both accuracy and efficiency.

It's not only the design of their network that could've changed, however. They also quite likely improved training and are now using a loss function that doesn't penalize high frequencies so severely.
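As a rough illustration of that loss-function idea, here's a toy example in Python/PyTorch: a plain L2 loss tends to reward "safe", slightly blurry predictions, while adding a term on image-gradient differences (a crude stand-in for high-frequency detail) pushes the network to preserve edges. The gradient operator and weighting here are assumptions for illustration, not anything Nvidia has published.

```python
# Toy "sharpness-aware" training loss (illustrative assumption, not Nvidia's).
import torch
import torch.nn.functional as F

def image_gradients(img):
    """Simple finite differences along width and height of an (N, C, H, W) tensor."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def sharpness_aware_loss(pred, target, hf_weight=0.5):
    l2 = F.mse_loss(pred, target)                    # penalizes overall error, favors blur
    pdx, pdy = image_gradients(pred)
    tdx, tdy = image_gradients(target)
    hf = F.l1_loss(pdx, tdx) + F.l1_loss(pdy, tdy)   # penalizes missing edge detail
    return l2 + hf_weight * hf
```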

I get the impression that the first-gen RTX cards simply don't have enough tensor cores to do the task adequately in real-time, or at least not any better than other methods.
Again, how do you know? You're clearly not a deep learning expert. Did you even ask one?

The 2080 Ti is capable of about 250 TOPS @ 8-bit. That's a staggering amount of compute power: roughly 67.8 million operations per second for every input pixel at the highest input resolution of 2560x1440, or about 30.1 MOPS per output pixel at 4K.
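For reference, a quick back-of-the-envelope check of those figures (Python); the only inputs are the 250 TOPS number and the two resolutions:

```python
tops = 250e12                     # ~250 trillion 8-bit ops per second on a 2080 Ti
pixels_1440p = 2560 * 1440        # 3,686,400 input pixels
pixels_4k = 3840 * 2160           # 8,294,400 output pixels

print(tops / pixels_1440p / 1e6)  # ~67.8 million ops per input pixel, per second
print(tops / pixels_4k / 1e6)     # ~30.1 million ops per output pixel, per second
```

At 60 fps, that's still on the order of a million 8-bit ops per input pixel per frame, ignoring whatever else the GPU is doing at the same time.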

And sure, maybe they've improved it with this, but they could have just as easily improved it without the Tensor cores. It's possible that the next generation of graphics cards might have the Tensor performance to actually make AI-based upscaling worthwhile, but since it requires dedicated hardware, it needs to look substantially better than other upscaling methods at a given performance level to justify its existence.
You should wait and see it before drawing such conclusions.

Now that you've taken such a strong line against DLSS 2.0, I cannot trust your opinion of it, once it's in the wild and you actually have a chance to evaluate what you've preemptively judged.

I have to say I'm disappointed. You're better than this.
 
What part of the text I quoted from Nvidia's website sounds like it to you?
They also refer to it as "super-sampling" right in its name, but that's the marketing team's way of describing what is actually the opposite of super-sampling.

Presumably, they think it looks better than their sharpening filter. Indeed, a simple sharpening filter will always have limitations and artifacts, so it's not hard for me to believe a convolutional neural network can do better.
We can be pretty sure that they are still using the sharpening filter to make the upscaled output look decent, only now they are using the new, more advanced filter rather than the mediocre old one. That's likely the biggest change here. DLSS without a postprocess sharpening filter looked incredibly blurry, and sharpening was the only thing making some of the later implementations look half-decent. With the better sharpening filter, they can get away with lower-quality (or arguably no) AI-based upscaling, which is why game-specific training is no longer needed and why performance has improved.
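For what it's worth, the "traditional upscale plus postprocess sharpen" alternative referred to throughout this thread can be sketched in a few lines. This is a sketch of that baseline technique (assuming Python with OpenCV), not a claim about what DLSS 2.0 actually does internally.

```python
# Conventional upscale followed by an unsharp-mask sharpening pass.
# Illustrates the non-AI baseline discussed above; nothing DLSS-specific here.
import cv2
import numpy as np

def upscale_and_sharpen(frame, target_size, amount=0.5, blur_sigma=1.0):
    # 1) conventional upscale (no neural network involved)
    upscaled = cv2.resize(frame, target_size, interpolation=cv2.INTER_CUBIC)
    # 2) unsharp mask: boost the difference between the image and a blurred copy
    blurred = cv2.GaussianBlur(upscaled, (0, 0), blur_sigma)
    return cv2.addWeighted(upscaled, 1.0 + amount, blurred, -amount, 0)

# e.g. render at 1440p, present at 4K (dummy frame used here)
frame = np.random.randint(0, 256, (1440, 2560, 3), dtype=np.uint8)
output = upscale_and_sharpen(frame, (3840, 2160))
print(output.shape)  # (2160, 3840, 3)
```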

You should wait and see it before drawing such conclusions.
As the article states, this isn't exactly something that's brand new; they are just marketing it as "2.0" now. Wolfenstein: Youngblood and Deliver Us The Moon have already been using this implementation for a while.

In any case, I haven't actually taken a strong stance against DLSS, or at least not against the new implementation. From what I've seen, it appears to work about on par with other decent upscaling and sharpening methods now, and it should be pretty straightforward to use. It is still questionable whether it's doing anything that actually "requires" Tensor cores, though, seeing as other methods still achieve similar results. I get the impression that Nvidia is simply keeping that as a requirement in order to push RTX cards, even if the hardware offers minimal benefit to the finished output.
 
They also refer to it as "super-sampling" right in its name, but that's the marketing team's way of describing what is actually the opposite of super-sampling.
Yes, it's a fair point. I imagine they'd say they're using Deep Learning to infer what super-sampled output would look like, but you're right about that.

We can be pretty sure that they are still using the sharpening filter to make the upscaled output look decent,
Why do you assume that a neural network can only produce soft output? I think the softness of DLSS was an artifact of the loss function they used for training it. There's nothing fundamental about neural networks that would tend to produce a soft output.

With the better sharpening filter, they can get away with lower-quality (or arguably no) AI-based upscaling, which is why game-specific training is no longer needed and why performance has improved.
Okay, you've gone beyond obstinate. I've tried my best to explain, but it's beginning to feel like a lost cause. If you want to believe it's just a conventional sharpening filter, be my guest.
 
Okay, you've gone beyond obstinate. I've tried my best to explain, but it's beginning to feel like a lost cause. If you want to believe it's just a conventional sharpening filter, be my guest.
Based on everything I've seen about DLSS, yes, I fully believe they are using postprocess sharpening on the upscaled output. Everything seems to indicate that, including what the final image looks like up close. Such sharpening routines have been shown to be a good way of making upscaled content look decent with a very low performance impact, and they looked and performed notably better than prior implementations of DLSS, so it only makes sense that Nvidia would utilize them for the update. They did implement their improved sharpening routine not too long before games started utilizing this updated DLSS, after all.

There probably is still AI processing going on for the upscaling part of the process, though likely in a simplified form, and it may even handle certain things a little better than other raw, unsharpened upscaling techniques. The bulk of the visual improvement over the prior implementation, however, is most likely coming from improvements to post-process sharpening.
 
AnandTech has a better article, including screenshots and some details about how it works.

It's not a bad article, and it at least acknowledges that DLSS 1.x had significant problems and was outperformed by traditional upscaling and sharpening techniques. This article, on the other hand, claims that "DLSS was a breakthrough" and makes it sound like it was already unmatched in its original form, while not even acknowledging the reasons why Nvidia felt a reboot was necessary. This Tom's article reads more like a marketing piece than anything trying to be informative, which was the main issue I had with it, and why my first post might come off as a bit negative.

Still, I don't see anything in that AnandTech article going against the suggestion that sharpening is being performed on the upscaled output post-process. Yes, the Tensor cores are apparently still performing the actual upscaling; however, that's not necessarily the only stage of "DLSS", as it may be a multi-step process. First, there's the faster, simplified Tensor upscaling, likely followed by another step involving sharpening, which can be efficiently performed on traditional graphics hardware. To the developer, the process likely looks like a single step at the end of the rendering pipeline, but that doesn't mean there aren't separate sub-steps being performed on different hardware within it.
 
Still, I don't see anything in that AnandTech article going against the suggestion that sharpening is being performed on the upscaled output post-process.
No, it doesn't, and I never said they're not doing it, either. My point was that there's nothing intrinsic about using a neural network that would lead to a blurry output, as you seem convinced must be the case. You're extrapolating from a sample of one.

The details about it using motion vectors and having to be integrated at the source level are no doubt also interesting.
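For anyone curious why motion vectors matter for a technique like this, here is a generic temporal-reprojection sketch in Python/NumPy. Accumulating samples from previous frames only works if each pixel's history is fetched from where that pixel was last frame, which is exactly what motion vectors encode. This is an illustration of the general idea, not Nvidia's implementation.

```python
# Generic temporal reprojection using per-pixel motion vectors (illustrative).
import numpy as np

def reproject_history(prev_frame, motion_vectors):
    """prev_frame: (H, W, 3) image; motion_vectors: (H, W, 2) per-pixel (dx, dy) in pixels."""
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # each output pixel looks up where it came from in the previous frame
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    return prev_frame[src_y, src_x]

def accumulate(current, reprojected_history, blend=0.9):
    # exponential blend: mostly history, refreshed a little by the current frame
    return blend * reprojected_history + (1.0 - blend) * current
```

Getting per-pixel motion vectors out of the engine is also a plausible reason something like this has to be integrated at the source level rather than applied as a purely post-hoc filter.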
 