Wow, that's quite a lot of opinion to base on so little information.
I know you staked out a strong position against DLSS, but how can you be so sure about 2.0? You haven't even seen it!
It kind of sounds like it to me.
What part of the text I quoted from Nvidia's website sounds like it to you?
Even with a lot of AI training specifically for a given game, DLSS didn't look or perform nearly as well as simply upscaling with traditional methods and applying a good sharpening filter.
Deep Learning is complicated and still pretty new. I don't know where you found this confidence in people's ability to do something optimally on the first try. I haven't seen any basis for it in my time on this planet.
Since the updated sharpening filter that Nvidia recently implemented does just that while using traditional upscaling, it seems a bit convoluted to be using a method involving AI to accomplish the same thing.
Presumably, they think it looks better than their sharpening filter. Indeed, a simple sharpening filter will always have limitations and artifacts, so it's not hard for me to believe a convolutional neural network can do better.
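To make that concrete, here's a minimal sketch (my own illustration, not Nvidia's filter) of what a basic sharpen actually is: a single fixed kernel applied identically to every pixel, which is exactly where the halos and amplified noise come from.

```python
# Rough illustration only - not Nvidia's sharpening filter.
# A classic unsharp-style sharpen is one fixed 3x3 kernel applied everywhere.
import numpy as np
from scipy.ndimage import convolve

sharpen_kernel = np.array([
    [ 0, -1,  0],
    [-1,  5, -1],
    [ 0, -1,  0],
], dtype=np.float32)  # identity minus a Laplacian: boosts high frequencies indiscriminately, noise included

def sharpen(gray_image: np.ndarray) -> np.ndarray:
    """Apply the fixed sharpening kernel to a single-channel image in [0, 1]."""
    out = convolve(gray_image.astype(np.float32), sharpen_kernel, mode="nearest")
    return np.clip(out, 0.0, 1.0)

# A trained CNN instead stacks many learned kernels with nonlinearities between them,
# so it can treat edges, texture, and noise differently rather than boosting them all alike.
img = np.random.rand(64, 64).astype(np.float32)  # stand-in for a game frame
print(sharpen(img).shape)  # (64, 64)
```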
According to them, they are still apparently using the Tensor cores to perform the upscaling, but if game-specific training isn't involved and performance has significantly improved, it seems logical that they have "dumbed down" the upscaling process to something a lot simpler.
Sometimes, you can find a simpler method that also works better. The same is true of deep learning - you can sometimes find an architecture and a way of using it that improves both accuracy and efficiency.
It's not only the design of their network that could've changed, however. They also quite likely improved training and are now using a loss function which doesn't penalize high frequencies so severely.
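Just to illustrate what I mean, here's a toy sketch of that idea. The terms and the weighting are entirely hypothetical - Nvidia hasn't published what DLSS 2.0 actually optimizes - but it shows why plain MSE tends to blur detail while a gradient term rewards keeping high frequencies.

```python
# Speculative sketch of "penalizing high frequencies less" - not Nvidia's loss.
# Plain MSE (L2) is minimized by averaging plausible textures, which blurs detail;
# adding a term on image gradients rewards keeping edges and texture sharp.
import numpy as np

def mse_loss(pred: np.ndarray, target: np.ndarray) -> float:
    return float(np.mean((pred - target) ** 2))

def gradient_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """L1 difference of horizontal/vertical gradients - a crude high-frequency term."""
    gx = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1))
    gy = np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0))
    return float(np.mean(gx) + np.mean(gy))

def combined_loss(pred: np.ndarray, target: np.ndarray, w_grad: float = 0.5) -> float:
    # Hypothetical weighting, purely for illustration.
    return mse_loss(pred, target) + w_grad * gradient_loss(pred, target)
```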
I get the impression that the first-gen RTX cards simply don't have enough tensor cores to do the task adequately in real-time, or at least not any better than other methods.
Again, how do you know? You're clearly not a deep learning expert. Did you even ask one?
The 2080 Ti is capable of about 250 TOPS @ 8-bit. That's a staggering amount of compute power. That works out to 67.8 million ops per second per pixel at the highest input resolution of 2560x1440, or 30.1 MOPS per second per output pixel @ 4k.
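Here's the arithmetic behind those figures, if you want to check it yourself (raw per-second tensor-core throughput, before accounting for frame rate or anything else the GPU has to do):

```python
# Back-of-the-envelope check on the per-pixel numbers above.
int8_tops = 250e12            # ~250 TOPS @ 8-bit for the 2080 Ti, as quoted
input_pixels = 2560 * 1440    # highest DLSS input resolution mentioned
output_pixels = 3840 * 2160   # 4K output

print(int8_tops / input_pixels / 1e6)   # ~67.8 MOPS per input pixel per second
print(int8_tops / output_pixels / 1e6)  # ~30.1 MOPS per output pixel per second
```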
And sure, maybe they've improved it with this, but they could have just as easily improved it without the Tensor cores. It's possible that the next generation of graphics cards might have the Tensor performance to actually justify the use of AI in upscaling, but since it requires dedicated hardware, it needs to look substantially better than other upscaling methods at a given performance level to justify its existence.
You should wait and see it before drawing such conclusions.
Now that you've taken such a strong line against DLSS 2.0, I can't trust your opinion of it once it's in the wild and you actually have a chance to evaluate what you've preemptively judged.
I have to say I'm disappointed. You're better than this.