I'm talking exclusively from the DLSS point of view. It was a failed attempt at AI-assisted AA, like it or not.
For everything else they use the Tensor cores for: cool, I guess. I have no idea why anyone would want their GPU spending part of its die area and power budget on something that isn't helping render more frames, but I guess that's just me?
As for the rest of what you said: I don't disagree entirely. Sure, it's nice for people who can't upgrade and whatnot, but I will not believe for even a microsecond that nVidia wanted this tech to work as an upscaler when they were pushing their DSR approach so heavily. DLSS was meant to work in tandem with DSR, and it just turned out to be a happy accident. It's also quite ironic that they locked DLSS behind Tensor cores alone and didn't push NIS before FSR came around, when they could have if they wanted to "help those with older cards". Do not fool yourself there, come on.
Regards.
DLSS won't run well without more compute than (most) regular GPU cores can provide, and it is very different from NIS or FSR in a lot of ways. All you have to do is look at it in action. The only truly comparable approach announced so far is Intel's XeSS. We'll see how XeSS compares when it comes out, and how it runs on the various GPUs. Sadly, it will only run in full performance mode on Intel's version of tensor cores, while non-Intel GPUs will use DP4a (integer) instructions to try to accomplish the same thing. Will the end results look the same? I'm curious to see. I'm also curious how well XeSS will run in "compatible" DP4a mode. But fundamentally, FSR is not DLSS. It's spatial upscaling via a Lanczos filter with extra edge detection and sharpening. We've had lots of upscalers for years, of varying quality and performance. None of them ever look as good as native, but DLSS sometimes (not always) looks better.
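If "spatial upscaling via a Lanczos filter" sounds abstract, here's a toy sketch of the idea in Python. To be clear, this is not AMD's actual FSR shader (the real EASU pass is a hand-tuned GPU kernel with edge adaptation, plus a separate RCAS sharpening pass); it's just what a plain Lanczos-2 resample looks like:

```python
# Purely illustrative sketch of spatial upscaling with a Lanczos-2 kernel.
# NOT AMD's FSR/EASU implementation -- just the underlying filtering idea.
import numpy as np

def lanczos_kernel(x, a=2):
    """Lanczos-a window: sinc(x) * sinc(x/a) inside |x| < a, zero outside."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_upscale_1d(signal, scale, a=2):
    """Resample a 1-D signal to `scale` times as many samples."""
    signal = np.asarray(signal, dtype=np.float64)
    n_in = len(signal)
    n_out = int(n_in * scale)
    out = np.zeros(n_out)
    for i in range(n_out):
        src = i / scale                       # position in source coordinates
        lo = int(np.floor(src)) - a + 1
        taps = np.arange(lo, lo + 2 * a)      # 2a nearest source samples
        w = lanczos_kernel(src - taps, a)
        taps = np.clip(taps, 0, n_in - 1)     # clamp at the image borders
        out[i] = np.dot(w, signal[taps]) / w.sum()
    return out

def lanczos_upscale_2d(img, scale, a=2):
    """Separable 2-D upscale: filter the rows, then the columns."""
    rows = np.array([lanczos_upscale_1d(r, scale, a) for r in img])
    return np.array([lanczos_upscale_1d(c, scale, a) for c in rows.T]).T

if __name__ == "__main__":
    img = np.random.rand(8, 8)                # stand-in for a low-res frame
    up = lanczos_upscale_2d(img, 2.0)         # 1080p -> 4K is also a 2x scale
    print(img.shape, "->", up.shape)          # (8, 8) -> (16, 16)
```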
FSR can look better than native with TAA, but only because TAA is a crappy blur-fest. FSR doesn't look better than native plus AMD's CAS, because CAS is basically what FSR does, only without the upscaling. I actually like CAS way more than FSR, and hopefully AMD keeps working on that sort of algorithm. Really, we just need TAA to go away and maybe get replaced by something better.
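For anyone curious what "contrast adaptive" means in practice, here's a rough numpy sketch of the general idea: sharpen more in flat areas and back off where local contrast is already high. This is not AMD's actual CAS shader, just an illustration:

```python
# Rough illustration of contrast-adaptive sharpening: per-pixel sharpen
# strength is reduced where the local contrast is already high.
# NOT AMD's CAS implementation -- only the general concept.
import numpy as np

def contrast_adaptive_sharpen(img, strength=0.5):
    """img: 2-D float array in [0, 1]. Returns a sharpened copy."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            n = padded[y:y + 3, x:x + 3]            # 3x3 neighborhood
            contrast = n.max() - n.min()            # local contrast in [0, 1]
            amount = strength * (1.0 - contrast)    # back off on busy areas
            # Simple unsharp-mask style kernel: center minus cross neighbors.
            cross = n[0, 1] + n[1, 0] + n[1, 2] + n[2, 1]
            sharpened = img[y, x] + amount * (4 * img[y, x] - cross) / 4
            out[y, x] = np.clip(sharpened, 0.0, 1.0)
    return out

if __name__ == "__main__":
    frame = np.random.rand(16, 16)                  # stand-in for a frame
    print(contrast_adaptive_sharpen(frame).shape)   # (16, 16)
```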
DLSS was never EVER discussed as having anything to do with DSR, though. Dynamic Super Resolution was for people who had GPU power to spare and wanted a way to get supersampling. DLSS was originally announced in two modes: the regular one we normally see, and DLSS 2X, which was just DLSS without the upscaling. DSR was always about rendering at higher than native resolution and then downsampling to get superior image quality. DLSS might be a network trained on a similar idea, I suppose, but it operates in reverse: take the base image and then try to determine what the original higher-quality image would have looked like. In real time, at over 60 fps.
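To make the contrast concrete: DSR renders more pixels than the display and filters them down, while DLSS tries to infer the high-res frame from fewer pixels. Here's a trivial downsample sketch (Nvidia's real DSR uses its own smoothing filter; this is just a 2x2 box average for illustration):

```python
# Trivial supersample-then-downsample sketch: average each 2x2 block of a
# frame rendered at 2x native down to one output pixel. Illustrative only;
# not Nvidia's actual DSR filter.
import numpy as np

def box_downsample_2x(hi_res):
    """Average each 2x2 block of a supersampled frame into one pixel."""
    h, w = hi_res.shape
    assert h % 2 == 0 and w % 2 == 0
    return hi_res.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

if __name__ == "__main__":
    supersampled = np.random.rand(8, 8)               # "rendered at 2x native"
    native = box_downsample_2x(supersampled)
    print(supersampled.shape, "->", native.shape)     # (8, 8) -> (4, 4)
```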
It's a difficult problem, which is why it needs tensor cores. At 100 fps, trying to "intelligently" upscale from 1080p to 4K just isn't going to happen without a ton of compute. But look at some of the AI image upscalers on the web that will take a source 640x360 image and turn it into a relatively nice-looking 1920x1080 image. Sure, it's hard work, and it can take a server a minute or more, but that's probably because it's processing images from lots of people at once. Anyway, I look forward to the day when all of that can happen in real time in a PC game.
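Some quick napkin math on why the compute requirement is so brutal. The ops-per-pixel number below is purely a placeholder to show the scale, not a measured DLSS figure:

```python
# Back-of-envelope: pixel throughput for 4K output at 100 fps, and what that
# implies if a network spends some (hypothetical) number of ops on each pixel.
target_w, target_h = 3840, 2160     # 4K output resolution
fps = 100
ops_per_pixel = 1_000               # made-up placeholder, NOT a DLSS measurement

pixels_per_second = target_w * target_h * fps
total_ops = pixels_per_second * ops_per_pixel
print(f"{pixels_per_second / 1e6:.0f} Mpixels/s")                      # ~829 Mpixels/s
print(f"{total_ops / 1e12:.2f} tera-ops/s at {ops_per_pixel} ops/pixel")
```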