Eh, it's IMHO not fair to make that comparison.
Traditional rendering has a lot of "tricks", but they're clever algorithms, optimizations, and approximations. They're reproducible and controllable, unlike the hallucinatory framegen and upscaling BS that can only regurgitate what it's already been fed, and the visual styles it has been trained on.
Eh, texture compression has artifacts you don't necessarily want and can't always do much to control. Same with TAA, and I'm sure there are plenty more examples. A lot of the hacks in raster graphics have cases where they don't look right, and there's nothing the developer can easily do to fix that.
I can appreciate apprehension about some future DLSS version rendering differently than how a game looked in testing, but DLSS is a feature customers can disable if they don't like how it looks. And if a game dev really doesn't like what DLSS is doing, they don't have to support it at all.
It's a lazy cop-out, and if this is what the industry keeps betting on... ugh.
But consumers are buying 4K monitors, maybe for productivity purposes, even though they don't really need all of that detail when gaming. Are you really saying games need to render all of those pixels natively? With conventional scaling methods, you generally get either a soft image or amplified artifacts. DLSS is just saying: "instead of trying to hand-code heuristics that deal well with certain types of images at the expense of looking bad on others, let's take a data-driven approach to find a set of heuristics that are more generally positive."
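To make concrete what "conventional scaling methods" and their softness look like, here's a minimal sketch of hand-coded bilinear upscaling in plain NumPy (grayscale only for simplicity; the function name is mine, not any real graphics API). Every output pixel is a fixed weighted average of its four nearest source pixels, which is exactly the kind of one-size-fits-all heuristic that smears hard edges:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2D grayscale image by an integer `scale` factor
    using bilinear interpolation: each output pixel is a weighted
    average of the 4 nearest source pixels."""
    h, w = img.shape
    out_h, out_w = h * scale, w * scale
    # Map each output pixel center back to fractional source coordinates.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]  # vertical blend weights
    wx = np.clip(xs - x0, 0, 1)[None, :]  # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# A hard black/white edge turns into a gradient after upscaling:
# that's the "soft image" failure mode of fixed heuristics.
img = np.array([[0.0, 1.0],
                [0.0, 1.0]])
up = bilinear_upscale(img, 2)
```

Learned upscalers replace those fixed blend weights with weights fitted to a training corpus, which is the whole "data-driven heuristics" argument in a nutshell.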
Instead of taking an ideological position against it, why not just look at the results? The end product is what you see, so if it looks good, it is good!
Even if all the visual artifacts get solved, we'll end up with even blander, more samey-looking games than we have now.
Huh? At the end of the day, it's just a scaling (and frame interpolation) technology. The overall look of a game is still fully in the hands of its creators!
If we just accept the hallucinatory ML cop-out, innovation is dead.
Are you still talking about DLSS? Because that doesn't make any sense to me. Now, if you're talking about neural rendering, I suppose I can at least understand why you might think that.