Alistairab,
First off, who cares if it's 1080p upscaled to 1440p if the end result is that it LOOKS like 1440p but runs at a higher FPS?
If NVidia compared (1080p + DLSS) against native 1440p, people would scream "but, but, but it's not the same resolution," so I'm not sure what exactly NVidia is supposed to do about that.
DLSS likely isn't going anywhere. Ground-truth analysis is pretty amazing.
As for the DIE SPACE being better used for traditional rasterization: you don't think NVidia's engineers are well aware of this? The PROBLEM is that rasterization has its limits, and at some point the hardware needs to change so you can introduce Ray-Tracing and other techniques that can do things rasterization just can't do.
AI-based optimizations, ray-tracing additions, and other methods that can make better use of the added processing elements are the future. We should get that to varying degrees with the next-gen consoles using AMD's NAVI too, and I expect NAVI on desktop to be similar to NVidia's RTX, but probably with proportionally less die space allocated to the newer compute units.
GAMING won't benefit much from DLSS?
There's no evidence to support that claim once you understand how the process works.
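The core idea is simple supervised learning: render a frame at low resolution, upscale it, and train against a high-quality "ground truth" render of the same frame. Here's a minimal NumPy sketch of that objective — this is NOT NVIDIA's actual pipeline (the real network, resolutions, and loss are proprietary); the nearest-neighbor upscale below is just a stand-in for the learned network so the training signal is visible:

```python
import numpy as np

# Hypothetical toy example of a DLSS-style training objective:
# minimize the difference between an upscaled low-res frame and
# a ground-truth high-res render of the same scene.

rng = np.random.default_rng(0)
ground_truth = rng.random((8, 8))  # pretend this is the 1440p reference frame

# "Render" at half resolution: average each 2x2 block (a crude downscale).
low_res = ground_truth.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Stand-in for the upscaling network: nearest-neighbor 2x upscale.
upscaled = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)

# The training signal: per-pixel error against the ground-truth frame.
mse = float(np.mean((upscaled - ground_truth) ** 2))
print(upscaled.shape, mse)
```

During training the network learns to reconstruct detail the low-res render never had, which is why the result can LOOK like native 1440p while only paying the 1080p rendering cost.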
Finally, don't forget that the newer compute units (INT32, Tensor, RT) are also exposed through CUDA v10, so when applications like VIDEO EDITING programs optimize for them, there's the potential for a huge performance boost compared to the same die size with traditional shader cores alone (i.e. NVidia Pascal)... I'm not certain, but I think Turing GTX has the INT32 units but not RT and Tensor.