In the shot with DLSS enabled, the background and its vegetation look better than the screen captures with no AA or with TAA enabled.
You seem to have missed something big here, which I noticed immediately in the first two comparison shots, and again in that car image. The reason the background looks "better" in these stills is that DLSS is effectively removing much of the depth of field effect. The backgrounds are supposed to be blurry in those shots, because those parts of the scene are intended to be out of focus, simulating a camera lens and giving the image some depth. Not being as blurry as it should be in those areas is another artifact that effectively makes the DLSS image quality worse. DLSS applies a sort of sharpening filter to the upscaled output, and while that helps the image look sharper than a regular upscale would, it has the side effect of also sharpening things that shouldn't be sharpened.
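DLSS's actual filter is proprietary, but the effect described above can be sketched with a classic unsharp mask, which is the textbook way to sharpen an image. The point the sketch makes: a sharpening pass raises edge contrast indiscriminately, including in regions that were deliberately blurred by a depth of field pass. Everything below (the 1-D signal, the box blur, the gradient metric) is a simplified stand-in, not DLSS's real pipeline.

```python
import numpy as np

def box_blur(signal, radius):
    # Simple 1-D box blur, standing in for a depth-of-field blur.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

def unsharp_mask(signal, radius=2, amount=1.0):
    # Classic unsharp mask: boost the difference from a blurred copy.
    return signal + amount * (signal - box_blur(signal, radius))

# A hard edge that the depth-of-field pass has deliberately softened.
edge = np.repeat([0.0, 1.0], 50)
dof_blurred = box_blur(edge, 5)

def max_gradient(s):
    # Steepest local step, a rough proxy for perceived edge sharpness.
    return np.max(np.abs(np.diff(s)))

print(max_gradient(dof_blurred))               # soft, out-of-focus edge
print(max_gradient(unsharp_mask(dof_blurred))) # sharpening partly restores it
```

The second number comes out larger than the first: the sharpening pass has partly undone the blur, which is exactly what you want on the in-focus subject and exactly what you don't want on an intentionally defocused background.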
You should be able to see this clearly in that first comparison image of food when viewed at full size. With no AA, the central part of the image is sharp and in focus, but the background to the upper-right, as well as the edge of the tortilla in the foreground, both show soft-focus effects, as they should. With TAA applied, the entire scene gets a bit blurry, though the out-of-focus areas are still relatively out of focus, maintaining some depth. In the DLSS image, however, the central part of the shot that is supposed to be sharp and in focus is actually a lot blurrier than with TAA, while the background and foreground are sharper than they should be, since the sharpening filter has effectively removed most of the focal effect that was supposed to be there. The net result is that instead of having the subject of the image sharp and in focus, with the background and foreground blurred to provide depth and help the subject stand out, everything sits at roughly the same somewhat-blurry level of focus, making the DLSS image look flatter.
You can clearly see this artifact again in the "bending over" image, as well as in the car image. In both cases, the trees in the background get sharper than they should be, while the subject of the image, the person or car, gets blurrier than even with TAA. Some people may prefer not to have depth of field effects, but in that case, the effect should be turned off. If it were disabled, you would clearly see that using no AA produces the sharpest image, TAA is somewhat blurrier but removes aliasing, and DLSS is significantly blurrier still. The only reason it looks "better" in some specific parts of some images is that it's counteracting a graphical effect that's supposed to be there.
Now, presumably a game could apply depth of field after the upscale-and-sharpen pass to avoid this removal of the effect. In that case, however, everything would be blurrier than TAA with the effect active, and I suspect that was not done for this demo, since Nvidia likely preferred to make at least some parts of the scene look sharper than TAA while providing better performance.
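To make the ordering argument concrete, here is a toy sketch of the two pipelines. The passes are crude stand-ins (a whole-frame box blur for depth of field, nearest-neighbour repeat plus unsharp mask for the upscale-and-sharpen step); none of this is the actual DLSS or game pipeline, it just demonstrates that the two orderings produce different frames.

```python
import numpy as np

def depth_of_field(img, radius=4):
    # Stand-in DoF pass: box blur the whole frame. A real renderer
    # would blur per pixel based on scene depth.
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(img, k, mode="same")

def upscale_and_sharpen(img, factor=2, amount=0.5):
    # Stand-in for DLSS: nearest-neighbour upscale plus an unsharp mask.
    up = np.repeat(img, factor)
    k = np.ones(5) / 5
    return up + amount * (up - np.convolve(up, k, mode="same"))

rng = np.random.default_rng(0)
frame = rng.random(64)  # a low-resolution "render"

# Presumed current ordering: DoF is baked into the low-res render,
# so the sharpening pass partly undoes the intended blur.
dof_then_dlss = upscale_and_sharpen(depth_of_field(frame))

# Alternative ordering: DoF applied after the upscale, preserving
# the blur at the cost of an overall softer frame.
dlss_then_dof = depth_of_field(upscale_and_sharpen(frame))

# The two orderings produce measurably different frames.
print(np.abs(dof_then_dlss - dlss_then_dof).max())
```

In the second ordering the blur is the last pass, so nothing re-sharpens the out-of-focus areas, but the whole frame ends up softer, which matches the trade-off described above.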
And of course, it sounds like DLSS will also offer an option for simulated supersampling, as its name implies, rather than just upscaling from a lower resolution. That mode would increase performance demands compared to rendering at native resolution, though not by as much as actual supersampling.
Finally, this is a technology that might be viable on entry-level Turing-based GPUs (as opposed to ray tracing, which requires a minimum level of performance to be useful), if those graphics processors end up with Tensor cores. We'd love to see low-end GPUs play through AAA games at 1920 x 1080 based off of a 720p render.
Maybe, but with larger pixel sizes, the loss of detail should be even more noticeable than at these high resolutions. I suppose it could potentially be good for really low-end hardware, where it might mean the difference between medium and high settings in a game, but that raises the question of how much cost it would add to the cards to include enough Tensor cores to perform the upscaling to 1080p, and whether simply including more traditional cores might be better.
s1mon7:
The way I see it, DLSS does the opposite of what truly matters in 4K after you actually get used to it and its pains, and I would not find it usable outside of really fast-paced games where you don't take the time to appreciate the vistas. Those are also the games that aren't usually as demanding in 4K anyway, nor require 4K in the first place.
On the other hand, you could think of it as running a game at 1440p with upscaling to 4K in a way that looks better than traditional forms of upscaling. If you are gaming at 4K, I'm sure you encounter games that you simply can't run at max settings while maintaining smooth performance. From an image quality standpoint, I'm sure there are cases where running a game at 1440p with max settings will look better than running it at 4K with medium settings. Raytraced effects might be one such example, where it might simply not be practical to run them in a game at native 4K, but where DLSS, by rendering the base image at a lower resolution, could keep things running smoothly. More resolution isn't all that matters for image quality, after all.
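The back-of-envelope math on that 1440p-to-4K case is worth spelling out, since it shows where the performance headroom comes from:

```python
# Pixel counts for a 1440p base render upscaled to 4K (UHD).
base = 2560 * 1440      # the lower-resolution render DLSS starts from
native = 3840 * 2160    # native 4K

# The base render covers (2/3)^2 = 4/9 of the native pixel count,
# so roughly 44% of the pixels get fully shaded.
print(base, native, base / native)
```

That ~56% reduction in shaded pixels is the budget that can be spent on max settings or raytraced effects instead, minus whatever the upscale itself costs.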