Nvidia's DLSS Technology Analyzed: It All Starts With Upscaling

hixbot

OMG, this might be a decent article, but I can't tell, because the autoplay video that hovers over the text makes it impossible to read.
 

richardvday

I keep hearing about the autoplay videos, yet I never see them.
I come here on my phone and my PC and never have this problem. I use Chrome; what browser does that?
 

bit_user


Thank you! I was waiting for someone to try this. It seems I was vindicated, when I previously claimed that it's upsampling.

Now, if I could just remember where I read that...
 

bit_user


You only compared it vs. TAA. Please compare against no AA, at both 2.5K and 4K.
 

bit_user


I understand what you're saying, but it's incorrect to refer to the output of an inference pipeline as "ground truth". A ground truth is only present during training or evaluation.

Anyway, thanks. Good article!
 

bit_user


That's not what I see. Click on the images and look @ full resolution. Jagged lines and texture noise are readily visible.


If you read the article, DLSS @ 4k is actually faster than no AA @ 4k.


That depends on monitor size, response time, and framerate. Monitors with worse response times will have some motion blurring that helps obscure artifacts. And, for any monitor, running at 144 Hz would blur away more of the artifacts than at 45 or 60 Hz.
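For reference, the per-frame persistence those rates imply can be computed directly (a trivial sketch; the rates are just the ones mentioned above):

```python
def frame_time_ms(hz: float) -> float:
    """How long each frame stays on screen at a given frame rate."""
    return 1000.0 / hz

for hz in (45, 60, 144):
    print(f"{hz:>3} fps -> {frame_time_ms(hz):.1f} ms per frame")
# At 144 Hz a frame persists ~6.9 ms vs ~16.7 ms at 60 Hz,
# so any single-frame artifact is on screen for well under half as long.
```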
 

s1mon7

Using a 4K monitor on a daily basis, aliasing is much less of an issue than seeing low res textures on 4K content. With that in mind, the DLSS samples immediately gave me the uncomfortable feeling of low res rendering. Sure, it is obvious on the license plate screenshot, but it is also apparent on the character on the first screenshot and foliage. They lack detail and have that "blurriness" of "this was not rendered in 4K" that daily users of 4K screens quickly grow to avoid, as it removes the biggest benefit of 4K screens - the crispness and life-like appearance of characters and objects. It's the perceived resolution of things on the screen that is the most important factor there, and DLSS takes that away.


The way I see it, DLSS does the opposite of what truly matters in 4K after you actually get used to it and its pains, and I would not find it usable outside of really fast paced games where you don't take the time to appreciate the vistas. Those are also the games that aren't usually as demanding in 4K anyway, nor require 4K in the first place.

This technology is much more useful for low resolutions, where aliasing is the far larger problem, and the textures, where rendered natively, don't deliver the same "wow" effect you expect from 4K anyway, thus turning them down a notch is far less noticeable.
 

s1mon7



Yet DLSS looks very clearly lower res, even without zooming in. I'd argue that the vast majority of users would be far less likely to worry about jagged lines on non-AA 4K content than lowered perceived image resolution. The only case where this is not true is in really fast paced games or if someone uses their large-screen TV as a monitor up-close.

Otherwise, DLSS makes 4K content look like it's not really 4K content, because it really isn't. It's just good upscaling, still with lower res image.
 
Oct 28, 2018
Can you guys test whether it adds input lag or not? I feel it does, since it does more work on every frame.
And can it be used effectively at 1080p or 2K to get more fps?
 

rantoc

Few of the images seem to have been captured in motion - that's where TAA shows its ugly face - and it would be interesting to see how stable DLSS is in that regard.
 
Oct 28, 2018
How is the upscaling part still confusing for tech sites? Jensen even mentioned that they used it, in his own keynote, when the technology was launched... People just associate "super sample" with anti-aliasing, but the term is correct, since the technique uses samples in order to work.

And of course you will get more fps if you render at a lower resolution and then upscale it, compared to rendering at a higher resolution. nVIDIA did provide a chart showing that DLSS would provide 30-50% more fps.

And who cares if the first frame in a new scene has a lower resolution? It is only shown for about 17 milliseconds @ 60fps...

Also, all the previous bashing of RTX claimed the cards are not worth their money. DLSS is super easy to implement - it took the FF15 team one week to do it. It provides almost indistinguishable quality from native 4K and gives 30-50% more fps (on top of the already better architecture = more fps than Pascal).
nVIDIA actually gave people what they wanted, which was 4K @ 60fps...
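The arithmetic behind both claims (the fps gain from shading fewer pixels, and the ~17 ms figure) is easy to check - a quick sketch, assuming the commonly reported 1440p internal resolution:

```python
def pixels(width: int, height: int) -> int:
    """Total pixels shaded per frame at a given resolution."""
    return width * height

native_4k = pixels(3840, 2160)   # 8,294,400 pixels per frame
internal = pixels(2560, 1440)    # 3,686,400 if rendered at 1440p

print(native_4k / internal)      # 2.25x fewer pixels shaded before upscaling
print(round(1000 / 60, 1))       # 16.7 ms: how long one frame shows at 60fps
```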
 

uglyduckling81



Install NoScript and make sure everything is blocked except TH site itself. You won't see that horrible video.

Edit: Also, I saw somewhere that DLSS renders at 1800p and upscales.
 

bit_user


What I recall him saying is that supersampling was used to create the ground truth (something about 64 jittered samples per pixel, IIRC). The deep learning model then serves the purpose of inferring what the supersampled output would be, based on a non-supersampled input.

If there's anywhere he actually said it renders at a lower resolution than the target, please tell us what time in the presentation he said that (preferably via timestamped youtube link).


Traditionally, but using methods much, much cheaper than DLSS. DLSS involves probably between 100x and 1000x the amount of computation of something like bicubic interpolation. So, it's not a given that the time to run DLSS would be less than the difference between rendering at its input resolution and native.
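To make that cost gap concrete, here is what the cheap traditional path looks like: a plain bilinear upscaler (the simpler cousin of bicubic) needs only a handful of multiply-adds per output pixel, where a deep network needs thousands. A minimal NumPy sketch, grayscale only, with a made-up 4x4 input standing in for a low-resolution render:

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Cheap traditional upscaling: bilinear interpolation of a 2D image."""
    h, w = img.shape
    H, W = h * scale, w * scale
    # Source coordinate for each output pixel (endpoints map exactly)
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Four neighbors, blended with two lerps - a few multiply-adds per pixel
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

lowres = np.arange(16, dtype=float).reshape(4, 4)  # stand-in 4x4 "render"
hires = bilinear_upscale(lowres, 2)                # 8x8 output
print(hires.shape)  # (8, 8)
```

Bicubic is the same idea with a 4x4 neighborhood and cubic weights; either way the per-pixel work is tiny compared to evaluating a convolutional network.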


They were comparing it with TAA. They never compared it with native 4k @ no AA, so you couldn't tell if it was faster just because TAA was so expensive.


Your eyes are surprisingly good at detecting certain types of changes in images. Once you start to notice a pop or shift between the first and second frames, you might start to feel that you can no longer ignore it. I'm just saying it could get annoying - especially if you're playing something that runs at a lower framerate.


They were probably also using Nvidia's GameWorks SDK. Game engines that don't use it might have to forego this feature, entirely. I don't know if that's true, but you can imagine Nvidia trying to use this as leverage to make developers buy into their SDK ecosystem and further disadvantage AMD hardware.


But people want it for all titles. Existing and those upcoming titles not built on Nvidia's SDKs.

DLSS is a trick. It's a darn good one, but it's still a trick (or hack, if you prefer). And as such, it has downsides relative to native 4k.
 
In the shot with DLSS enabled, the background and its vegetation look better than the screen captures with no AA or with TAA enabled.
You seem to have missed something big here, that I noticed immediately in the first two comparison shots, and again in that car image. The reason the background looks "better" in these stills, is that DLSS is effectively removing much of the depth of field effect. The backgrounds are supposed to be blurry in those shots, because those parts of the scene are intended to be out of focus, to simulate a camera lens, giving the image some depth. Not being as blurry as it should be in those parts of the scene is another artifact that effectively makes the DLSS image quality worse. DLSS is applying a sort of sharpening filter to the upscaled output, and while that helps the image to look sharper than just a regular upscale, it has the side effect of also sharpening things that shouldn't be sharpened.

You should be able to see this well in that first comparison image of food when viewed at full size. With no AA, the central part of the image is sharp and in focus, but the background to the upper-right, as well as the edge of the tortilla in the foreground, both show soft focus effects, as they should. With TAA applied, the entire scene gets a bit blurry, though the out of focus areas are still relatively out of focus, maintaining some depth. Now in the DLSS image, the central part of the shot that is supposed to be sharp and in-focus is actually a lot blurrier than TAA. However, the background and foreground are actually sharper than they should be, since the sharpening filter has effectively removed most of the focal effect that was supposed to be there. The net result is that instead of having the subject of the image sharp and in focus, and the background and foreground blurred to provide depth and help make the subject stand out, everything is at roughly the same somewhat-blurry level of focus, making the DLSS image look flatter.

You can clearly see this artifact again in the "bending over" image, as well as in the car image. In both cases, the trees in the background get sharper than they should be, while the subject of the image, the person or car, gets blurrier than even TAA. Some people may prefer not to have the depth of field effects, but in that case, turn them off. If the effect were disabled, you would clearly see that using no AA produces the sharpest image, TAA is somewhat blurrier but removes aliasing, and DLSS is significantly blurrier still. The only reason it looks "better" in some specific parts of some images is that it's counteracting a graphical effect that's supposed to be there.

Now, presumably a game could apply depth of field after the upscale and sharpen process to avoid this removal of the effect. In that case, however, everything would be blurrier than TAA with the effect active, and I suspect that was not done for this demo, since Nvidia likely preferred to make at least some parts of the scene look sharper than TAA, while providing better performance.

And of course, it sounds like DLSS will also provide the option for simulated supersampling, as its name implies, rather than just upsampling from a lower resolution. This should increase performance demands over rendering at native resolution though, but not as much as actual supersampling.


Maybe, but with larger pixel sizes, the loss of detail should be even more noticeable than at these high resolutions. I guess it could potentially be good for real low-end hardware, where it might mean the difference between medium and high settings in a game, but it also brings into question how much cost it would add to the cards to include enough tensor cores to perform the upscaling to 1080p, and whether simply including more traditional cores might be better.


On the other hand, you could think of it as running a game at 1440p with upscaling to 4K in a way that looks better than traditional forms of upscaling. If you are gaming at 4K, I'm sure you encounter games that you simply can't run at max settings while maintaining smooth performance. From an image quality standpoint, I'm sure there are cases where running a game at 1440p with max settings will look better than running it at 4K with medium settings. Raytraced effects might be one such example of this, where it might simply not be practical to run those effects in a game at native 4K, but with DLSS rendering the base image at a lower resolution, could keep things running smoother. More resolution isn't all that matters for image quality, after all.
 

bit_user


This judgement is too selective. There are some very nicely anti-aliased edges in the DLSS output that look notably better than TAA and (of course) no-AA native res.


They don't provide such an option. I think the rationale behind the name is that the model was trained on a supersampled ground truth - the intent is that the output already looks supersampled. To some extent, I think they're right.

Don't just look at edges, but also at details in the texture. DLSS cleans up a lot of noise, there, some of which you really can't claim was intentional.

In the end, a true verdict depends on gameplay. Do let us know if/when you actually try it in person. I don't even trust Twitch/youtube videos, since the video compression blurs a lot of fine details and adds artifacts of its own.
 

From an image quality standpoint ignoring the performance gains, DLSS doesn't look particularly good in this implementation compared to TAA. The edges might be softened, but that's because everything has been softened. Everything that's supposed to be sharp looks quite muddy here, and looking at the areas of the scene where things are supposed to be in focus, it clearly looks worse than TAA. The purpose of DLSS in this demo is to improve performance at the cost of image quality.


I think you may be wrong on this. When they announced the RTX cards, I'm pretty sure it was mentioned that DLSS could be used to improve image quality or performance. And there's no reason for such an implementation not to work. This implementation renders only half the pixels and upscales the results to improve frame rates at the expense of image quality, but you could likewise render the scene at native resolution, use DLSS to double the pixels, then scale that back down again. Actually here, I found something in an Nvidia article that seems to imply that...

https://news.developer.nvidia.com/dlss-what-does-it-mean-for-game-developers/

Question: Will consumers be able to see the difference DLSS makes?

Answer: Absolutely! The difference in both frame rate and image quality (depending on the mode selected) is quite pronounced. For instance, in many games that we’re working on, DLSS allows games to jump to being comfortably playable at 4K without stutters or lagged FPS.
Notice the "depending on the mode selected" part. So, I think we'll also see this used for actual supersampling, at a reduction in performance over native resolution, even if this limited tech demo didn't do that.
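One way to see why that render-native-then-downscale round-trip hinges on the quality of the upscaler: if the "double the pixels" step merely duplicates samples instead of inferring new ones, scaling back down returns exactly the original frame. A toy NumPy sketch (the 4x4 "frame" is a stand-in for a native-resolution render; real DLSS would replace the nearest-neighbor step with model inference):

```python
import numpy as np

def upscale2x_nearest(img: np.ndarray) -> np.ndarray:
    # Naive "add pixels" step: duplicate each sample into a 2x2 block.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def downscale2x_box(img: np.ndarray) -> np.ndarray:
    # DSR-style downscale: average each 2x2 block back to one pixel.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
roundtrip = downscale2x_box(upscale2x_nearest(frame))
print(np.allclose(roundtrip, frame))  # True - no information was added
```

So any supersampling-like quality gain has to come from the upscaler inferring plausible new sample values, not from the resizing itself.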


Again, everything looks too blurry to tell. I did notice that the character's hair looks softer and less pixelated with DLSS than with TAA, due to the process being applied indiscriminately across the entire scene, but the hair, and the entire character in general looks much blurrier, with the surface texture of his leather jacket completely lost. And the aliasing and loss of detail on the car looks much worse than the TAA example. Plus, as I previously pointed out, the backgrounds have had their focus effects improperly removed in this implementation by what is effectively a sharpening routine. The occasional jagged pixels getting past TAA is arguably less of a concern than having everything appear a bit muddy and flat.

Of course, the performance gains could still make this worthwhile, as it likely looks better than other means of upscaling a lower resolution render target. And if it can provide a supersampling equivalent at a reduced performance impact, that could be good as well, and might actually provide some notable image quality gains over something like TAA.
 

bit_user


No, they are not saying it reduces quality. Show me where they ever said that.

Moreover, in the link you provided, they do explicitly state what I recall - that it's always trained on a supersampled ground truth @ the output resolution.
During training, the DLSS model is fed thousands of aliased input frames and its output is judged against the “perfect” accumulated frames. This has the effect of teaching the model how to infer a 64 sample per pixel supersampled image from a 1 sample per pixel input frame.
That's why they call it Deep Learning Super Sampling - because deep learning is used to achieve the effect of supersampling, instead of doing it by brute force.
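The quoted training setup is easy to illustrate: accumulate many jittered renders of the same frame to build the "perfect" ground truth, and keep one single-sample render as the aliased input the model learns from. A toy NumPy sketch - the scene function, resolution, and jitter scheme are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def render(sample_offset):
    """Toy 'renderer': evaluates a scene function at one jittered
    sub-pixel offset for an 8x8 image. Stands in for a game renderer."""
    ys, xs = np.mgrid[0:8, 0:8].astype(float)
    dy, dx = sample_offset
    return np.sin(3 * (xs + dx)) * np.cos(2 * (ys + dy))

# 1 sample per pixel: the aliased frame fed to the network
aliased = render(rng.uniform(0, 1, size=2))

# 64 jittered samples per pixel, accumulated: the "perfect" ground truth
ground_truth = np.mean(
    [render(rng.uniform(0, 1, size=2)) for _ in range(64)], axis=0)

print(aliased.shape, ground_truth.shape)  # both (8, 8)
```

A real pipeline would then train the network so that model(aliased) approximates ground_truth, e.g. by minimizing the mean squared error between them.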


That's just silly. Why would you scale it up, and then back down? You would just train a DLSS filter that accepts native resolution input, instead of input at a lower resolution. But the model would still be trained on supersampled ground truth, and infer what that would look like.



The "mode" is probably referring to this bit:
DLSS is also flexible enough to allow developers to choose the level of performance and resolution scaling they wish rather than being locked to certain multiples of the physical monitor or display size
So, they mean depending on the ratio of input to output resolution.
 

You seemed to be saying in your previous post that they won't provide an option to apply DLSS to a native resolution render to add additional samples for simulated supersampling, and I was simply pointing out how it could be used to achieve that goal, not that they would necessarily use that exact method. Undoubtedly there are more efficient things they could do to accomplish a similar result. Already, Nvidia's cards support Dynamic Super Resolution, which is traditional supersampling, so at the very least, performing this process could be a matter of doing just that. On a 1440p screen, you could start with a native resolution render and use DLSS to fill in details for a 4K render requiring less performance than true 4K, then DSR could scale that back down to 1440p. The obvious reason for doing that would be that you won't lose detail compared to the native resolution render like you do here.

As it stands, DLSS as implemented in this demo does not offer what I would consider to be "supersampled quality", and it is clearly a reduction in quality in most ways over even TAA. The whole point of the demo was to show how Nvidia's cards could use DLSS to provide higher frame rates at a "similar" quality level to TAA (which itself tends to be a bit blurry), not to show off better image quality. Better image quality is likely possible though, by starting with a native resolution render and using DLSS to generate additional samples. This article shows that the scene is being rendered here at half-resolution and DLSS is used to fill in missing pixels to recoup some of the lost quality, but it can undoubtedly be applied to a full-resolution render to improve quality as well.
 

bit_user


I was referring to what the article said about not having such an option. After seeing the link you posted, it does sound like they might not rule out the case where input resolution == output.
 

Randy_82

4K with no AA is the way to go, if there are no performance issues. The only time you would want AA at 4K is if you're taking screenshots and need a flat, clean edge-to-edge image. However, I can see where DLSS is a game changer for those of us who own 4K monitors, and that's performance in games which can't quite hit 60fps. Playing 2560x1440 on a native 4K monitor is way worse than what DLSS can offer, and no form of AA can save it from displaying a blurry mess. On my LG 43UD79-B 43" 4K display, I tried native 1440p with no AA: the image quality was ugly as f, and too distracting to play with the shimmering foliage all over the place. I tried 1440p with TAA, and I think the game made me blind, as the image quality was too blurry to enjoy.
 
Dec 12, 2018
Can someone explain what's going on in the last picture and why the background is SO much better compared to no-AA? Even the cliff texture.
 
