News Death Stranding with DLSS 2.0 allows for 4K and 60 fps on any RTX GPU

hannibal

Distinguished
Well, DLSS 4K is not real 4K but upscaled. But if done right, it doesn't look too bad anymore!
So I'm more interested in seeing whether it's been done well enough here that it doesn't look bad or...
 
A few other thoughts about the game:

1) It's super weird. I've played perhaps a dozen hours of it now, and it's either going to end up being some sort of weird allegory for the afterlife (like Kentucky Route Zero), or it's going to be complete nonsense mystical mumbo jumbo. I'm guessing the latter. I would not call it a masterpiece -- weird for the sake of being weird, more like, with lots of walking around picking up packages in between. The whole BB thing (Bridge Baby, a 28-week-old baby in a bottle on your chest that detects BTs) for example. Why? Whatever.

2) The United Cities of America premise is also goofy, as the landscapes look nothing like the US. Also, you walk about four km in the first two chapters of the game, connecting people up to the Chiral network. Except the map shows that you've apparently covered most of the eastern US. I get not making the game to scale ... but it's ludicrous to set it in "the former US" where the entire country is apparently only 10-12 km across.

3) The corporate tie-ins are nuts, along with all the "here's the real actor playing this particular character" blurbs. Do we really need to know who plays Deadman, or Die-Harder? It's constantly breaking the fourth wall. Monster Energy drinks, Hollywood B-list celebrities, Half-Life stuff, and I don't know what else.

4) Constant unskippable cutscenes suck. They suck SO BAD! The first time you encounter a BT, you're treated to BB freaking out for 15 seconds alerting you to their presence. You can have the game do this EVERY time if you're a masochist. The elevators, private quarters, etc. do have the option to skip forward, but there's a ton of skipping required. Every successful delivery requires about 30 seconds of clicking through the same old nonsense to find out how many 'likes' you got. Just ... ugh. Clearly not for me.

I'm sure some people will love it. I'm personally curious to see where it goes. But for all the fawning over Hideo Kojima, this game has some clear flaws. It looks pretty, and it's not bad, but if you stripped out a bunch of garbage cutscenes that repeat so often it's maddening, you could make this a 20 hour game instead of a 40-50 hour game.

Also, I accidentally drove my first reverse trike into deep water. (Rivers have blue, yellow, and red indicators for how deep the water is, but I crossed at a point where it was red without realizing it.) Apparently I'm now stuck on foot until episode 3 in the game, which sucks. You get a trike to use as much as you want ... unless you break it and then it's gone until you progress. Can't you just put a new trike back in the garage? Or recover it? Bleh.
 

CatalyticDragon

Honorable
The performance mode looks just a tad worse, though at higher resolutions it's far less noticeable

By definition if it's using upscaling then it's not 4K. Naturally the closer you get to native resolution the better the image quality will be so it'll be good to see those side by side comparisons.
 
I gather it's not quite as divisive as The Last of Us 2, but is this game actually any good on a mechanical level? Is it "fun"? Does it have a good story? I read somewhere that the actually good part of the game starts about 20 hours in or something(?).
 

bit_user

Polypheme
Ambassador
By definition if it's using upscaling then it's not 4K. Naturally the closer you get to native resolution the better the image quality will be so it'll be good to see those side by side comparisons.

Computer graphics is all about illusions, duh. If the eyes perceive it as 4K then it is 4K.
Interesting debate. I think you each have a point.

The fundamental problem with scaling is that it doesn't truly add information. So, there will be fine details that are missing from even the best upscaling. DLSS seems to do a lot to minimize the downsides of scaling, but it certainly can't match native rendering.
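
To make that concrete, here's a toy illustration in Python (my own sketch, nothing to do with DLSS internals): once fine detail has been averaged away by rendering at a lower resolution, no upscaler can recover it from that single frame alone.

```python
import numpy as np

# A tiny "native" image holding the finest detail it can: a 1-pixel checkerboard.
native = np.indices((4, 4)).sum(axis=0) % 2

# "Render" at half resolution by averaging each 2x2 block.
low = native.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # every value becomes 0.5

# Upscale back to full size (nearest-neighbor): still a flat 0.5 everywhere.
upscaled = low.repeat(2, axis=0).repeat(2, axis=1)

print(native)    # alternating 0s and 1s
print(upscaled)  # uniform 0.5 -- the checkerboard is gone for good
```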

The point about the eyes perceiving 4K seems a little off, because eyes don't really work like that. Rather, it's just a variation on the argument of "good enough". And that's the point: if it's good enough that it offers a visual improvement over gaming at native 1440p, for instance, then maybe you should just be happy and play the game. There's a lot of wisdom in that, but what's good enough for one player might not suffice for another.

Maybe someone is sitting right in front of a 55" display, where they can really see the finer details. I wouldn't try to tell that person that they should be as happy with upscaling as someone playing on a 28" 4k monitor. There are other factors, but display hardware is probably the biggest one.
 
By definition if it's using upscaling then it's not 4K. Naturally the closer you get to native resolution the better the image quality will be so it'll be good to see those side by side comparisons.
Not exactly. The jump from 1080p to 4K is much bigger in raw pixel count than 720p to 1080p or 540p to 1080p. If it works the same as in Control, DLSS Performance should be 1080p upscaled to 4K, and DLSS Quality should be 1440p to 4K.
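
A quick pixel-count check in Python backs that up: the ratios for the 4K DLSS modes match the 540p and 720p cases, but the absolute number of pixels being invented is far larger at 4K.

```python
# Quick sanity check on the pixel counts involved (standard 16:9 resolutions).
resolutions = {
    "540p":  (960, 540),
    "720p":  (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for src, dst in [("720p", "1080p"), ("540p", "1080p"),
                 ("1080p", "4K"), ("1440p", "4K")]:
    ratio = pixels[dst] / pixels[src]
    added = pixels[dst] - pixels[src]
    print(f"{src:>6} -> {dst}: {ratio:.2f}x the pixels ({added:,} more)")

# 720p -> 1080p: 2.25x the pixels (1,152,000 more)
# 540p -> 1080p: 4.00x the pixels (1,555,200 more)
# 1080p -> 4K:   4.00x the pixels (6,220,800 more)
# 1440p -> 4K:   2.25x the pixels (4,608,000 more)
```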
 

Chung Leong

Reputable
The fundamental problem with scaling is that it doesn't truly add information. So, there will be fine details that are missing from even the best upscaling. DLSS seems to do a lot to minimize the downsides of scaling, but it certainly can't match native rendering.

There's a video on YouTube showcasing Control with DLSS 2.0. The screenshot for 540p with DLSS actually seems to have more detail than the one for 1080p native.

AI is basically pattern recognition. When the computer recognizes that a bunch of pixels are supposed to represent a particular shape, it can scale the image to any resolution. In a way it's like OCRing a scanned document, where the output would have less raw information but yield superior results when printed.
 

bit_user

Polypheme
Ambassador
There's a video on YouTube showcasing Control with DLSS 2.0. The screenshot for 540p with DLSS actually seems to have more detail than the one for 1080p native.
It depends on what you mean by details. So, making an edge more crisp isn't something I consider to add real information. However, fine texture, discontinuities, etc. are the sorts of things you lose by rendering to a lower target and upscaling.

AI is basically pattern recognition. When the computer recognizes that a bunch of pixels are supposed to represent a particular shape, it can scale the image to any resolution.
Yeah, I get that. However, the sophistication and nature of details it can tease out are also limited by the size of the network. In order to run fast enough, the DLSS network can't be very large or sophisticated. I think a lot of the benefit you're getting is from TAA, and DLSS 2.0 is, to a large extent, just cleaning it up.
 
It depends on what you mean by details. So, making an edge more crisp isn't something I consider to add real information. However, fine texture, discontinuities, etc. are the sorts of things you lose by rendering to a lower target and upscaling.

Yeah, I get that. However, the sophistication and nature of details it can tease out are also limited by the size of the network. In order to run fast enough, the DLSS network can't be very large or sophisticated. I think a lot of the benefit you're getting is from TAA, and DLSS 2.0 is, to a large extent, just cleaning it up.
Nvidia is being pretty cagey about what exactly DLSS 2.0 does, but it seems like it now uses data from multiple frames in the upscaling and anti-aliasing process, combined with whatever special deep learning training sauce is going on. I can say my impression of DLSS 2.0 is very favorable, especially compared to things like TAA (way too blurry much of the time). Whether it's 'real' 4K or not largely becomes meaningless if it looks better.

Which is sort of a big problem. 4K native at max quality is hard to render at high fps, and if you can get a potentially better result with upscaling, it suggests the rendering algorithm itself needs to be improved. 4K with TAA often looks worse than 1440p with some alternative to TAA that's not so aggressive. Best thing about DLSS may be that it disables TAA. :)
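
For anyone curious what "uses data from multiple frames" means in principle, here's a heavily simplified numpy sketch of temporal accumulation with motion vectors. Every name and the fixed blend factor here are my own; real DLSS 2.0 replaces this kind of hand-tuned blend with a trained network (and works with far more inputs).

```python
import numpy as np

def accumulate(history, current, motion, alpha=0.1):
    """Blend the current frame's samples into a history buffer that has been
    reprojected along per-pixel motion vectors (the core idea behind TAA-style
    temporal techniques). `alpha` is a hand-picked blend weight."""
    h, w = history.shape
    ys, xs = np.indices((h, w))
    # Reproject: look up where each pixel was in the previous frame.
    prev_y = np.clip(ys - motion[..., 0], 0, h - 1)
    prev_x = np.clip(xs - motion[..., 1], 0, w - 1)
    reprojected = history[prev_y, prev_x]
    # Exponential blend: over many jittered frames, each pixel accumulates
    # far more effective samples than any single frame contains.
    return (1 - alpha) * reprojected + alpha * current

# Static scene: after enough frames, the noisy per-frame samples converge.
history = np.zeros((4, 4))
motion = np.zeros((4, 4, 2), dtype=int)  # no movement, so vectors are zero
for _ in range(100):
    jittered_sample = np.random.rand(4, 4)  # stand-in for a jittered render
    history = accumulate(history, jittered_sample, motion)
print(history.round(2))  # values settle near 0.5, the true mean
```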
 

Chung Leong

Reputable
Yeah, I get that. However, the sophistication and nature of details it can tease out are also limited by the size of the network. In order to run fast enough, the DLSS network can't be very large or sophisticated. I think a lot of the benefit you're getting is from TAA, and DLSS 2.0 is, to a large extent, just cleaning it up.

The Tensor cores are pretty powerful. The low-end RTX 2060 is advertised at 52 TFLOPS. That's a lot of dedicated horsepower.

I would describe DLSS as a form of image-to-image synthesis as opposed to just upscaling. It'll be interesting to see if Nvidia can extend the technique to ray tracing. Feed the learning engine lots of 16K images rendered with high numbers of rays and see if the AI can reconstruct them from low-res, low-ray-count images.
 

bit_user

Polypheme
Ambassador
The Tensor cores are pretty powerful. The low-end RTX 2060 is advertised at 52 TFLOPS. That's a lot of dedicated horsepower.
The Tensor cores aren't actually independent cores. They work more like AVX in an x86 CPU, in that the CUDA cores need to dispatch instructions and data to them for every single operation. Also, they eat a share of memory bandwidth.

So, you can't get full performance out of them, while you're also doing other things on the CUDA cores. You have to look at the amount of time per frame that's spent on DLSS, and scale the Tensor performance by that fraction.
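
To put rough numbers on that (the per-frame DLSS cost below is a made-up figure for illustration, not a measurement):

```python
peak_tensor_tflops = 52   # RTX 2060's advertised Tensor peak (FP16)
frame_time_ms = 16.7      # budget for 60 fps
dlss_time_ms = 1.5        # hypothetical time spent running DLSS per frame

fraction = dlss_time_ms / frame_time_ms
print(f"DLSS gets the Tensor cores for {fraction:.0%} of the frame, "
      f"so averaged over the frame it can use only ~"
      f"{peak_tensor_tflops * fraction:.1f} TFLOPS of that peak")
# DLSS gets the Tensor cores for 9% of the frame, so averaged over
# the frame it can use only ~4.7 TFLOPS of that peak
```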

Also, I assume DLSS uses mostly integer layers, meaning you'll want to look at the TOPS ratings - not the TFLOPS.

For a deeper understanding of Nvidia's Tensor Cores, check out this article: https://www.anandtech.com/show/12673/titan-v-deep-learning-deep-dive
 

bit_user

Polypheme
Ambassador
Best thing about DLSS may be that it disables TAA. :)
I thought I'd read that it was built on TAA, but I think the point of confusion was that it's built on a fundamental building block of TAA: motion vectors.


I want to say I also read that it was only supported on games that also supported TAA (which, given the above point, would actually make sense), but maybe I'm imagining that.
 
I thought I'd read that it was built on TAA, but I think the point of confusion was that it's built on a fundamental building block of TAA: motion vectors.


I want to say I also read that it was only supported on games that also supported TAA (which, given the above point, would actually make sense), but maybe I'm imagining that.
It’s similar in some ways to TAA, but it does produce less blur than TAA in my experience. The difficulty with DLSS is that so much of what it does is based on machine learning, where the exact details of what’s happening are obfuscated.

Nvidia feeds the training network 64-sample 4K reference images alongside matching 1080p frames and basically asks the network to come up with weights that ‘answer’ the question of how best to interpolate from 1080p to 4K+AA. Like all machine learning algorithms, it can make mistakes or give less than perfect results, but with enough training it gets close enough. That’s the theory, at least.
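
In code, that kind of supervised setup looks roughly like the sketch below. This is a bare-bones stand-in to illustrate the idea; Nvidia hasn't published its actual architecture, loss, or data pipeline, so every detail here is a placeholder.

```python
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Tiny stand-in network: takes a low-res frame, outputs a 2x upscale."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4 * 3, 3, padding=1),
            nn.PixelShuffle(2),  # rearranges channels into a 2x larger image
        )

    def forward(self, x):
        return self.net(x)

model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(100):
    # Placeholder tensors: in the real pipeline, `low` would be a rendered
    # 1080p frame and `reference` the matching 64-sample 4K ground truth.
    low = torch.rand(8, 3, 64, 64)
    reference = torch.rand(8, 3, 128, 128)

    loss = loss_fn(model(low), reference)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point is just the pairing: low-res inputs, painstakingly rendered high-res references as ground truth, and the network learns whatever weighting minimizes the difference between its output and the reference.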