News Nvidia DLSS 3.5 Tested: AI-Powered Graphics Leaves Competitors Behind


PEnns

Reputable
Apr 25, 2020
I guess the term "Apples to apples..." is totally lost on Jarred!

And, since you mentioned the Steam survey: it shows that, overwhelmingly, gamers don't give a rat's behind about Nvidia's latest and VERY expensive toys with their useless bells and whistles!

And the assault here on AMD continues: for the second day in a row, the articles are literally glossy Nvidia PR presentations.
 

Freestyle80

Reputable
Aug 11, 2020
I guess the term "Apples to apples..." is totally lost on Jarred!

And, since you mentioned the Steam survey: it shows that, overwhelmingly, gamers don't give a rat's behind about Nvidia's latest and VERY expensive toys with their useless bells and whistles!

And the assault here on AMD continues: for the second day in a row, the articles are literally glossy Nvidia PR presentations.
Reeeee non-AMD tech reeeeeee how dare people use it

Worship AMD like me
 
Thanks a lot for the analysis, Jarred, but could you please do a deeper dive on image quality by comparing different detail levels?

I have to say, after reading TPU's analysis, I'm baffled at how bad RT and PT look in some scenes compared to Low settings, which (subjectively, yes) look better in some key aspects; ironically, one of them being reflections on surfaces!

I'll leave it here for peeps to make up their own minds, but I have to say I was blown away, in a bad way, by how bad the scenes looked with it. Maybe in motion it's not as terrible, but dang...


Regards.
 
  • Like
Reactions: P1nky and atomicWAR

Order 66

Grand Moff
Apr 13, 2023
I will consider using this when it eventually comes to AMD GPUs in ~10 years. I hate Nvidia's prohibitive pricing for the 40 series, and the fact that they not only have proprietary tech, but that said proprietary tech (DLSS 3 FG) is exclusive to the 40 series. Long live open standards. I know FSR isn't the best, but at least it's not proprietary. I may switch to Nvidia depending on how the 50 series turns out.
 

bit_user

Polypheme
Ambassador
@JarredWaltonGPU, I fully agree with your take. I also share your trepidation around highly proprietary rendering technologies.

IMO, it's not as if there can be no alternative. What I'd like to see is an open-source 3D engine independently implement something conceptually similar to Ray Reconstruction that can (theoretically, at least) run on any hardware, although in practice it will probably require some kind of deep learning accelerator (which both Alchemist and RDNA 3 have).

I assume Nvidia has built a patent wall around this family of techniques, but perhaps there's enough room for innovation that someone can find a way around it.
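
To make that concrete: conceptually, ray reconstruction is a learned denoiser that consumes noisy ray-traced radiance plus G-buffer features and outputs a cleaned-up image. Here's a toy PyTorch sketch; the name, layer layout, and chosen inputs are all my own illustration, not anything Nvidia has published, and a real implementation would add temporal history and far more capacity:

```python
import torch
import torch.nn as nn

class ToyRayDenoiser(nn.Module):
    """Illustrative only: a tiny conv net mapping noisy per-pixel radiance
    (3ch) plus G-buffer features (normals 3ch, depth 1ch, albedo 3ch) to
    denoised radiance. Real ray reconstruction is far more sophisticated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(10, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy_rgb, normals, depth, albedo):
        x = torch.cat([noisy_rgb, normals, depth, albedo], dim=1)
        # Predict a residual correction rather than the image directly.
        return noisy_rgb + self.net(x)

# Smoke test with random tensors standing in for a quarter-res frame.
model = ToyRayDenoiser()
rgb = torch.rand(1, 3, 270, 480)
out = model(rgb, torch.rand(1, 3, 270, 480),
            torch.rand(1, 1, 270, 480), torch.rand(1, 3, 270, 480))
print(out.shape)  # torch.Size([1, 3, 270, 480])
```

The architecture isn't the moat; the training data, the temporal stability, and hitting a millisecond-scale per-frame budget are.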
 
Thanks a lot for the analysis, Jarred, but could you please do a deeper dive on image quality by comparing different detail levels?

I have to say, after reading TPU's analysis, I'm baffled at how bad RT and PT look in some scenes compared to Low settings, which (subjectively, yes) look better in some key aspects; ironically, one of them being reflections on surfaces!

I'll leave it here for peeps to make up their own minds, but I have to say I was blown away, in a bad way, by how bad the scenes looked with it. Maybe in motion it's not as terrible, but dang...


Regards.
It's a bit weird that there are no Ray Reconstruction tests. Unless I totally screwed up the embargo, which I admit is possible. But no...

"September 20th, 5pm CEST, 4pm BST, 8am PDT, 11am EDT
Review embargo for Cyberpunk 2077: Phantom Liberty is lifted.
You may now publish written or video impressions including:
Assets (screenshots and b-rolls) provided by CDPR
Self-captured screenshots"

I was a day late, in other words, while TPU provided a bunch of information but nothing on RR, which is honestly the biggest change to the graphics stuff. But then TPU also did all testing at native, no upscaling, which precludes using RR.

Games and how they look are an interesting category of graphics. Sometimes, totally incorrect lighting can be perceived as "better" — look at the low/medium screenshots on TPU. For people used to the way rasterization looks, some scenes might be deemed more atmospheric. Add in path tracing, or even partial ray tracing, and some of those scenes end up being a lot brighter and more lit up.

The scene in the room, looking at the window, is a great example. With path tracing, the blinds are glowing. I'm not sure if that's "correct" based on where the sun would be, but the blinds are mostly horizontal so probably it is. But instead of a dark room with most of the light coming off the desk (plus some weird aspects like the plastic sheet to the right in most non-PT renderings), you get a completely different result. The guy near the window now looks dark because there's a bright window behind him, and the wall around the window seems lit up maybe more than it should be? But maybe not, because light does some funny things.

So-called path tracing completely reworks all of the lighting effects. But also, RT Overdrive without Ray Reconstruction still misses some things. I'd really like to see that scene with RR versus vanilla RTO to see how it changes. My general impressions of RR are that it looks universally better than the base RTO mode and adds a lot to the way the game looks. And as discussed extensively in this piece, the fact that it's Nvidia-only tech definitely has some worrying implications. But this is the most innovative take on full ray tracing that we've seen, for sure.
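
As a back-of-the-envelope sanity check on the glowing blinds: even mostly-closed horizontal slats pass a surprising amount of grazing light once you integrate over sky directions, which is exactly the kind of thing a path tracer does automatically. A toy Monte Carlo estimate (my own simplification: uniform sky, no interreflection, slats modeled as flat shelves):

```python
import math
import random

def blind_transmission(depth, gap, samples=200_000):
    """Toy estimate: average fraction of light from a uniform sky that
    passes between horizontal slats of the given depth, separated
    vertically by `gap`. A ray at elevation phi is shadowed over
    depth * tan(phi) of each opening. Illustration only."""
    total = 0.0
    for _ in range(samples):
        phi = random.uniform(0.0, math.pi / 2)  # sky elevation angle
        blocked = depth * math.tan(phi)
        total += max(0.0, 1.0 - blocked / gap)
    return total / samples

# Slats five times deeper than the gap between them still pass ~6% of a
# uniform sky -- enough to read as "glowing" against a dark interior.
print(f"{blind_transmission(depth=25, gap=5):.1%}")
```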
 
It's a bit weird that there are no Ray Reconstruction tests. Unless I totally screwed up the embargo, which I admit is possible. But no...

"September 20th, 5pm CEST, 4pm BST, 8am PDT, 11am EDT
Review embargo for Cyberpunk 2077: Phantom Liberty is lifted.
You may now publish written or video impressions including:
Assets (screenshots and b-rolls) provided by CDPR
Self-captured screenshots"

I was a day late, in other words, while TPU provided a bunch of information but nothing on RR, which is honestly the biggest change to the graphics stuff. But then TPU also did all testing at native, no upscaling, which precludes using RR.

Games and how they look are an interesting category of graphics. Sometimes, totally incorrect lighting can be perceived as "better" — look at the low/medium screenshots on TPU. For people used to the way rasterization looks, some scenes might be deemed more atmospheric. Add in path tracing, or even partial ray tracing, and some of those scenes end up being a lot brighter and more lit up.

The scene in the room, looking at the window, is a great example. With path tracing, the blinds are glowing. I'm not sure if that's "correct" based on where the sun would be, but the blinds are mostly horizontal so probably it is. But instead of a dark room with most of the light coming off the desk (plus some weird aspects like the plastic sheet to the right in most non-PT renderings), you get a completely different result. The guy near the window now looks dark because there's a bright window behind him, and the wall around the window seems lit up maybe more than it should be? But maybe not, because light does some funny things.

So-called path tracing completely reworks all of the lighting effects. But also, RT Overdrive without Ray Reconstruction still misses some things. I'd really like to see that scene with RR versus vanilla RTO to see how it changes. My general impressions of RR are that it looks universally better than the base RTO mode and adds a lot to the way the game looks. And as discussed extensively in this piece, the fact that it's Nvidia-only tech definitely has some worrying implications. But this is the most innovative take on full ray tracing that we've seen, for sure.
Hence my request :)

Regards.
 
  • Like
Reactions: ae.arab2
@JarredWaltonGPU, I fully agree with your take. I also share your trepidation around highly proprietary rendering technologies.

IMO, it's not as if there can be no alternative. What I'd like to see is an open-source 3D engine independently implement something conceptually similar to Ray Reconstruction that can (theoretically, at least) run on any hardware, although in practice it will probably require some kind of deep learning accelerator (which both Alchemist and RDNA 3 have).

I assume Nvidia has built a patent wall around this family of techniques, but perhaps there's enough room for innovation that someone can find a way around it.
Yeah, I've wondered about whether DirectML just isn't robust enough, or what the holdup is. I mean, in the AI space, there's so much stuff built around PyTorch and Nvidia tech. Now there's a port of Automatic1111 Stable Diffusion to DirectML, but even that isn't really as straightforward as you'd think. Like, I've been going back and forth with AMD trying to figure out how to make their instructions work with a standardized testing approach.


That's supposed to get Olive and ONNX running with AMD GPUs. And it works... sort of. Here's what I've passed along to AMD:

----------------
First, runwayml/stable-diffusion-v1-5 works for a single image generation. However, if I try to change batch size or batch count, it fails with an error. The Olive models get optimized for 512x512 images, so doing two or more images at a time breaks that. But batch count just seems like a bug in the code. It should repeat the generation X number of times. (I'm trying to do 24 total images, which allows me to do either 24 x 1, 12 x 2, 8 x 3, 6 x 4, 4 x 6, or 3 x 8 to optimize GPU utilization.) With AMD right now, that would mean 24 x 1 is my only option, and that likely isn't optimal, but I had to do the same with Nod.ai.

Second, I use v2-1_768-ema-pruned.safetensors normally, which should just be in "stabilityai/stable-diffusion-2-1" — instead of "runwayml/stable-diffusion-v1-5" or "stabilityai/stable-diffusion-xl-base-1.0". But it doesn't seem to work right. I get brown images for 512x512 or 768x768. Also, I normally do 512x512 and 768x768 using the same model, but the Olive/ONNX stuff doesn't seem to allow that?

Finally, while individual 512x512 image generation times seem to be better than what I got with Nod.ai, any attempt at 768x768 images has been questionable at best. I can do 768x768 image generation (with a warning that it's using a different size than what the model was trained on), but the time to generate a single image on the 7700 XT was 36–37 seconds. That's 1.62–1.67 images per minute. Nod.ai got 2.81 images per minute on the 7700 XT — using SDv2-1. For Nod.ai, I got 8.71 images per minute at 512x512, while A1111 with Olive/ONNX gives 10.53 images per minute. But again, it might be apples and oranges since I'm doing different model versions and only a single image rather than a batch of 24.
----------------
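
(For reference, the batch math above is just the factor pairs of 24, plus a seconds-per-image to images-per-minute conversion; a quick Python sketch:)

```python
# All (batch count, batch size) pairs that produce 24 total images.
TOTAL = 24
pairs = [(TOTAL // size, size) for size in range(1, TOTAL + 1) if TOTAL % size == 0]
print(pairs)  # [(24, 1), (12, 2), (8, 3), (6, 4), (4, 6), (3, 8), (2, 12), (1, 24)]

# Converting per-image generation time to throughput.
def images_per_minute(seconds_per_image):
    return 60.0 / seconds_per_image

print(f"{images_per_minute(37):.2f}-{images_per_minute(36):.2f}")  # 1.62-1.67
```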

The point is that, while this is all supposed to be DirectML-based, all sorts of things keep breaking in various ways. There's a lot of work being done right now to tune AMD GPU performance for very specific workloads. In theory, if all the bugs get sorted out, maybe that stops happening and DirectML works just as well as ROCm and CUDA. In practice? Yeah, I'll believe it when I see it. (It took Intel about nine months to get its OpenVINO variant of Automatic1111 working more or less properly across different models and output settings.)

So while there are supposedly universal frameworks for deep learning and such, we're absolutely missing the standards, drivers, and whatever else is needed to make them usable, especially in real-time gaming, because DLSS, as an example, probably has to run its whole sequence in a matter of milliseconds on each frame. If I had to guess, we're years away from DirectML being usable for that sort of work. Even Intel, with basically no market share, did its own completely proprietary implementation of upscaling with XeSS, presumably because there wasn't a good API to build around.
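
For what it's worth, the "universal" part of DirectML inference really is just an execution provider you request from ONNX Runtime at session creation; whether it's installed, and how it performs, is the open question. A minimal sketch (the model path is a placeholder):

```python
import onnxruntime as ort

# "DmlExecutionProvider" is the DirectML backend; it's only present with
# the onnxruntime-directml package on Windows.
available = ort.get_available_providers()
print(available)

providers = (["DmlExecutionProvider", "CPUExecutionProvider"]
             if "DmlExecutionProvider" in available
             else ["CPUExecutionProvider"])

# "model.onnx" is a placeholder -- e.g., an Olive-optimized SD UNet.
session = ort.InferenceSession("model.onnx", providers=providers)
print(session.get_providers())  # the providers the session actually uses
```

And for real-time rendering, every run of a session like that would have to finish in a couple of milliseconds, frame after frame, which is a very different bar than offline image generation.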
 

elforeign

Distinguished
Oct 11, 2009
Thanks, Jarred, for a great write-up! As I wrote yesterday, the techniques being used to propel real-time rendering forward are absolutely fantastic; what is troubling is the ecosystem firewall behind the innovation.

I hope there can be a consortium of these companies that sees the need to define a standard so consumers can reap the benefits of innovation. There should be a way for them all to make money without fragmenting the user experience.
 

Deleted member 2950210

Guest
As much as I love what they did with DLSS and the RTX 4090, I'm not an Nvidia fan in any way. They have no rival, and they can charge whatever they want. Without some healthy competition from both AMD and Intel, we are getting pretty close (if not there already) to a non-competitive GPU market, and that's just bad for every PC gamer out there.
 
All I ask for is the same treatment when FSR3 releases: an in-depth article on all the upscaling tech available, with and without RT.
Just remember: I don't really like Frame Generation, aka DLSS 3. The inclusion of Reflex support is good, because it's needed, but my experience with AMD Anti-Lag has not been as positive. (Recently, for the RX 7800/7700 XT reviews, I had Anti-Lag enabled and had at least one game — Watch Dogs Legion — where it clearly screwed up performance, badly. Turning off Anti-Lag fixed it.)

Anyway, the best case is that FSR3 matches DLSS 3 for Frame Generation. But it will still add latency and require additional processing time, which means that, in terms of how games feel, the result will likely be worse than running without FSR3. Just like DLSS 3.
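
To put rough, purely illustrative numbers on that (these aren't measurements of either vendor's implementation): interpolation has to hold the next real frame back before it can generate the in-between frame, so displayed FPS goes up while pipeline delay also goes up. A crude sketch:

```python
def frame_gen_feel(render_fps, fg_overhead_ms=3.0):
    """Crude model: interpolating between frames N and N+1 means waiting
    one extra frame interval plus some processing overhead. Numbers are
    illustrative, not measured from DLSS 3 or FSR3."""
    base_ms = 1000.0 / render_fps
    displayed_fps = 2 * render_fps           # one generated frame per real frame
    delay_ms = 2 * base_ms + fg_overhead_ms  # hold-back + interpolation cost
    return displayed_fps, delay_ms

for fps in (30, 60):
    shown, delay = frame_gen_feel(fps)
    print(f"{fps} fps rendered -> ~{shown} fps shown, "
          f"~{delay:.0f} ms pipeline delay (vs ~{1000 / fps:.0f} ms without FG)")
```

The smoothness goes up, but the time from input to photons never gets better than the underlying render rate, which is why it can look great and still feel worse.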
 
Hence my request :)

Regards.
Any idea where in the game that bottom screenshot (with the blinds) was taken? I've poked around trying to capture something similar, but no luck so far, sadly.

But I will say, playing the game more, the "OMG I just went from a dark room to the bright outdoors" effect feels very overdone. Maybe it's realistic, and games have done it without ray tracing as well, but there are some oddities from the effect at times.
 
  • Like
Reactions: Order 66
But I will say, playing the game more, the "OMG I just went from a dark room to the bright outdoors" effect feels very overdone. Maybe it's realistic, and games have done it without ray tracing as well, but there are some oddities from the effect at times.
I don't think I've ever found a game that does it just right. Though anything's better than how Half-Life 2 did it.

If anything, the only thing that really bugs me in non-RT games these days is SSR (screen-space reflections) with no fallback reflection. A simple cube map would be fine, but a lot of games don't even bother.
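
The frustrating part is how simple the fallback is. A sketch of the usual pattern, written as runnable Python pseudologic with made-up names rather than real shader code:

```python
class CubeMap:
    def sample(self, direction):
        # Stand-in: a constant sky color. A real implementation samples a
        # prefiltered environment texture by direction.
        return (0.4, 0.6, 0.9)

def trace_screen_space(ray_dir, depth_buffer):
    # Stand-in for the SSR ray march: returns a screen-space hit, or None
    # when the ray leaves the screen or never intersects the depth buffer.
    return None

def reflection_color(ray_dir, color_buffer, depth_buffer, cube_map):
    """The SSR-with-fallback pattern: try screen space first; when it
    misses, sample the cube map instead of fading to nothing."""
    hit = trace_screen_space(ray_dir, depth_buffer)
    if hit is not None:
        return color_buffer[hit]
    return cube_map.sample(ray_dir)

print(reflection_color((0.0, 1.0, 0.0), {}, None, CubeMap()))  # -> sky color
```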
 
  • Like
Reactions: Order 66

Co BIY

Splendid
And as discussed extensively in this piece, the fact that it's Nvidia-only tech definitely has some worrying implications. But this is the most innovative take on full ray tracing that we've seen, for sure.

With 82% of the market and a large technology lead, Nvidia sets the standard, and it's very difficult (and probably foolish) for others to fight it. The damage caused by single-actor market domination is already locked in.
 
Just remember: I don't really like Frame Generation, aka DLSS 3. The inclusion of Reflex support is good, because it's needed, but my experience with AMD Anti-Lag has not been as positive. (Recently, for the RX 7800/7700 XT reviews, I had Anti-Lag enabled and had at least one game — Watch Dogs Legion — where it clearly screwed up performance, badly. Turning off Anti-Lag fixed it.)

Anyway, the best case is that FSR3 matches DLSS 3 for Frame Generation. But it will still add latency and require additional processing time, which means that, in terms of how games feel, the result will likely be worse than running without FSR3. Just like DLSS 3.

Be as objective as possible! And you do a good job of it, no doubt! But I clearly see more marketing for Nvidia and Intel topics and very muted coverage of AMD tech and announcements, whether measured by article word count or by the number of articles published for a launch announcement.

Comparing DLSS, which needs to be supported by the game, vs. FSR 3, which is driver-level and universal, is not apples to apples; Nvidia's and AMD's approaches are not the same. Everyone expects the FSR 3 launch to have a lot of bugs, and no one here is complaining about reviewers calling out AMD for launching their product with bugs. Just don't understate the news. For example, can we expect a separate review article on AMD Anti-Lag right now, just like this DLSS 3.5 article?

Would you revisit Anti-Lag and FSR 3 after a few months and review them again after driver updates? I would call that being neutral and objective.

And the sentiment is shared by many readers and long-time members here.

Just pointing out what I see and hoping for more neutral coverage.
 

Metteec

Distinguished
Jan 12, 2014
The charts with the higher FPS seem impressive for RT, Overdrive, etc. with DLSS 3.5, but I feel either stupid, blind, or both: I cannot tell the difference between the various screenshots rendered with RT, Overdrive, etc. Can the community help point out some of the key differences between them?
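
In the meantime, one way to hunt for the differences is to diff the screenshots programmatically so the changed regions pop out; a quick Pillow/NumPy sketch (the filenames are placeholders for two same-resolution captures of the same scene):

```python
import numpy as np
from PIL import Image, ImageChops

# Placeholder filenames -- substitute your own captures.
a = Image.open("rt_ultra.png").convert("RGB")
b = Image.open("rt_overdrive.png").convert("RGB")

diff = ImageChops.difference(a, b)
arr = np.asarray(diff).astype(np.float32)

# Amplify subtle deltas so reflection/GI changes become visible.
gain = 255.0 / max(float(arr.max()), 1.0)
Image.fromarray((arr * gain).clip(0, 255).astype(np.uint8)).save("diff.png")
print(f"mean per-pixel delta: {arr.mean():.2f} / 255")
```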
 
They should make an accelerator card for ray tracing, like the old PhysX cards, and problem solved.

An RT card might cost anywhere from $200 to $600 depending on the number of tensor cores, and people would not pay that much for RT eye candy. To make it sell, bundling the RT cores with the GPU was the correct choice.

But considering the price of the 4090, having a cheaper GPU and a separate RT card makes sense, especially if the tensor cores handle the upscaling and RT. Driver-level support for RT and upscaling is the way to go!
 
  • Like
Reactions: JarredWaltonGPU