News: Nvidia Reveals DLSS 3.5: AI-Powered Ray Reconstruction

Is FG good or bad, then? That's the real question, and I'd put the answer somewhere in between. It's interesting, and can make games look smoother. But it never (in my experience) really makes them feel more responsive.

I'm all about use cases. This type of technology would be great for prerendered scenes running at low FPS, like a movie at the NTSC rate of 29.97 FPS, or some sort of long CGI animation or cut scene. Having all the material already rendered ahead of time means there won't be any timing issues, and artifact issues should be minimal. A video game, OTOH, needs to be responsive to the user's input, and frames need to hit the screen quickly, which is why we avoid triple buffering; frame generation essentially requires that same kind of extra buffering. As discussed, it only really works at high FPS, where it's not needed; at low FPS it makes the experience even worse.
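To put rough numbers on why the base frame rate matters so much, here's a back-of-the-envelope sketch (assuming the interpolated frame can only be shown once the next real frame exists; actual pipelines and Reflex complicate this, so treat it as illustrative):

```python
# Rough illustration: frame interpolation has to hold a rendered frame until
# the *next* real frame arrives, so output lags by roughly one native frame
# time. The penalty shrinks as the native frame rate rises.
def frame_gen_estimate(native_fps):
    native_frame_ms = 1000.0 / native_fps   # time to render one real frame
    displayed_fps = native_fps * 2          # one generated frame per real frame
    added_latency_ms = native_frame_ms      # wait for the next real frame
    return displayed_fps, added_latency_ms

for fps in (30, 60, 120):
    shown, extra = frame_gen_estimate(fps)
    print(f"{fps:>3} fps native -> ~{shown} fps shown, ~{extra:.1f} ms added delay")

# 30 fps native  -> ~60 fps shown,  ~33.3 ms added delay (easy to feel)
# 120 fps native -> ~240 fps shown,  ~8.3 ms added delay (much easier to hide)
```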

Worse is that it artificially inflates benchmarks by incrementing the FPS counter on those pseudo-frames. How would you react if someone claimed a 50~100% "performance improvement" by turning on motion blur? This can lead people to make purchases they wouldn't have otherwise made.
 

bit_user

Worse is that it artificially inflates benchmarks by incrementing the FPS counter on those pseudo-frames. How would you react if someone claimed a 50~100% "performance improvement" by turning on motion blur? This can lead people to make purchases they wouldn't have otherwise made.
If the frames look as good as rendered ones, then the distinction seems artificial. Again, I go back to my earlier point about VRS. Why no controversy about "fake pixels" being interpolated by VRS? Probably because it looks good enough - especially at higher resolutions.

The way I see it, the controversy over Nvidia's Frame Generation boils down to two real issues:
  1. Added latency.
  2. Artifacts.

#2 subdivides into: how bad they are and how often they occur. If noticeable artifacts are extremely rare, then I'd tend to think of it like the motion smoother on my TV: I'm willing to overlook the artifacts, because the overall improvement is massive. However, if the artifacts tend to occur in times and places that really affect gameplay (and rapid camera movement would seem to be one of these), then it's potentially a much bigger deal.

However, if the frames looked as good as fully-rendered ones, and the technique introduced no perceivable latency, I think nearly everyone would be using it - regardless of what they said in public.
 
If the frames look as good as rendered ones, then the distinction seems artificial. Again, I go back to my earlier point about VRS. Why no controversy about "fake pixels" being interpolated by VRS? Probably because it looks good enough - especially at higher resolutions.
...
However, if the frames looked as good as fully-rendered ones, and the technique introduced no perceivable latency, I think nearly everyone would be using it - regardless of what they said in public.
VRS isn't quite the same. It just shades pixels at a higher or lower rate (so textures effectively get sampled at higher/lower resolution), based on some math that suggests whether a change in shading rate will be noticeable or not. If you turn on VRS and do screenshot comparisons, you can often detect the slight reduction in fidelity. Certain objects just look a bit soft / fuzzy. I generally leave VRS off, or at least most games don't have it enabled by default in my experience.
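That "some math" is essentially a per-tile heuristic: flat, low-contrast regions get shaded at a coarser rate. A toy sketch of the idea (purely illustrative; real implementations run on the GPU and also factor in motion, not a hand-rolled variance test like this):

```python
# Toy VRS-style heuristic: pick a coarser shading rate for low-contrast tiles.
# tile_pixels is a flat list of luminance values (0-255) for one screen tile.
def pick_shading_rate(tile_pixels):
    mean = sum(tile_pixels) / len(tile_pixels)
    variance = sum((p - mean) ** 2 for p in tile_pixels) / len(tile_pixels)
    if variance < 5.0:       # nearly flat (sky, smooth wall): 1 shade per 4x4 pixels
        return "4x4"
    if variance < 50.0:      # mild detail: 1 shade per 2x2 pixels
        return "2x2"
    return "1x1"             # high detail: full rate, no savings

print(pick_shading_rate([120] * 64))                   # flat tile -> 4x4
print(pick_shading_rate([i % 32 for i in range(64)]))  # busy tile -> 1x1
```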

Frame Generation is really more like ATW (Asynchronous Time Warp), but with differences. The biggest is that ATW on an Oculus does actually warp the previously rendered frame based on new user input. So in effect, user input gets sampled more frequently to reduce latency while keeping the displayed frame rate high (90 fps being the goal). So ATW actually is predictive and creates a new frame, while FG blends two already rendered frames.
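To oversimplify the contrast in code (a deliberately stripped-down sketch with a 1-D "view angle" standing in for a whole frame; real ATW reprojects per-pixel from head-tracking data, and FG uses optical flow plus an AI model rather than a plain average):

```python
# Frame Generation: builds an in-between frame from two ALREADY rendered
# frames. No new user input is consulted, so the result can't reflect
# anything the player did after the second frame was rendered.
def fg_interpolated_view(view_t0, view_t1):
    return 0.5 * view_t0 + 0.5 * view_t1

# Asynchronous Time Warp: takes the most recent rendered frame and shifts it
# by however far the head has moved since it was rendered, so the displayed
# view tracks input sampled just before scan-out.
def atw_warped_view(rendered_view, latest_head_view):
    correction = latest_head_view - rendered_view
    return rendered_view + correction

print(fg_interpolated_view(10.0, 12.0))  # 11.0 -> lands between two past frames
print(atw_warped_view(12.0, 13.5))       # 13.5 -> matches the newest head pose
```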

I haven't done much with VR, so I'm not sure how visible ATW frames actually end up being. Part of that is because a lot of VR games for Oculus target higher fps and lower fidelity I think. They often end up being a lot more like Fortnite at medium to high quality (without ray tracing), where they'll hit 150 to 200 fps easily on even modest hardware.
 

bit_user

VRS isn't quite the same. It just shades pixels at a higher or lower rate (so textures effectively get sampled at higher/lower resolution), based on some math that suggests whether a change in shading rate will be noticeable or not. If you turn on VRS and do screenshot comparisons, you can often detect the slight reduction in fidelity. Certain objects just look a bit soft / fuzzy. I generally leave VRS off, or at least most games don't have it enabled by default in my experience.
I really meant to pose VRS in contrast to intraframe DLSS (i.e., just the upscaling aspect). Because some people regard DLSS pixels as "fake", I was trying to point out how VRS pixels are similarly fake, yet because it apparently doesn't involve deep learning, it somehow escaped much of the controversy.

I haven't done much with VR, so I'm not sure how visible ATW frames actually end up being. Part of that is because a lot of VR games for Oculus target higher fps and lower fidelity I think. They often end up being a lot more like Fortnite at medium to high quality (without ray tracing), where they'll hit 150 to 200 fps easily on even modest hardware.
From what I've heard, VR can be extremely demanding if you play some of the titles on a high-res HMD. Someone mentioned that you can easily stress even an RTX 4090, though I forget details like which game and which HMD. @cryoburner would probably know more.
 

kiniku

And the most important "feature" wasn't mentioned: to keep you locked into Ngreedia hardware. But we know, nobody in the media dares speak such truths...
I didn't realize Nvidia was holding you at gunpoint to purchase a GPU. The good news is, you have choices! Both AMD and Intel offer GPUs too. So rather than posting personal rants and raves about how others should spend their own money, here's more news: they don't need your advice. In the meantime, enjoy your GPU. But just keep in mind which manufacturer has led GPU innovation for at least the last 15 years.
 

kiniku

DLSS is just another version of the HP ink cartridge bone thrown at its customers.

And it's amazing how quickly companies are throwing AI-flavored fluff at us - in such a short period of time!
Do the HP 3.0 cartridges do ink generation? Nvidia created upscaling for consumer GPUs and put it on the map. With Radeon, on the other hand, FSR is an afterthought to try to appear competitive. Which has been AMD's overall strategy for decades. Oh wait, AMD launched HBM memory for their GPUs. We were told that was going to be the cutting edge against Nvidia GPUs back then! LOL
 

bit_user

an afterthought to try to appear competitive. Which has been AMD's overall strategy for decades.
That's not true. RDNA2 saw AMD introducing Infinity Cache, which Nvidia didn't counter until the Ada generation (RTX 4000). It's probably the main reason RDNA2 was so competitive against Ampere.

AMD launched HBM memory for their GPUs. We were told that was going to be the cutting edge against Nvidia GPUs back then! LOL
HBM was indeed very effective at helping Fury counter the GTX 980 Ti. However, it was not enough to save Vega, partly because Nvidia countered with faster GDDR5 memory, tiled rendering, and better texture compression. In the end, HBM left the consumer domain because innovations like the ones I mentioned, plus Infinity Cache, made it unnecessary, and it added expense.

AMD also introduced chiplets in GPUs. They weren't enough to change the competitive balance, but we can't say where the RX 7000 series would've been without them. They might've been a valuable learning experience for AMD, and could set the stage for an even better implementation.
 