News Jensen says DLSS 4 "predicts the future" to increase framerates without introducing latency

Jensen is nothing more than an AI snake oil salesman these days. He's a mercenary slowly destroying the gaming industry with his BS. I hope the AI bubble bursts so badly that it reduces Huang's personal wealth by 99.99%.

AI BS aside, I eagerly await proper independent benchmarks against RTX 4000 cards with no DLSS, DLSS (upscale) only, RT only, and RT + DLSS (upscale) only. If the 5070 Ti shows real gains in these cases over the 4070 Ti and is basically a 4080 Super in raster and a bit stronger in RT, it'll probably be my next card, unless AMD can shock us with the 9070 XT.
Maybe it's snake oil, but it's still selling like hot cakes.
The bubble will burst, but still, it means tons of money for Nvidia.
 
"Neural textures"... Remember the FPS game that was less than 1MB by using procedurally-generated textures?
 
For those familiar with VR: This is Asynchronous Spacewarp (in use for nearly a decade), but in addition to using motion inputs (for VR it's head motion from the IMU, for flat games it's mouse motion and cursor inputs) and optical flow to perform simple warping and reprojection, DLSS is used to fill in parallax gaps too.

Really should have been implemented in the first place rather than interpolated framegen, but at least it's here now.
 
I’m on 4K and have a water-cooled, overclocked 666W 4090, and I use both DLSS and Frame Gen (when the implementation isn’t broken) in combination with DLDSR. It’s amazing tech, and I look forward to seeing how this new version works. DLDSR + DLSS Quality provides superior detail and anti-aliasing vs. native rendering.
Yes, people are just being mad and awkward for the sake of it.

I'm personally quite excited that they keep making DLSS better and better, especially since some of the goodies introduced will be added to the 3080 Ti I'm using for now.

Other than that, it's not a zero-sum game either. It's not like people HAVE to use 4x Frame Gen, but it's great the option is there in hardware to do so.

I hope they will introduce better 2x Frame Gen quality down the road - because that seems to be the biggest bang for the buck as far as performance increase goes with the least drawbacks.
 
I don't like "virtual" performance. DLSS is virtual performance.

Moreover, telling people that the 5070 is the same speed as a 4090 is an utter lie.

I want "raw" performance, not DLSS fake frames, no matter how close to reality they are.
 
I think some seem to forget that frame gen is not a bad thing. It's easy to sit in your PC chair and rant "I want pure rasterization". If we constantly bash new tech, we will never advance.

Like a few have pointed out, we are hitting a ceiling on node shrinks, and at a faster pace than ever. Look at Intel: I know they have made some blunders, but the CPU market is having the same issues. AMD has some issues as well, but they are just catching up.

We are in an age where software is going to be more important than hardware. Game devs need to catch up for this all to play out well; a lot of these games are poorly optimized, but I do see a sunrise. It is getting better and better every generation.

Let's all wait until our go-to guys, Jarred, Steve and... Steve, get to some benchmarking. Technology and software are evolving, and we have to evolve with them.
 
Jensen used a lot of words to try to justify not putting at least 16 GB of VRAM on everything above the 5060, and to argue that it's not a bad thing that there is very little difference on paper (and perhaps in practice), outside of the Titan-class 5090, between the 5000 series and the 4000 series.

Perhaps their 5% stock drop today is a result of that as well.
Lmao! Jensen is acting like a total nut. The money has turned him into a lizard man.
 
Honestly, going on the only specs nVidia really revealed on their slides across all the announced models, the best value for the money is the 5070 Ti. This is based on TOPS per dollar.

Code:
   5090: 3400 TOPS / $2000 = 1.70 TOPS/dollar
   5080: 1800 TOPS / $1000 = 1.80 TOPS/dollar
5070 Ti: 1400 TOPS / $750  = 1.87 TOPS/dollar
   5070: 1000 TOPS / $550  = 1.82 TOPS/dollar
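For anyone who wants to plug in different prices or models, here's a quick Python sketch of the same TOPS-per-dollar arithmetic. The TOPS figures and MSRPs are just the ones from the slides quoted above; swap in street prices as they appear.
Code:
# TOPS-per-dollar comparison, using the AI TOPS figures from Nvidia's slides
# and US MSRPs. Replace with your own numbers (e.g. street prices) as needed.
cards = {
    "5090":    (3400, 2000),
    "5080":    (1800, 1000),
    "5070 Ti": (1400,  750),
    "5070":    (1000,  550),
}

for name, (tops, price) in cards.items():
    print(f"{name:>8}: {tops} TOPS / ${price} = {tops / price:.2f} TOPS per dollar")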
 
Honestly, going on the only specs nVidia really revealed on their slides across all the announced models, the best value for the money is the 5070 Ti. This is based on TOPS per dollar.

Code:
   5090: 3400 TOPS / $2000 = 1.70 TOPS/dollar
   5080: 1800 TOPS / $1000 = 1.80 TOPS/dollar
5070 Ti: 1400 TOPS / $750  = 1.87 TOPS/dollar
   5070: 1000 TOPS / $550  = 1.82 TOPS/dollar
I think the 5070 and 5070 Ti are the best options, but we also have to wait and see what kind of markup the AIBs impose; we all know you aren't buying any of these cards at MSRP.
 
And this is where all the problems come from. UE5 has all the features pre baked. Game devs get lazy and do a poor job optimising the game. I would rather have game devs using their own engines instead...
You haven't, perchance, developed any games with an engine of your own recently, have you?

Perhaps you're then not in the best position to label others as "lazy".

I for myself have so far done little more than load things like the Unreal demos for UE4 and UE5 into the editor, as well as the ARK dev kits, to get a feel for just how difficult it would be to plug my own bits of code into those dinosaur brains (the characters within ARK, not the game developers).

And it's nightmarishly complex stuff you have to master before you take the first little step: just give it a try, the downloads are free and only a few hundred gigabytes!

And it's one of those moments where my huge workstations paid off a little bit, because the initial load of a project like that took less than ten minutes for all those shaders to compile.
 
I’m on 4K and have a water-cooled, overclocked 666W 4090, and I use both DLSS and Frame Gen (when the implementation isn’t broken) in combination with DLDSR. It’s amazing tech, and I look forward to seeing how this new version works. DLDSR + DLSS Quality provides superior detail and anti-aliasing vs. native rendering.
You need to watch that video someone posted earlier from Digital Foundry, DLSS 4 is looking great.
 
You haven't, perchance, developed any games with an engine of your own recently, have you?

Perhaps you're then not in the best position to label others as "lazy".

I for myself have so far done little more than load things like the Unreal demos for UE4 and UE5 into the editor, as well as the ARK dev kits, to get a feel for just how difficult it would be to plug my own bits of code into those dinosaur brains (the characters within ARK, not the game developers).

And it's nightmarishly complex stuff you have to master before you take the first little step: just give it a try, the downloads are free and only a few hundred gigabytes!

And it's one of those moments where my huge workstations paid off a little bit, because the initial load of a project like that took less than ten minutes for all those shaders to compile.

Ok, let me give you more context:

Maybe I was a bit rude to broadly classify all UE5 devs as lazy.

But it's true that UE5 provides all the tech features needed, and that's why game companies are dropping their own engines.

It is true that you have a lot of stuff to work on and fine-tune. Now imagine doing it in an inferior/less sophisticated game engine but still making the game look breathtaking. Now you understand the context a bit better, I guess?

And have you seen the majority of the latest games? Do you want some examples?

Batman Arkham Shadows
Star Wars Outlaws
CP 2077 when it first launched
Even Elden Ring - good game but not groundbreaking graphics.
Skull & Bones

Except for the polished version of CP 2077, Wukong and a few others, you really can't say game graphics peaked in 2024. But then again, they require beefy GPUs to run at max settings.

See RDR2, AC Black Flag, BF1, COD AF, NFS Rivals and you can see how great the graphics were and, more importantly, how well they ran, on mid-tier GPUs as well. Even The Witcher 3 with its next-gen texture pack is gorgeous by today's standards.

Do you still feel gamers shouldn't/can't complain about unoptimised games? I feel that I am justified in voicing this opinion.
 
I’m on 4K and have a water-cooled, overclocked 666W 4090, and I use both DLSS and Frame Gen (when the implementation isn’t broken) in combination with DLDSR. It’s amazing tech, and I look forward to seeing how this new version works. DLDSR + DLSS Quality provides superior detail and anti-aliasing vs. native rendering.

"when the implementation isn’t broken"

In my experience, the implementation is broken more often than not.

Lots of qualitative comparisons across sites. Would like to see some kind of quantitative measure of generated frames compared to rendered frames. Can any software grab and separate rendered vs. generated frames?

I use my eyes. Nine times out of 10 you can clearly see the difference with frame gen on versus off. At least you can in the games I play.

I use it all the time. Of course, I do still use an RTX 2060 (6GB), and the only reason I have been able to hang on to a six-year-old GPU and still get playable framerates in the latest games is DLSS.

I had a 2070 Super; believe me, my poor opinion of DLSS and ray tracing is largely shaped by my experience with that card. That said, I've tried it on my brother's 4090, and it hasn't improved much, IMHO.
 
I think predictive frame gen will have the same basic limitation as interpolated frame gen - it works best when your base frame rate is already pretty high. That's when the differences between consecutive frames get small enough that framegen can be pretty accurate. The errors it does make will be smaller and less noticeable, due to the high number of real frames and the shorter amount of time they're on-screen. Unfortunately, that means framegen will be of limited use as a crutch for truly low-performance scenarios, which is where it would be most valuable.

What I'd rather see is DLSS framegen using partially-rendered (i.e. unshaded) frames to generate predicted pixels, but the model should be able to generate feedback to the shading engine and tell it which pixels it couldn't predict with high confidence. Then, the shading engine could devote resources to computing just those unpredictable pixels.

I'll bet this is something they're working on, for a future iteration of the tech. It might eventually mean that there are no longer any frames which are purely framegen vs. rendered, but that every frame is a mix.
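To make that idea concrete, here's a rough Python/NumPy sketch of what such a confidence-gated loop could look like. To be clear, this is purely illustrative: model.predict(), shade_pixels() and the 0.9 threshold are hypothetical stand-ins for the sake of the example, not anything Nvidia has described.
Code:
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff; a real system would tune this

def generate_frame(model, history, gbuffer, shade_pixels):
    """Hypothetical confidence-gated frame generation.

    model.predict()  -> predicted RGB frame plus per-pixel confidence in [0, 1]
    shade_pixels()   -> renders only the pixels selected by a boolean mask
                        (values outside the mask are ignored)
    Both are illustrative stand-ins, not real DLSS APIs.
    """
    predicted, confidence = model.predict(history, gbuffer)   # (H, W, 3), (H, W)

    # Feedback to the shading engine: only shade where the model is unsure.
    needs_shading = confidence < CONFIDENCE_THRESHOLD          # boolean (H, W) mask
    shaded = shade_pixels(gbuffer, mask=needs_shading)         # (H, W, 3)

    # Composite: keep predicted pixels where confident, shaded pixels elsewhere.
    return np.where(needs_shading[..., None], shaded, predicted)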
 
For those familiar with VR: This is Asynchronous Spacewarp (in use for nearly a decade), but as well as just using motion inputs (for VR it's head motion from the IMU, for flat games it's mouse motion and cursor inputs) and optical flow to perform simple warping and reprojection, DLSS is used for fill in parallax gaps too.
DLSS is more than simple ASW. It uses analytic motion vectors, which were first pioneered for TAA (Temporal Anti-Aliasing). Optical flow is just used to help with shadows and specular highlights, where the analytic motion vectors tend to err the most. The AI model selects when to use which motion vectors (or neither), based on its training data.
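For anyone unfamiliar with the term: "analytic" means the engine computes each vector exactly from its own transforms, rather than guessing it from pixel similarity the way optical flow does. Here's a toy NumPy sketch of that reprojection - not the actual TAA/DLSS code, and it uses identity matrices as stand-in "projections" just to keep the example tiny.
Code:
import numpy as np

def ndc_from_world(p_world, view_proj):
    """Project a world-space point to normalized device coordinates
    with a 4x4 view-projection matrix."""
    p = view_proj @ np.append(p_world, 1.0)   # homogeneous transform
    return p[:2] / p[3]                       # perspective divide -> (x, y)

def analytic_motion_vector(p_prev, p_curr, vp_prev, vp_curr):
    """Screen-space motion of one surface point, computed exactly from the
    engine's own transforms (no guessing from pixel colours, unlike optical flow)."""
    return ndc_from_world(p_curr, vp_curr) - ndc_from_world(p_prev, vp_prev)

# Tiny example: a static point, camera translated 0.1 units right between frames.
vp_prev = np.eye(4)
vp_curr = np.eye(4)
vp_curr[0, 3] = -0.1            # moving the camera right shifts the scene left
p = np.array([0.0, 0.0, -5.0])  # the same world-space point in both frames
print(analytic_motion_vector(p, p, vp_prev, vp_curr))  # the point moved 0.1 NDC units left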
 
Complete BS. "Does not increase latency, just the appearance of it"? So if you get one frame, then 3 copies of the same frame, it has a 75% chance of being wrong. If this number goes any further up, it will become a slide show.
 
Complete BS. "Does not increase latency, just the appearance of it"? So if you get one frame, then 3 copies of the same frame, it has a 75% chance of being wrong.
Where I think you have a point (and I was thinking this as well), is that the information you really need is when something surprising and not predictable occurs, like an enemy starts to emerge from around a corner. This is not something DLSS can predict - it would need a complex interaction with the game engine, like I mentioned 4 posts ago.

If you just want smoother motion, the visual experience of some games might indeed benefit (esp. those with low d^2s/dt^2). Higher framerates in such games still provide value in the form of reduced blur and improved eye tracking of fast-moving objects.
 
DLSS is more than simple ASW. It uses analytic motion vectors, which were first pioneered for TAA (Temporal Anti-Aliasing). Optical flow is just used to help with shadows and specular highlights, where the analytic motion vectors tend to err the most. The AI model selects when to use which motion vectors (or neither), based on its training data.
Optical flow for ASW is used for the full frame, just as with TAA and DLSS.

The concept of producing synchronous frame output from asynchronous frame input via consistent frame synthesis is very much based on the work done in the VR world. Whether the frame synthesis has an NN somewhere in the pipeline is not a fundamental change to functionality.
For example, there was a demo a few years ago of decoupling render rate from output rate whilst retaining viewport update rate, synthesising an arbitrary number of frames (even more than DLSS 4's 3x generated frames) without the aid of NNs at all. It also demonstrates why "but more latency!" is not strictly correct for frame synthesis via reprojection.
 
Optical flow for ASW is used for the full frame, just as with TAA and DLSS.
TAA doesn't use optical flow and neither did DLSS until Ampere GPUs added a hardware optical flow engine. The motion vectors used by TAA and DLSS2 were analytical. What makes it possible is that you know the screen space texture coordinates of each object, so you can compute the correct motion vector, whereas optical flow is merely a guess that's based on visual similarity and can easily get confused.

The reason they added optical flow to the mix was to deal with hard lighting boundaries, which usually don't follow what object surface textures are doing. The combination of both techniques gives you the best of both worlds.

The concept of producing synchronous frame output from asynchronous frame input via consistent frame synthesis is very much based on the work done in the VR world.
I didn't say it wasn't. I just said it's more than simple ASW.

Whether the frame synthesis has an NN somewhere in the pipeline is not a fundamental change to functionality.
For example, there was a demo a few years ago of decoupling render rate from output rate whilst retaining viewport update rate, synthesising an arbitrary number of frames (even more than DLSS 4's 3x generated frames) without the aid of NNs at all. It also demonstrates why "but more latency!" is not strictly correct for frame synthesis via reprojection.
Some of these VR tricks don't attempt to estimate the world state at a new time point, but merely compensate for head movement. If you're not trying to consistently interpolate or extrapolate the entire world, then it becomes a much simpler problem and you don't need any object motion vectors nor deep learning to provide heuristics about how to deal with the myriad corner cases.

The VR techniques are primarily about preventing motion sickness. For that, all you need to do is compensate for any head movement that has occurred between when the frame was rendered and when it's being displayed. It doesn't matter if objects are a couple ms out of place, as that's not directly noticeable and won't make you sick.
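As a toy illustration of that head-movement compensation (the rotation-only "timewarp" style correction; the function names and the 2-degree yaw below are just made up for the example, not anyone's actual implementation): each output pixel's view ray is rotated back into the pose the frame was rendered at to find where to sample the already-rendered image, which is why it adds essentially no latency and needs no scene information.
Code:
import numpy as np

def reproject_ray(ray_dir_display, R_display_to_render):
    """For an output pixel (a view ray in the display-time head pose), find the
    matching direction in the render-time pose, i.e. where to sample the
    already-rendered frame. Rotation-only warp, so depth/parallax is ignored."""
    return R_display_to_render @ ray_dir_display

# Example: the head yawed 2 degrees between render and display.
theta = np.radians(2.0)
R_head_delta = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                         [ 0.0,           1.0, 0.0          ],
                         [-np.sin(theta), 0.0, np.cos(theta)]])   # render -> display pose
R_display_to_render = R_head_delta.T                              # inverse of a rotation

center_ray = np.array([0.0, 0.0, -1.0])   # ray through the centre of the output image
print(reproject_ray(center_ray, R_display_to_render))  # direction to sample the old frame from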
 
Where I think you have a point (and I was thinking this as well), is that the information you really need is when something surprising and not predictable occurs, like an enemy starts to emerge from around a corner. This is not something DLSS can predict - it would need a complex interaction with the game engine, like I mentioned 4 posts ago.

If you just want smoother motion, the visual experience of some games might indeed benefit (esp. those with low d^2s/dt^2). Higher framerates in such games still provide value in the form of reduced blur and improve eye tracking of fast-moving objects.
If they expand the motion vectors to "predict the future" of how other players are moving around you, imagine what someone scraping the memory for a hack could do with it. Vector information will always be local, and you can't validate it server-side unless you track vectors rather than point-to-point positions.

Still, it would radically change how you do tick-rate validation server-side for catching hackers, I'd imagine.

Regards.
 
You need to watch that video someone posted earlier from Digital Foundry, DLSS 4 is looking great.
Yup. My only concern right now is that each additional step in frame generation adds another 5% latency, so FG x4 has 10% higher latency than the current FG. Which... is good as far as the additional frames go... but it actually makes the input latency part even worse than before.

Frame warp would compensate for that and then some. But I'm not sure it's compatible with frame generation. Will have to wait and see.
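For what it's worth, here's the arithmetic I'm going by, written out as a quick sketch. The 5% per extra generated frame is just the assumption stated above, not an official figure.
Code:
# Latency overhead per frame-gen mode, under the assumption stated above:
# each generated-frame step beyond the current 2x FG adds ~5% latency.
BASE_STEPS = 1           # current FG: 1 generated frame per rendered frame (2x)
PENALTY_PER_STEP = 0.05  # assumed, not an official figure

for mode, generated_frames in [("FG x2", 1), ("FG x3", 2), ("FG x4", 3)]:
    extra = (generated_frames - BASE_STEPS) * PENALTY_PER_STEP
    print(f"{mode}: ~{extra:.0%} more latency than current FG")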