News: Jensen says DLSS 4 "predicts the future" to increase framerates without introducing latency

In fairness, that's just because your eye recognizes 24fps and identifies media at different frame rates as feeling "wrong"/"off". Younger viewers don't have that same reaction; 60fps doesn't inherently "feel" less real, the GG and the Boomers just weren't used to it and drew the wrong conclusions.
Well, I can buy the argument that the way someone acts for 24 fps is different than how they act for video. So, they don't like motion smoothing, because it basically transplants that performance into a medium it wasn't intended for.

However, I think high-framerate also does a better job in helping people see some nuances that distinguish good vs. bad acting, and maybe some of those who complain most vociferously are either bad actors or people who now see that their favorite actor isn't as believable as they once thought.
 
It's like you're discovering for the first time that it is, and has always been, trickery all the way down. One of our old tricks isn't scaling well anymore and we are using new tricks. But if you think that traditional rasterization isn't itself a pyramid of trickery, you should educate yourself.

This has always been the way of things. Only your ignorance and false expectations are to blame for it being a surprise to you.
Eh, it's IMHO not fair to make that comparison.

Traditional rendering has a lot of "tricks", but they're clever algorithms, optimizations, and approximations. It's reproducible and controllable, unlike the hallucinatory framegen and upscaling BS that can only regurgitate what it's already been fed and the visual styles it has been trained on.

It's a lazy cop-out, and if this is what the industry keeps betting on... ugh. Even if all the visual artifacts get solved, we'll end up with even blander, more samey-looking games than we've got now.

Rasterization-based rendering is already crazy fast, so perhaps it's OK if there aren't very big gains to be had in that part... but maybe there are still breakthroughs possible in raytracing – or something entirely different?

If we just accept the hallucinatory ML cop-out, innovation is dead.
 
"Neural textures"... Remember the FPS game that was less than 1MB by using procedurally-generated textures?
Are you thinking of the 96 KB .kkrieger?

That generates all the textures up-front, so you still need a bunch of VRAM. If the "neural texture" stuff makes any sense, it would have to be able to do this in real time.

It's an interesting idea, but if it's the usual ML/LLM it can only regurgitate stuff it's been trained on, whereas with old-school procedural textures you can get anything you're able to write code for.
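
For anyone who hasn't seen it, this is roughly what "anything you're able to write code for" looks like in practice. A minimal sketch only (Python/NumPy chosen for readability; .kkrieger itself did this in C++ with its own tooling, and the function names here are just mine):

```python
# Minimal sketch of old-school procedural texture generation: a few lines of
# math produce an arbitrarily large texture at load time, so almost nothing
# ships on disk. Illustrative only.
import numpy as np

def value_noise(width, height, cell=32, seed=0):
    """Bilinearly-interpolated random lattice noise in [0, 1)."""
    rng = np.random.default_rng(seed)
    gw, gh = width // cell + 2, height // cell + 2
    lattice = rng.random((gh, gw))            # random values on a coarse grid
    y, x = np.mgrid[0:height, 0:width] / cell
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = x - x0, y - y0
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep fade
    top = lattice[y0, x0] * (1 - fx) + lattice[y0, x0 + 1] * fx
    bot = lattice[y0 + 1, x0] * (1 - fx) + lattice[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def wood_texture(size=512):
    """Concentric rings perturbed by noise -- a classic procedural 'wood' look."""
    y, x = np.mgrid[0:size, 0:size] / size - 0.5
    rings = np.sqrt(x * x + y * y) * 40 + value_noise(size, size, cell=64) * 6
    return np.sin(rings) * 0.5 + 0.5          # grayscale texture in [0, 1]

tex = wood_texture()
print(tex.shape, tex.min(), tex.max())        # 512x512 texture from ~25 lines of code
```

Bake it into a GPU texture at load time (which is what costs you the VRAM) or evaluate it per pixel at runtime; either way, the only thing that ships on disk is the code.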
 
Well, I can buy the argument that the way someone acts for 24 fps is different than how they act for video. So, they don't like motion smoothing, because it basically transplants that performance into a medium it wasn't intended for.

However, I think high-framerate also does a better job in helping people see some nuances that distinguish good vs. bad acting, and maybe some of those who complain most vociferously are either bad actors or people who now see that their favorite actor isn't as believable as they once thought.
Somehow I think a lot of that actually has more to do with the limitations of cameras, which record something that already exists, vs. the limitations of something completely generative. It's physics and light and whatnot vs. code and compilation. Either way, there are frame rate caps past which you get negative gains; it's just that even now, with this apparently being the "current thing", most people still don't have high-refresh-rate monitors, and a lot of people outside of enthusiast bubbles don't even have anything above 60 Hz. So most people don't actually know the difference in practice; they just repeat what seems like common sense to them.

When it comes to old vs. new movies, it's not even the technology that separates them; it's how the older generations prioritized beauty, so they took their time with tweaking and put more effort into everything. The newer ones prefer flash and fast profits, so they ironically pump in money and rush everything, as well as appearing to hate beauty, because beauty is expensive to maintain and takes time. Never mind how inherently political everything is these days, even when it pretends not to be.

With "fake frames" you have more something like filters over filters depending on how you define "fake". I just don't get the religious obsession people have with promoting AI, sure it can be useful but it's not some kind of savior. You can never completely replace actual work with lazy automation. I guess the same brain circuits gets activated that used to get activated with slavery (which also was lazy automation replacing work from the pov of the slaveholder) and the most bizarre justifications will be used as if it's life and death.
 
Eh, it's IMHO not fair to make that comparison.

Traditional rendering has a lot of "tricks", but they're clever algorithms, optimizations, and approximations. It's reproducible and controllable, unlike the hallucinatory framegen and upscaling BS that can only regurgitate what it's already been fed and the visual styles it has been trained on.
Eh, texture compression has artifacts you don't necessarily want and can't always do much to control. Same with TAA. There are plenty more examples, I'm sure. A lot of the hacks in raster graphics have cases where they don't look right, and there isn't anything the developer can easily do to fix that.
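
To make that concrete, here's a toy BC1-style block quantizer (just an illustration of the idea; real encoders behind the DXT/BCn formats are far smarter, and the helper names are mine):

```python
# Rough sketch of why block texture compression bakes in artifacts the artist
# can't control: a BC1-style codec forces every 4x4 pixel block down to two
# endpoint colors plus a 4-entry interpolated palette. Not a real BC1 encoder.
import numpy as np

def compress_block_bc1_style(block):
    """block: (4, 4, 3) float RGB in [0, 1] -> reconstructed block."""
    flat = block.reshape(-1, 3)
    # Pick the two most extreme colors along luminance as endpoints.
    luma = flat @ np.array([0.299, 0.587, 0.114])
    c0, c1 = flat[luma.argmin()], flat[luma.argmax()]
    # 4-entry palette: endpoints plus two interpolated colors (as in BC1).
    palette = np.stack([c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3])
    # Each pixel snaps to the nearest palette entry (2 bits per pixel).
    idx = np.argmin(((flat[:, None, :] - palette[None]) ** 2).sum(-1), axis=1)
    return palette[idx].reshape(4, 4, 3)

rng = np.random.default_rng(1)
image = rng.random((64, 64, 3))               # stand-in for a detailed texture
out = np.zeros_like(image)
for by in range(0, 64, 4):
    for bx in range(0, 64, 4):
        out[by:by+4, bx:bx+4] = compress_block_bc1_style(image[by:by+4, bx:bx+4])
print("mean abs error:", np.abs(out - image).mean())  # loss you just live with
```

The artist can pick a less lossy format or bigger textures, but within a given format the blocking and banding are baked in; that's the "can't do much to control" part.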

I can appreciate having apprehension at the idea of some future DLSS version doing something different than how a game looked in testing, but DLSS is a feature that customers can disable, if they don't like how it looks. If a game dev really doesn't like what DLSS is doing, they don't have to support it at all.

It's a lazy cop-out, and if this is what the industry keeps betting on... ugh.
But consumers are buying 4k monitors, like maybe for productivity purposes, even though they don't really need all of that detail when gaming. Are you really saying games need to render all of those pixels? When using conventional scaling methods, you generally get either a soft image or amplification of artifacts. DLSS is just saying: "instead of trying to hand-code some heuristics that deal well with certain types of images at the expense of looking bad on others, let's take a data-driven approach to find a set of heuristics that are more generally positive."
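
If you want to see the "soft image" half of that trade-off, it only takes a few lines (a rough sketch assuming Pillow is installed; these are just the standard resampling kernels, nothing DLSS-specific):

```python
# Upscale a hard black/white edge 4x with conventional filters and measure
# how wide the blurred transition band gets. Illustrative only.
from PIL import Image
import numpy as np

# Hard vertical edge: left half black, right half white.
src = np.zeros((64, 64), dtype=np.uint8)
src[:, 32:] = 255
img = Image.fromarray(src)

for name, flt in [("nearest", Image.NEAREST),
                  ("bilinear", Image.BILINEAR),
                  ("lanczos", Image.LANCZOS)]:
    up = np.asarray(img.resize((256, 256), flt))
    row = up[128]
    # Count pixels that are neither black nor white -- the smeared transition.
    soft = np.count_nonzero((row > 10) & (row < 245))
    print(f"{name:8s} transition width: {soft} px")
```

Nearest keeps the edge hard but gives you jaggies and shimmering on real content; the smarter kernels smear the edge over several pixels. The DLSS pitch is to replace the fixed kernel with a trained, temporally-informed one.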

Instead of taking an ideological position against it, why not just look at the results? The end product is what you see. So, if it looks good, it is good!

Even if all the visual artifacts get solved, we'll end up with even blander, more samey-looking games than we've got now.
Huh? At the end of the day, it's just a scaling (and frame interpolation) technology. The overall look of a game is still fully in the hands of its creators!

If we just accept the hallucinatory ML cop-out, innovation is dead.
Are you still talking about DLSS? Because that doesn't make any sense to me. Now, if you're talking about neural rendering, I suppose I can at least understand why you might think that.