News Nvidia's next-gen DLSS may leverage AI — tech will be able to generate in-game textures, characters, and objects from scratch

Deleted member 2731765

Guest
Nvidia's next-gen DLSS may leverage AI

DLSS has always leveraged AI by the way. So word the title accordingly.

"Nvidia's next-gen DLSS may leverage AI to generate in-game assets, objects and NPCs from scratch".


But anyway, this is the actual Q&A snippet. Huang still wasn't very clear on whether this tech will be included in the next-gen version of DLSS, or whether it will be a separate AI tool for gaming.

If used in DLSS, then we could be looking at a future version 4 or 5 here. *speculation*

Q: AI has been used in games for a while now, I’m thinking DLSS and now ACE. Do you think it’s possible to apply multimodality AIs to generate frames?

A: "AI for gaming - we already use it for neural graphics, and we can generate pixels based off of few input pixels. We also generate frames between frames - not interpolation, but generation. In the future we’ll even generate textures and objects, and the objects can be of lower quality and we can make them look better.

We’ll also generate characters in the games - think of a group of six people, two may be real, and the others may be long-term use AIs. The games will be made with AI, they’ll have AI inside, and you’ll even have the PC become AI using G-Assist. You can use the PC as an AI assistant to help you game. GeForce is the biggest gaming brand in the world, we only see it growing, and a lot of them have AI in some capacity. We can’t wait to let more people have it."


Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.

https://research.nvidia.com/labs/rtr/neural_texture_compression/assets/ntc_medium_size.pdf
 

CmdrShepard

Prominent
BANNED
All these decades of steady improvements until we reached almost fully photo-realistic rendering in games, all those gigabytes of textures, highly detailed 3D models, accurate mocap and lip sync... and now we're throwing all of that out for some fake, AI-hallucinated frames?

Let me be the first to say -- NO THANKS.

That video above looks horrible to me, and any new game using these AI gimmicks for "reducing load on the CUDA cores" I've been dearly paying for, generation after generation ever since the 8800 GTX, will be on my hard-pass list.

I am not against the use of AI for improving NPC personas (it would be great for RPGs), but I don't want fake visual crap.
 

ivan_vy

Respectable
It looks like a fever dream. Won't it compromise the creators' vision? Like AI photo colorization: it looks great, but sometimes it chooses the wrong color.
I'm more in favor of it for content creation and asset compression, but for rendering... mmm... I think it needs a few more generations.
 

bit_user

Titan
Ambassador
DLSS has always leveraged AI by the way. So word the title accordingly.
...to the extent that people use AI and Deep Learning interchangeably, yes. I had the same thought.

But anyway, this is the actual Q&A snippet. Huang still wasn't very clear on whether this tech will be included in the next-gen version of DLSS, or whether it will be a separate AI tool for gaming.
It sounds to me like something fundamentally different than DLSS.

Though, I'm more inclined towards the Neural Texture Compression (NTC) solution being used here as well.
That paper didn't sound terribly practical, IMO. Texture lookups are higher-frequency than the rate at which DLSS interpolates pixels, so I don't know if it's a big win to put a lot more computation in that phase. You also need to make the model small enough that it's not going to generate more memory traffic than it saves by increasing texture compression ratios.

That gets at a broader concern I have around this AI-generated content, which is the size of the models needed to generate convincing assets. These seem like they'd chew up a lot of memory and hardware bandwidth, if they're being run mid-gameplay (i.e. as opposed to being limited to level loading).
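To put rough numbers on that bandwidth worry, here's a back-of-envelope sketch in Python. Every figure in it (the per-frame texture traffic, the 4x compression gain, the decoder size, the re-read counts) is an assumption for illustration only, not something taken from the NTC paper or measured on any real GPU.

```python
# Back-of-envelope sketch of the memory-traffic trade-off described above.
# All numbers are assumptions for illustration, not measurements.

MB = 1024 ** 2
GB = 1024 ** 3
FPS = 120

# Assumed per-frame texture traffic with conventional block compression (BC7).
bc_traffic = 300 * MB

# Assume a neural codec shrinks that traffic 4x but adds decoder-weight reads.
ntc_traffic = bc_traffic / 4
weights = 16 * MB  # assumed decoder size

for label, rereads in [("weights stay cached", 1), ("weights thrash", 20)]:
    total = ntc_traffic + weights * rereads
    print(f"NTC ({label:18s}): {total * FPS / GB:5.1f} GB/s  "
          f"vs  BC: {bc_traffic * FPS / GB:5.1f} GB/s")
```

With these made-up numbers, neural compression is a clear win only while the decoder stays resident in cache; if the weights keep getting re-fetched per sample, the bandwidth saved by the higher compression ratio evaporates, which is exactly the concern above.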

Either way, I think it's not right around the corner, but maybe something that starts to happen in 3-4 years.
 

bit_user

Titan
Ambassador
It looks like a fever dream. Won't it compromise the creators' vision?
Yeah, it will need to provide creators with enough control, but I guess big game publishers are known to be cheap. So, even if it doesn't have quite the degree of control they'd like, I'm not sure that'll keep it from being adopted by some.

In terms of realism, I believe it will at least need to be competitive with manually crafted assets.
 

valthuer

Upstanding
So Nvidia wants to create more fake stuff, like the fake frames of DLSS 3?

Oh, please. What is real anyways? After all, we're talking about virtual environments, for God's sake.

You're living in a world with Anisotropic Filtering reducing texture pixel counts, heterogeneous deferred shading reducing lighting pixel counts, Z-culling reducing rendered pixel counts, MSAA reducing rendered pixel counts (over SSAA), TSAA and other shader-based AA techniques reducing pixel counts (over MSAA), anisotropic pixels reducing pixel counts (e.g. Wipeout using variable pixel widths to raise and lower per-frame render loads to maintain 60 FPS in varying environments), Variable Rate Shading reducing pixel counts dependent on screen content, screen-space reflections reducing rendered pixel counts by just duplicating rendered pixels, probe reflections reducing rendered pixels by just copying from a texture, and so on.

Game engine optimisation is all about finding places where you can outright avoid doing work wherever possible. It's 'faking' all the way down.

It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?
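To show where a boost of that size can come from, here's a quick pixel-count calculation in Python. The input resolutions are just the typical DLSS Quality and Performance render sizes for a 4K output, assumed here purely for illustration.

```python
# How much shading work an upscaler skips relative to a native 4K frame.
native_4k = 3840 * 2160

modes = {
    "Quality (2560x1440 input)": (2560, 1440),
    "Performance (1920x1080 input)": (1920, 1080),
}

for mode, (w, h) in modes.items():
    rendered = w * h
    print(f"{mode:30s}: shades {rendered / native_4k:.0%} of native 4K pixels")
```

Quality mode shades roughly 44% of the pixels of a native 4K frame; even after the upscaler's own cost, that headroom is where a 30-50 percent (or larger) frame-rate gain can come from.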
 

bit_user

Titan
Ambassador
It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.

If you have a good upscaling and sharpening model that looks better than native plus TAA, or at least close enough to be equivalent, then what's the problem? Especially if it boosts performance by 30–50 percent?
You're singing my tune!

I maintain that every pixel at 4K is not precious. Most 4K monitors are too small for that resolution to really add much value to the gaming experience, yet a lot of people are moving that way on the resolution scale (often probably for non-gaming reasons). So it makes sense to use more approximations, interpolations, etc. to fill in those extra details.

More to the point: the proof of the pudding is in the eating. If the end user finds technologies like DLSS 3 yield a better experience than going without, they'll use them. And what's wrong with that? I use motion interpolation on my TV, in spite of the occasional artifact, because the overall image quality is a lot better.
 
It's why I hate the "fake frames" BS spouted by people as a way to dismiss DLSS and upscaling as a whole. Every pixel rendered is "fake" to varying degrees.
DLSS 3 isn't upscaling; it's frame generation, which is where the "fake frames" commentary comes from.

I do think there's a lot of value to be had with frame generation technologies, but it's being pitched all wrong. With a good implementation it can make games at high detail look really good, so long as your minimum frame rate is good enough. It can't make up for poor base performance, because of the input lag, but it can make something that already runs at 120 FPS natively even better.
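As a rough illustration of that point, here's a simplified latency sketch in Python. It assumes interpolation-style frame generation that doubles the displayed frame rate but has to hold back one native frame, and it ignores render-queue and display latency entirely.

```python
# Rough latency sketch: frame generation improves smoothness, not responsiveness.
# Deliberately simplified model, all assumptions stated above.

def with_frame_gen(native_fps):
    native_ft = 1000.0 / native_fps    # native frame time in ms
    displayed_fps = native_fps * 2     # one generated frame per real frame
    input_latency = native_ft * 2      # must wait for the *next* real frame
    return displayed_fps, input_latency

for fps in (30, 60, 120):
    shown, lat = with_frame_gen(fps)
    print(f"{fps:3d} FPS native -> ~{shown:3d} FPS shown, ~{lat:5.1f} ms input latency")
```

At a 30 FPS base the output looks like 60 FPS but input latency stays in the 60+ ms range; at a 120 FPS base the penalty is negligible, which is why it works best as a bonus on top of already-good performance.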
 

bit_user

Titan
Ambassador
I do think there's a lot of value to be had with frame generation technologies, but it's being pitched all wrong. With a good implementation it can make games at high detail look really good, so long as your minimum frame rate is good enough. It can't make up for poor base performance, because of the input lag, but it can make something that already runs at 120 FPS natively even better.
Yes, it's an enhancement, not an enabler!