News Nvidia Hints at DLSS 10 Delivering Full Neural Rendering, Potentially Replacing Rasterization and Ray Tracing

Maybe I went through this piece of news faster than I should have, but I don't think I read anything about potentially replacing Ray Tracing. Unless, of course, this kind of development is implied as a natural evolution in the process of improving graphics quality.

In any case, I very much like what Nvidia has done so far with DLSS and RT, and I'm sure it will only get better over time.
 
>Maybe I went through this piece of news faster than I should have, but I don't think I read anything about potentially replacing Ray Tracing. Unless, of course, this kind of development is implied as a natural evolution in the process of improving graphics quality.
>
>In any case, I very much like what Nvidia has done so far with DLSS and RT, and I'm sure it will only get better over time.
Either the generated graphics don't need ray tracing, or the neural engine uses ray-tracing hardware to create the graphics.

But while I agree that their advances are great, they need to contribute to the general ecosystem so that all gamers can enjoy at least part of the benefits, or work with some existing standards. It's not fun to play the Batman Arkham games and see the "Enable PhysX" toggle disabled because I don't have an Nvidia card. How would games be today if PhysX had been supported on every GPU from the start? And I don't want that for DLSS 10 or whatever.
 
I think it will replace ray tracing and shaders. I see game developers providing assets, objects, and a story, and DLSS 10 generating 70-120 frames per second with Stable Diffusion-style (or probably more advanced) AI algorithms.

I think this is the future. I think it will look far more realistic and natural than shaders and hardware ray tracing can hope for, while simultaneously needing less growth in transistors and power budgets to scale up performance.

Interestingly, using AI algorithms to generate the image stream is also going to make it easier for companies other than Nvidia, AMD, and Intel to provide gaming hardware.
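
Just to make the idea concrete, here's a toy sketch of what "generating the frames" could look like: a diffusion-style loop that iteratively denoises random noise toward the scene data the developers provide. Everything here (sizes, step count, the denoise_step stand-in) is made up for illustration; a real system would be a large trained network running on tensor cores, not this blending toy:

```python
import numpy as np

# Toy sketch: a diffusion-style generator that turns game-provided scene data
# (a conditioning buffer) into a frame by iteratively denoising random noise.
# "denoise_step" is a hypothetical stand-in for a large trained network.

H, W = 270, 480      # tiny frame for illustration
STEPS = 4            # real diffusion models use more steps (or a distilled few)

def denoise_step(frame, scene_condition, t):
    """Hypothetical denoiser: nudges the noisy frame toward the conditioning data."""
    blend = (STEPS - t) / STEPS
    return (1 - blend) * frame + blend * scene_condition

def generate_frame(scene_condition):
    frame = np.random.randn(H, W, 3)       # start from pure noise
    for t in range(STEPS, 0, -1):          # iterative denoising
        frame = denoise_step(frame, scene_condition, t)
    return np.clip(frame, 0.0, 1.0)

# Stand-in for "assets, objects and a story": a flat G-buffer-like conditioning image.
scene = np.full((H, W, 3), 0.5)
frame = generate_frame(scene)
print(frame.shape)   # (270, 480, 3) -- one generated frame; repeat 70-120 times per second
```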
 
>Maybe I went through this piece of news faster than I should have, but I don't think I read anything about potentially replacing Ray Tracing. Unless, of course, this kind of development is implied as a natural evolution in the process of improving graphics quality.

Yeah, note the DLSS 10. It took ~4.5 years to go from 1.0 (Feb 2019) to 3.5 now; at that pace, the remaining 6.5 version jumps would take roughly another 11-12 years. But progress isn't linear (especially for AI), so if the '10' holds true, I'd say it arrives a bit sooner than that.
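
If anyone wants to check that back-of-the-envelope math (dates as above; the linear pace is obviously a simplifying assumption):

```python
# Back-of-the-envelope extrapolation of the DLSS version timeline,
# using the figures above: DLSS 1.0 in Feb 2019, DLSS 3.5 about 4.5 years later.
years_elapsed = 4.5                 # DLSS 1.0 -> 3.5
versions_elapsed = 3.5 - 1.0        # 2.5 version increments
years_per_version = years_elapsed / versions_elapsed   # ~1.8 years per increment

versions_remaining = 10.0 - 3.5     # 6.5 increments to "DLSS 10"
print(f"Linear estimate: {versions_remaining * years_per_version:.1f} more years")
# -> ~11.7 more years, if the (unrealistic) linear pace held
```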

Anyway, the sentiment coming from an Nvidia VP of "Applied Deep Learning Research" is just one more indication of where Nvidia is heading for its GPUs--more AI, less brute power.

>In any case, I very much like what Nvidia has done so far with DLSS and RT, and I'm sure it will only get better over time.

I'm excited more for the non-graphical possibilities of AI in games. More intelligent NPCs for starters...better combat AI...no more canned convos...the list is endless.
 
>But while I agree that their advances are great, they need to contribute to the general ecosystem so that all gamers can enjoy at least part of the benefits, or work with some existing standards.

That would be nice if DLSS weren't one of those features that give Nvidia an edge over its competitors.

We all want this to happen, but, unfortunately, it's not a reasonable thing to expect.
 
At that point wouldn't it be inaccurate to call it DLSS if it's a rendering engine? (maybe DLR?)

I hope they get it stable enough before trying to ship it as such, otherwise we'll have a bunch of psychedelic releases full of AI hallucinations.
 
>Maybe I went through this piece of news faster than I should have, but I don't think I read anything about potentially replacing Ray Tracing. Unless, of course, this kind of development is implied as a natural evolution in the process of improving graphics quality.
>
>In any case, I very much like what Nvidia has done so far with DLSS and RT, and I'm sure it will only get better over time.

They implied it subtly. If you create an image rendered entirely by AI, the AI will duplicate what path tracing does naturally and render the image to mimic path-traced/ray-traced images.

Of course, you could have the AI reference an actual image rendered with real path tracing/ray tracing, but that's not very efficient when the AI can be taught to mimic RT all on its own.

Ray Reconstruction in a way already does this...
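
For anyone curious what "teaching the AI to mimic RT" roughly means, here's a toy denoiser in the spirit of Ray Reconstruction: take a few noisy path-traced samples per pixel and predict the clean result. The stand-in below just averages and box-filters; the real thing is a trained network, but the input/output idea is the same:

```python
import numpy as np

# Toy illustration: reconstruct a clean image from low-sample, noisy "path tracing".
# Ray Reconstruction uses a trained network for this; the box filter below is only
# a stand-in to show the shape of the problem (noisy samples in, clean image out).

def noisy_path_traced_samples(clean, samples_per_pixel=2, noise=0.3):
    """Simulate low-sample path tracing: the clean image plus heavy per-sample noise."""
    rng = np.random.default_rng(0)
    stack = clean[None] + noise * rng.standard_normal((samples_per_pixel, *clean.shape))
    return stack.mean(axis=0)

def reconstruct(noisy):
    """Stand-in for a learned denoiser: a simple 3x3 box filter."""
    padded = np.pad(noisy, 1, mode="edge")
    return sum(padded[dy:dy + noisy.shape[0], dx:dx + noisy.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

clean = np.linspace(0, 1, 64 * 64).reshape(64, 64)   # pretend ground-truth lighting
noisy = noisy_path_traced_samples(clean)
denoised = reconstruct(noisy)
print("mean error before:", np.abs(noisy - clean).mean(),
      "after:", np.abs(denoised - clean).mean())
```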
 
>That would be nice if DLSS weren't one of those features that give Nvidia an edge over its competitors.
>
>We all want this to happen, but, unfortunately, it's not a reasonable thing to expect.
Like most things Nvidia is the first mover on and heavily invests in, it will inevitably roll out elsewhere as that knowledge becomes public and demand is demonstrated: GPGPU (CUDA), variable refresh rate (G-Sync), etc. It may take time, but even now, after dragging their feet for a while, AMD are starting to implement hardware RT acceleration and deep-learning-based upscaling, just as they followed suit on GPGPU and VRR.
 
If such tech aims to replace rendering, then it had better have a path to backwards compatibility.

This was on my mind when ray tracing first hit the scene. It only gets better the more resources you throw at it, but we may hit a point where so much silicon is devoted to ray tracing that rasterization performance stops improving.
 
Hopefully this kind of tech allows for games that use an accurate representation of the entire Earth as the setting. I know MSFS does this, but I'm talking about an open-world type game.
 
No specific GPU brand's proprietary thing should EVER become the dominant requirement.

That's basically forcing devs to pick which company their game will work with and shafting the other.

That's how you basically become a monopoly in the future.
How do you think graphics cards progressed in the 1990s-2000s?

SLI was originally a 3dfx Voodoo feature.

AGP first appeared on Pentium II motherboards.

AMD/ATI was the first with tessellation.

Whatever works gets copied and is used till it is no longer relevant.

The company that comes up with the idea first gets a head start.
 
The funny thing about the comments is that they talk about hardware and software ray tracing, when hardware can't work without software and software can't work without hardware. Actually, what I wrote is not quite true. I can take multiple mirrors and different lenses and direct, track, focus and split light rays into their components.
 
>No specific GPU brand's proprietary thing should EVER become the dominant requirement.
>
>That's basically forcing devs to pick which company their game will work with and shafting the other.
>
>That's how you basically become a monopoly in the future.
And this is how competition works: your competitor makes a thing that blows everyone else out of the water, and either you find a way to do the same thing at least as well, or you get left behind in the dust.

People shouldn't be forced to dumb themselves down because their competition can't keep up. Imagine telling Usain Bolt to slow down because he's too good at sprinting.
 
Talk about joining the bandwagon of bad naming.

DLSS 2, DLSS 3, DLSS 3.5 (really 2.5) and now we are going to 10.
This isn't saying that NVIDIA is jumping to DLSS 10 right away. It's just a conversation piece about what the future might look like, and they picked a number that seemed far enough out.

So I wouldn't even hold it against them if this tech, should it ever mature, doesn't end up landing as "DLSS 10".
 
An honest question:

Is all this "neural", artificial software stuff a stop-gap to backfill a lack of hardware growth?

Haven't seen much news on where Nvidia is headed with enthusiast hardware next. Nobody has.

It's all AI these days. Where does Neural AI meet the actual hardware? No more groups of RT transistors in hardware? RT hardware plus neural software? Different hardware for neural and no more RT? Stop insulting our intelligence Nvidia..

Previous posts mention that this stuff arises from time-to-time as new features but this seems more like "we're just trying to keep up with...something".

Opinion: Can y'all pick a term other than "neural"? It makes me think of Elon Tusk playing with primates. Is the joke on us now?
 
>An honest question:
>
>Is all this "neural", artificial software stuff a stop-gap to backfill a lack of hardware growth?
>
>Haven't seen much news on where Nvidia is headed with enthusiast hardware next. Nobody has.
>
>It's all AI these days. Where does Neural AI meet the actual hardware? No more groups of RT transistors in hardware? RT hardware plus neural software? Different hardware for neural and no more RT? Stop insulting our intelligence Nvidia..
>
>Previous posts mention that this stuff arises from time-to-time as new features but this seems more like "we're just trying to keep up with...something".
>
>Opinion: Can y'all pick a term other than "neural"? It makes me think of Elon Tusk playing with primates. Is the joke on us now?

Hardware development is ... hard.

For the last 40 years the goal was simple ... make our transistors smaller and more efficient.

This has worked splendidly until now, when our transistors are getting within an order of magnitude of the size of large atoms.

A neural network is the structure that "AI" like ChatGPT, diffusion models, or even the human brain runs on.

With Nvidia RTX graphics cards, the hardware that runs these networks consists of CUDA cores, Tensor cores, and RT cores.

RT cores and Tensor cores are indeed separate things for different purposes.

The reason ray tracing and artificial intelligence appear to be linked is that both require hardware capable of doing hundreds of trillions of operations per second, if not more.

Ray tracing requires this for lifelike graphics.
AI requires this for training the model, creating links between ideas and concepts in an attempt to mimic what the human brain does naturally.

It is quite ironic how you used your own neural network to tell us to pick another term for neural network 😛
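
To put a rough, entirely made-up number on that scale: a per-pixel neural pass at 4K and high refresh rates lands in the hundreds of TFLOPs per second even with modest assumptions.

```python
# Rough scale check for "hundreds of trillions of operations per second".
# All numbers are illustrative assumptions, not Nvidia specifications.
pixels       = 3840 * 2160     # 4K output
fps          = 120
flops_per_px = 200_000         # assumed cost of a per-pixel neural pass

total_flops = pixels * fps * flops_per_px
print(f"{total_flops / 1e12:.0f} TFLOPs per second")   # ~199 at these assumptions
```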
 
>Opinion: Can y'all pick a term other than "neural"? It makes me think of Elon Tusk playing with primates. Is the joke on us now?
"Nerual" was derived from neurons, the thing that makes up our nervous system and what our brains are made of. The goal of AI is to mimic how brains work, which is a network of neurons and the adjective for nervous system functions is "neural", hence "neural network." The term is way older than Elon's relevancy. Artificial neural networks were brought up as an idea in the mid 20th century.

If you can think of something better, I'm all ears.
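
For what it's worth, the basic unit the term refers to is tiny and simple; here's a toy single neuron (weights made up for the example) just to show where the name comes from:

```python
import numpy as np

# Toy artificial neuron: weighted inputs, a bias, and a nonlinearity -- a loose
# mathematical nod to biological neurons firing once their inputs cross a threshold.

def neuron(inputs, weights, bias):
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))  # sigmoid activation

x = np.array([0.2, 0.8, 0.5])    # example inputs
w = np.array([1.5, -2.0, 0.7])   # "learned" weights, made up for this example
print(neuron(x, w, bias=0.1))    # a single activation between 0 and 1
```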
 
Nothing but more “AI” buzzword mumbo-jumbo.

The wheat is separated from the chaff by who understands that and who doesn’t (or is too nervous to admit they do). And this thread is certainly no exception!
 