News Nvidia Hints at DLSS 10 Delivering Full Neural Rendering, Potentially Replacing Rasterization and Ray Tracing

Page 2 - Tom's Hardware community discussion
Status
Not open for further replies.
An honest question:

Is all this "neural," artificial-software stuff a stop-gap to backfill a lack of hardware growth?

Haven't seen much news on where Nvidia is headed with enthusiast hardware next. Nobody has.

It's all AI these days. Where does neural AI meet the actual hardware? No more groups of RT transistors in hardware? RT hardware plus neural software? Different hardware for neural and no more RT? Stop insulting our intelligence, Nvidia.

Previous posts mention that this stuff arises from time to time as new features, but this seems more like "we're just trying to keep up with... something."

Opinion: Can y'all pick a term other than "neural"? It makes me think of Elon Tusk playing with primates. Is the joke on us now?

The people trying to argue with you are doing a bang-up job of proving you’re right.

This is it exactly, in other words.

No one knows how NVIDIA is going to deal with hard limits on the methods of improvement that have kept NVIDIA growing as a company… least of all NVIDIA.

So, they’re doing what tons of other flailing companies are doing: pivoting to buzzwords like “AI” and its close kin as a smokescreen. None of these companies know how any of this stuff would help. They’re casting about for vague promises to make to shareholders and the industry press, which they rightly believe will be all but forgotten in a year’s time.

It was NFTs before this. Now it’s this nonsense. And in six to twelve months, they’ll be on to the new nonsense. But those limits will still be there, company valuation or no.
 
This isn't saying that NVIDIA is jumping to DLSS 10 right away. It's just a conversation piece about what the future could look like, and they picked a number that seemed far enough out.

So I wouldn't even bet that this tech, if it ever matures enough, actually lands under the name "DLSS 10."
Yeah, but that's like us in 2023 talking about the Nvidia RTX-104090 Ultra Deluxe Premium Super Duper Edition slated for release in 2035.
 
I think AI is too involved in game graphics nowadays. I've been a graphics-fidelity nerd ever since I began PC gaming in 1996, but I never thought real-time ray tracing was possible. Kudos to Nvidia, but I feel the "fake" AI frame-generation intervention needed to make the dream possible is too much. I'm still an old-skool raw-horsepower GPU kind of guy, so I'm heading back to rasterisation and hybrid ray tracing. Full ray tracing involves too much AI intervention for me…
 
Back in the 1990s, being in IT, I thought: why don't they make GPUs handle each frame as just one IMAGE, so all they have to display is one IMAGE at a time? (That sounds like what the article is pointing to for the future....) Neato.

Just now, I think that would take too much horsepower, though. So I wonder what would happen if they made GPUs treat each pixel as an IMAGE instead of just a "picture element," then displayed those pictures all at once? The tiny images wouldn't have to hold a ton of information because they're so small, so it should be easy for a GPU to display a million-plus tiny images at once, no... or am I gonna burn my lips on a crack pipe here?
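For what it's worth, the "one call produces one whole frame" idea above is roughly how neural-rendering demos are usually described. Here's a toy sketch in Python/NumPy: a made-up analytic function stands in for a trained network, and a single call emits the entire frame at once instead of rasterizing geometry. The function and its formulas are invented for illustration; nothing here is Nvidia's actual method.

```python
import numpy as np

def toy_neural_frame(width, height, t):
    """Toy stand-in for a neural renderer: one call yields the whole
    frame as a single image, rather than drawing triangles one by one.
    (Hypothetical example; a real renderer would be a trained network.)"""
    ys, xs = np.mgrid[0:height, 0:width]
    u = xs / max(width - 1, 1)   # normalized pixel x in [0, 1]
    v = ys / max(height - 1, 1)  # normalized pixel y in [0, 1]
    # A fake "learned" function of pixel position and time t.
    r = 0.5 + 0.5 * np.sin(2 * np.pi * (u + t))
    g = 0.5 + 0.5 * np.sin(2 * np.pi * (v - t))
    b = 0.5 + 0.5 * np.sin(2 * np.pi * (u * v + t))
    # Stack channels into one (height, width, 3) RGB image in [0, 1].
    return np.stack([r, g, b], axis=-1)

frame = toy_neural_frame(320, 180, t=0.0)
```

Each new value of `t` gives a new frame from the same single function call, which is the sense in which the GPU would be handling "just one IMAGE at a time."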
 
It's actually rather maddening, and I'm pretending to get teary-eyed over my lack of technical knowledge. Frame generation makes a little bit of sense, but DLSS, in short, placing a prediction in front of you at scale, seems counter-intuitive to the authenticity of what you're experiencing and practicing. You're essentially just making it up as it goes, without the dev's graphic artists; it doesn't make much sense. Building ray-tracing and path-tracing components into cards made a little bit of sense too, but with my 3ds Max skills from high school, metadata, and my liberal experience online, a CME or solar flare would end up being the reason the planet is saved from corruption, because we're more fit to see by light than by "machine learning." Eventually, to rob Peter to pay Paul, they'll just be pulling preview releases of the video game from the internet, and whoever has the latest technology will win using his toes on a D-pad. I don't like DLSS, and if I have any moral integrity, making an artist's life harder is not only what we should be doing to stay employed in the future; the authenticity of a product presented as the artist's own masterpiece would be more virtuous, with the rest of the work for the general public more formally aided by AI. Wait till you see engines with Nvidia corner-poly tagging for fairness against your opponents and accuracy, all the maladaptive properties of such, well, intellectual properties...
 