News Nvidia's new tech reduces VRAM usage by up to 96% in beta demo — RTX Neural Texture Compression looks impressive

Did you happen to notice how small the object is? The cost of all texture lookups will be proportional to the number of textured pixels drawn per frame. A better test would've been a full-frame scene with lots of objects, some transparency, and edge AA.
It is small, but it also saves 1/4 GB of VRAM if I am reading it right. So if it took 10x as long to decompress 10x as much data, that should save about 2.5 GB of VRAM and not mess with frametimes too much, up into the triple digits. How do you think an 8GB card would do in those VRAM-limited situations with an effective 10.5 GB?
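For what it's worth, here's that back-of-the-envelope math spelled out. The 0.25 GB figure is the saving reported in the demo; the 10x scale factor (and therefore the 10.5 GB result) is purely an assumption on my part:

```python
# Sketch of the extrapolation above, under stated assumptions.
demo_saving_gb = 0.25        # VRAM saved in the single-object demo
scale_factor = 10            # assumed: 10x as many compressed textures in a real scene
projected_saving_gb = demo_saving_gb * scale_factor     # ~2.5 GB

card_vram_gb = 8
effective_vram_gb = card_vram_gb + projected_saving_gb  # ~10.5 GB of "effective" capacity

print(f"Projected saving: {projected_saving_gb:.1f} GB")
print(f"An {card_vram_gb} GB card would behave roughly like a {effective_vram_gb:.1f} GB one")
```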
 
Why am I inclined to think that this will be used to justify less VRAM on GPUs?

"Look at what our tech does. You don't need all that VRAM anymore."

They save costs on production, selling less for more, and we are none the wiser.

Nah, I am not a cynic. But I can spot greed when I see it.
 
  • Like
Reactions: Thunder64
Isn't the whole point of using compression to lower VRAM usage to, you know, make the game run better, or to make it look better?

So it's a feature that lowers frame rate and maybe visual fidelity to free up memory... I can then use that empty memory to do what, exactly?
It would be useful on cards with not enough VRAM where the card ends up VRAM limited before it's performance limited. The other option is to increase the resolution and quality of the textures because you can fit more in the same amount of VRAM.
 
Those tech demos don't impress me; they are just a fancy way of showing a best-case scenario that exaggerates the benefits to oblivion.

Let's see it in a real game, because we don't play benchmarks like 3DMark Time Spy or FurMark... We play games. Put it to the test in the latest Indiana Jones, or Alan Wake 2, or whatever demanding game.

To be fair, I am all for it working without any visual downgrade. But I want to see it in an actual game that has a player base, not a tech demo.
 
  • Like
Reactions: KyaraM and bit_user
It would be useful on cards with not enough VRAM where the card ends up VRAM limited before it's performance limited. The other option is to increase the resolution and quality of the textures because you can fit more in the same amount of VRAM.
The question is why a (future) graphics card that isn't even introduced yet (let alone available), and that is not cheap, doesn't come with enough VRAM in the first place.
 
Hey, I just had this awesome, great idea that they didn't realize... what about just giving players more friggin' VRAM in the GPUs?
Since they are talking about 16:1 compression, you're asking to go from a 16GB 5080 to a 256GB 5080. Are you ready to pay ~$40-50K for your next GPU?
 
Isn't the whole point of using compression to lower VRAM usage to, you know, make the game run better, or to make it look better?

So it's a feature that lowers frame rate and maybe visual fidelity to free up memory... I can then use that empty memory to do what, exactly?
To make it look better. This feature trades visual fidelity per texture against the number of textures that can be used. More textures -> looks better.
 
  • Like
Reactions: KyaraM
Are you willing to pay even more for more high-speed VRAM? It's not reasonable. Instead of increasing the amount of VRAM, it would be much more logical to change the algorithms used to compress game textures.
In computing, there's often a speed vs. space tradeoff. This neural compression scheme is a classic example: you're using a better compression format that requires more computation in order to save on the space used by the data.

It would've been nice if they'd used a more realistic example, so we could get a better sense of the impact on DLSS performance. That's perhaps the most troubling aspect of this new technique.
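Not Nvidia's codec, obviously, but the same tradeoff shows up in any general-purpose compressor. Here's a purely illustrative zlib sketch on synthetic data (so the ratios mean nothing for textures), just to show the dial between compute time and bytes saved:

```python
# Illustrates the generic speed vs. space tradeoff (zlib, not Nvidia's codec):
# higher compression levels spend more CPU time to shave off more bytes.
import random
import time
import zlib

random.seed(0)
# ~1 MiB of low-entropy placeholder data (values 0-15), so it compresses
# somewhat but not trivially.
data = bytes(random.randrange(16) for _ in range(1 << 20))

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(data) / len(compressed):4.1f}:1 in {elapsed_ms:6.1f} ms")
```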
 
It is small, but it also saves 1/4 GB of VRAM if I am reading it right. So if it took 10x as long to decompress 10x as much data, that should save about 2.5 GB of VRAM and not mess with frametimes too much, up into the triple digits. How do you think an 8GB card would do in those VRAM-limited situations with an effective 10.5 GB?
For lack of quality data, I'm unwilling to attempt any estimates or projections. Even what I managed to find in Nvidia's documentation for developers doesn't really tell us what level of compression they achieve with traditional block-based techniques, which is what we'd need to know to establish a baseline. Then, we'd need good estimates of how much space their neural compression can save at the same fidelity, which is another dimension this little benchmark did nothing to elucidate. Finally, we'd need to know how much VRAM a given game is using to store static textures at a given resolution and preset.

That is a metric ton of unknowns. IMO, a better way to go would be to wait until some game devs start playing with it and tell us what sorts of impacts it's having in their games.
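That said, the block-format side of a baseline is at least well documented (BC1 stores 4 bits per texel, BC7 stores 8); what we genuinely don't know is the fidelity-matched neural ratio, so the 16:1 line in this rough sketch is just the number quoted upthread, not data:

```python
# Rough memory math for one 4096x4096 RGBA8 texture (64 MiB uncompressed).
# BC1/BC7 bit rates are the standard block-compression figures; the 16:1
# "neural" ratio is the marketing number from upthread, not a measurement.
TEXELS = 4096 * 4096
uncompressed_mib = TEXELS * 4 / 2**20         # 4 bytes per texel -> 64 MiB

candidates = {
    "BC1 (4 bits/texel)":      TEXELS * 0.5 / 2**20,   # 8 MiB
    "BC7 (8 bits/texel)":      TEXELS * 1.0 / 2**20,   # 16 MiB
    "neural @ 16:1 (assumed)": uncompressed_mib / 16,  # 4 MiB
}

print(f"uncompressed: {uncompressed_mib:.0f} MiB")
for name, mib in candidates.items():
    print(f"{name:26s}: {mib:4.0f} MiB ({uncompressed_mib / mib:.0f}:1)")
```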
 
  • Like
Reactions: rluker5
Knowing Nvidia, we will probably see some games in the future that need a certain version of this that only works with their latest cards, and that need 17+ GB of VRAM for ultra textures if it isn't enabled.
 
Spend less money on VRAM chips, so more of the card's cost can be diverted to the GPU die, which needs to be more expensive to run that VRAM compression tech.

nVidia is at 200% scam mode.
 
More tech that we did not need...

We already have DirectStorage from NVMe, meaning the GPU can fetch texture data straight from the disk.

Can we please stop inventing stupid stuff that nobody asked for and everyone pays for?

FPS number goes up, (V)RAM number goes down... that is about the reach of the average gamer, and they don't even play these games!

We have showered and sprinkled stupid features on top of games to make them "better", when viewed from a stationary camera perspective. All these features look like motion blur at the end of the day... stop it.

Stop covering nonsense!
 
No matter what Nvidia does, it is always the same. Frame generation? Fake frames. Lossless Scaling? AFMF? So amazing and game-changing.

I think it's become apparent that Nvidia owners spend most of their time gaming, while AMD owners care more about sharper text and comment fluidity for their social media negativity. No, a new AMD GPU will not load the comment section faster.

No one mentioned AMD in the comment section until this post. Why are you so offended that people who are, in general, unhappy with Nvidia are expressing that dissatisfaction through skepticism of this technology?
Are you getting paid by Nvidia to defend them? I hope so. Or at the bare minimum, I hope you own some Nvidia stock. Otherwise you're doing unpaid work for their marketing department.

We're all used to Nvidia's MO at this point. Take a problem solved by third-party software, and turn it into a hardware solution with a new software solution that doesn't quite work as well or as easily as the prior one, but which shows marginal gains on Nvidia hardware over it. Then let their marketing department sell this proprietary solution as if it's a "game changer," as if software solutions didn't already exist. They started this with PhysX, ray tracing, DLSS, and upscaling: all areas that had third-party solutions that worked fine, and in each case they created a "solution" that offered a marginal performance improvement on Nvidia hardware, with a clunky software implementation.

Of course, we need to blame AMD for falling into Nvidia's pace with this. If AMD was thinking straight, they would have laughed at ray tracing, a tech which has only noticeably been implemented in modded Cyberpunk and Forza 8, and just kept pumping raster. But AMD is the foil that keeps on giving.
 
Last edited:
More tech that we did not need...

We already have DirectStorage from NVMe, meaning the GPU can fetch texture data straight from the disk.
That actually does still go through the host memory & OS. It's just a more streamlined path than the data would normally take.

I think DirectStorage helps, but the main thing it does is reduce the downside of not loading everything up front. When even that isn't good enough, better texture compression is preferable to heavy swapping. One way to think of this is as an option for burning a bit more of the GPU's compute capacity in exchange for better texture fidelity, because the alternative of using higher compression with block-based textures is that you'd sacrifice quality.
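To put some very rough numbers on "heavy swapping": assuming ~25 GB/s of sustained PCIe 4.0 x16 throughput (an assumption, and one that ignores latency, scheduling, and DirectStorage's GPU decompression path), mid-frame texture traffic eats into the frame budget quickly:

```python
# Back-of-the-envelope cost of streaming textures over PCIe during gameplay.
# The ~25 GB/s figure is an assumed sustained PCIe 4.0 x16 rate, not measured.
pcie_gb_per_s = 25.0
frame_budget_ms = 1000 / 60          # ~16.7 ms at 60 fps

for swapped_mb in (64, 256, 1024):
    transfer_ms = swapped_mb / 1024 / pcie_gb_per_s * 1000
    share = transfer_ms / frame_budget_ms * 100
    print(f"{swapped_mb:5d} MB streamed in: {transfer_ms:5.1f} ms "
          f"(~{share:.0f}% of a 60 fps frame)")
```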
 
  • Like
Reactions: KyaraM and P.Amini
No one mentioned AMD in the comment section until this post. Why are you so offended that people who are, in general, unhappy with Nvidia are expressing that dissatisfaction through skepticism of this technology?
Are you getting paid by Nvidia to defend them? I hope so. Or at the bare minimum, I hope you own some Nvidia stock. Otherwise you're doing unpaid work for their marketing department.

We're all used to Nvidia's MO at this point. Take a problem solved by third-party software, and turn it into a hardware solution with a new software solution that doesn't quite work as well or as easily as the prior one, but which shows marginal gains on Nvidia hardware over it. Then let their marketing department sell this proprietary solution as if it's a "game changer," as if software solutions didn't already exist. They started this with PhysX, ray tracing, DLSS, and upscaling: all areas that had third-party solutions that worked fine, and in each case they created a "solution" that offered a marginal performance improvement on Nvidia hardware, with a clunky software implementation.

Of course, we need to blame AMD for falling into Nvidia's pace with this. If AMD was thinking straight, they would have laughed at ray tracing, a tech which has only noticeably been implemented in modded Cyberpunk and Forza 8, and just kept pumping raster. But AMD is the foil that keeps on giving.
I hate Nvidia's pricing and greed. This new generation (50 series) is more like a sidegrade than a real upgrade, and even then there is very little supply, which raises the already stupid prices. I really hate all of it, BUT I like ray tracing. I had been reading about it long before Nvidia's RTX cards, and I was/am/will be excited about it. Normally ray tracing takes so much power, but what Nvidia has done to reduce the power needed is amazing, and it is still pushing that tech. It is very young, but I really like it as a technology. I also like what Nvidia is doing with AI: I think DLSS upscaling is a really amazing and useful tech. FG and MFG I'm not a big fan of, but maybe one day they can turn into something useful. I like tech and innovation; that's why I'm here.
 
  • Like
Reactions: KyaraM and bit_user
I hate Nvidia's pricing and greed. This new generation (50 series) is more like a sidegrade than a real upgrade, and even then there is very little supply, which raises the already stupid prices. I really hate all of it, BUT I like ray tracing. I had been reading about it long before Nvidia's RTX cards, and I was/am/will be excited about it. Normally ray tracing takes so much power, but what Nvidia has done to reduce the power needed is amazing, and it is still pushing that tech. It is very young, but I really like it as a technology. I also like what Nvidia is doing with AI: I think DLSS upscaling is a really amazing and useful tech. FG and MFG I'm not a big fan of, but maybe one day they can turn into something useful. I like tech and innovation; that's why I'm here.
100% agree on all points. I considered usable RT performance and AI acceleration to be requirements for my next GPU. Intel is intriguing me, and I hope they release a bigger Battlemage. Word that AMD has substantially improved on both fronts means I'm also very likely to consider AMD for my next GPU.

For Nvidia, about the only interesting thing in my price range would be the RTX 5070 Ti, if it sold near MSRP. That said, my last GPU was Nvidia and I'd kind of like to support Intel or check out another AMD card. Both are better contributors to open source, and buying either one would support competition and probably give me better value for money (tariffs aside).
 
Last edited:
  • Like
Reactions: P.Amini
100% agree on all points. I considered usable RT performance and AI acceleration to be requirements for my next GPU. Intel is intriguing me, and I hope they release a bigger Battlemage. Word that AMD has substantially improved on both fronts means I'm also very likely to consider AMD for my next GPU.

For Nvidia, about the only thing in my price range would be the RTX 5070 Ti, if it sold near MSRP. That said, my last GPU was Nvidia and I'd kind of like to support Intel or check out another AMD card. Both are better contributors to open source, and buying either one would support competition and probably give me better value for money (tariffs aside).
I am looking forward to seeing what AMD will offer next month. I'd like to see a B770 as well; there's no reason to think it wouldn't be a great card (Xe3 as well).
 
  • Like
Reactions: bit_user
100% agree on all points. I considered usable RT performance and AI acceleration to be requirements for my next GPU. Intel is intriguing me, and I hope they release a bigger Battlemage. Word that AMD has substantially improved on both fronts means I'm also very likely to consider AMD for my next GPU.

For Nvidia, about the only interesting thing in my price range would be the RTX 5070 Ti, if it sold near MSRP. That said, my last GPU was Nvidia and I'd kind of like to support Intel or check out another AMD card. Both are better contributors to open source, and buying either one would support competition and probably give me better value for money (tariffs aside).
I'm very much hoping for a bigger Battlemage card, too. The B580 and B570 both look amazing for their price. I just hope they can somehow get rid of that abysmal idle power draw...
 
That actually does still go through the host memory & OS. It's just a more streamlined path than the data would normally take.

I think DirectStorage helps, but the main thing it does is reduce the downside of not loading everything up front. When even that isn't good enough, better texture compression is preferable to heavy swapping. One way to think of this is as an option for burning a bit more of the GPU's compute capacity in exchange for better texture fidelity, because the alternative of using higher compression with block-based textures is that you'd sacrifice quality.
Yes, and it probably introduces AI artifacts that are then enhanced by AI into a muddy mess.

Great 👍