News Nvidia Uses Neural Network for Innovative Texture Compression Method

From the blog post, it appears that NVIDIA's algorithm represents these maps in three dimensions, as tensors (essentially a 3-D matrix). But the one thing NTC does assume is that every texture in the set has the same size, which can be a drawback of this method if not implemented properly.

That's also why the render time is actually higher than with BC, and why there's visual degradation at low bitrates. All the maps need to be the same size before compression, which is bound to complicate workflows and limit the algorithm's speed.

But it's an interesting concept nonetheless. NTC seems to rely on matrix multiplication, which at least makes it more feasible and versatile, thanks to the reduced disk and/or memory requirements.

And with GPU manufacturers being stingy with VRAM even on the newest mainstream/mid-range GPUs, the load is now on software engineers to find a way to squeeze more from the hardware available today. Maybe we'll see this become more feasible after 2 or 3 more generations.

Kind of reminds me of a PyTorch implementation.
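Something in the spirit of this toy sketch, at least (my own guess at the general shape, not Nvidia's actual network; the layer sizes and channel counts are made up):

Code:
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch: a low-resolution grid of latent features stands in for the
# compressed texture set, and a tiny MLP turns each sampled latent vector
# into all the material channels at once via a couple of matrix multiplies.
LATENT_CH = 16   # channels stored per latent texel (made-up number)
OUT_CH = 9       # e.g. albedo RGB + normal XYZ + roughness/metalness/AO

latent_grid = torch.randn(1, LATENT_CH, 256, 256)   # the "compressed" data

decoder = nn.Sequential(     # two matmuls and a nonlinearity
    nn.Linear(LATENT_CH, 64),
    nn.ReLU(),
    nn.Linear(64, OUT_CH),
)

def sample_texels(uv):
    """uv: (N, 2) texture coordinates in [0, 1]."""
    grid = (uv * 2 - 1).view(1, 1, -1, 2)   # grid_sample expects [-1, 1] coords
    feats = F.grid_sample(latent_grid, grid, align_corners=True)  # (1, C, 1, N)
    feats = feats.squeeze(0).squeeze(1).t()                       # (N, LATENT_CH)
    return decoder(feats)                                         # (N, OUT_CH)

print(sample_texels(torch.rand(1024, 2)).shape)   # torch.Size([1024, 9])

Because a single latent grid feeds every channel, all the maps in the set have to line up at the same resolution, which is the "same size" assumption mentioned above.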
 

Sleepy_Hollowed

Distinguished
While cool, I don't see game devs spending money or time on this, unless Nvidia or someone else takes it upon themselves to make it transparent to developers without causing any issues.
The schedules for games are already insane, and some huge-budget titles either get pushed back a lot or, worse, get released with a lot of issues on different systems (Cyberpunk, anyone?), so I don't see this being adopted until it's mature or Nvidia is willing to bleed money to make it a monopoly, which is always possible.
 
I'm imagining the holy grail of this would be just have the AI procedurally generate a texture.
That sounds nice, but the phrase "a picture is worth a thousand words" works against us here.

Without a very specific set of parameters for the AI to follow, everyone would get a different texture.

I can type "99 red balloons" into Stable Diffusion and get 99 different sets of 99 red balloons, each one slightly different from the last.
 
That sounds nice, but the phrase "a picture is worth a thousand words" works against us here.

Without a very specific set of parameters for the AI to follow, everyone would get a different texture.

I can type "99 red balloons" into Stable Diffusion and get 99 different sets of 99 red balloons, each one slightly different from the last.
And that would be fine for certain things where randomness is expected, like, say, tree bark, grass, ground clutter, dirt, or other general messiness.
 

bit_user

Polypheme
Ambassador
Meanwhile, texturing techniques have not really advanced at a similar pace mostly because texture compression methods essentially remained the same as in the late 1990s, which is why in some cases many objects look blurry in close proximity.
LOL, wut? Were they even doing texture compression, in the late '90s? If so, whatever they did sure wasn't sophisticated.

Since then, I think the most advanced method is ASTC, which was only introduced about 10 years ago.

More to the point, programmable shaders were supposed to solve the problem of blurry textures! I know they can't be used in 100% of cases, but c'mon guys!

Nvidia claims that NTC textures are decompressed using matrix-multiplication hardware such as tensor cores operating in a SIMD-cooperative manner, which means that the new technology does not require any special purpose hardware and can be used on virtually all modern Nvidia GPUs.
Uh, well, you can implement conventional texture decompression using shaders, but it's not very fast or efficient. That's the reason to bake it into the hardware!
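For reference, here's roughly what decoding a single conventional BC1 (DXT1) block involves, sketched in Python rather than shader code just to show the kind of work the fixed-function texture units normally do for free (the 8-byte block layout is the standard, publicly documented one):

Code:
import struct

def decode_bc1_block(block8: bytes):
    """Decode one 8-byte BC1 block into a 4x4 grid of RGB tuples."""
    c0_raw, c1_raw, indices = struct.unpack('<HHI', block8)

    def rgb565_to_rgb888(v):
        r = (v >> 11) & 0x1F
        g = (v >> 5) & 0x3F
        b = v & 0x1F
        return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

    c0, c1 = rgb565_to_rgb888(c0_raw), rgb565_to_rgb888(c1_raw)
    if c0_raw > c1_raw:   # 4-color mode: two interpolated colors
        c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
        c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
    else:                 # 3-color mode: midpoint plus black (punch-through)
        c2 = tuple((a + b) // 2 for a, b in zip(c0, c1))
        c3 = (0, 0, 0)
    palette = [c0, c1, c2, c3]

    # 2 bits per texel, 16 texels, packed LSB-first
    return [[palette[(indices >> (2 * (y * 4 + x))) & 0x3] for x in range(4)]
            for y in range(4)]

# 8 bytes in, 16 RGB texels out: 4 bits per texel
print(decode_bc1_block(bytes([0x1F, 0x00, 0xE0, 0x07, 0x55, 0x00, 0x00, 0x00]))[0])

Doing that table lookup per texture fetch in a shader works, it's just slower and hungrier than dedicated decode hardware, and NTC's decode step is presumably a lot heavier than this.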

in complex scenes using a fully-featured renderer, the cost of NTC can be partially offset by the simultaneous execution of other tasks
It's still going to burn power and compete for certain resources. There's no free lunch, here.

I think they're onto something, but it needs a few more iterations of development and refinement.
 

bit_user

Polypheme
Ambassador
I'm imagining the holy grail of this would be just have the AI procedurally generate a texture.
I joked about this with some co-workers, like 25 years ago. Recently, people have started using neural networks for in-painting, which probably means you might also be able to use them for extrapolation!

tl;dr NVIDIA wants the industry to move towards its specific part of the tech market.
AI is _really_ good at optimizing, though. I think they're onto something.

Also, texture compression has been key for mobile gaming, where memory capacity and bandwidth are at a premium. Any improvements in this tech can potentially benefit those on low-end hardware the most.
 

bit_user

Polypheme
Ambassador
That sounds nice, but the phrase "a picture is worth a thousand words" works against us here.

Without a very specific set of parameters for the AI to follow, everyone would get a different texture.
You're thinking of image synthesis. Deep learning has been investigated for image & video compression, for a while. The idea behind using it for texture synthesis would be similar - to get predictable and repeatable outputs. It'd be a lot closer to compression than synthesis.

Plus, techniques like Stable Diffusion are many orders of magnitude too slow to use for texture generation, in realtime rendering. Also, I'm not sure how well they could be adapted to do piecemeal synthesis, which would be essential for realtime texture generation.
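To be clear about the "repeatable" part: a decoder with frozen weights is just a pure function of the data you feed it, so the same stored input always reconstructs the same texels, whereas a diffusion-style synthesis pipeline starts from fresh noise every run. A trivial sketch (hypothetical names, nothing to do with Nvidia's code):

Code:
import torch
import torch.nn as nn

torch.manual_seed(0)
decoder = nn.Linear(16, 3)     # stands in for a trained, frozen decoder network
decoder.eval()

latent = torch.randn(1, 16)    # the "compressed" data you'd ship on disk

with torch.no_grad():
    out1 = decoder(latent)
    out2 = decoder(latent)
print(torch.equal(out1, out2))   # True: decoding is repeatable, no sampling

# Synthesis, by contrast, draws fresh noise every run:
print(torch.equal(torch.randn(1, 16), torch.randn(1, 16)))   # False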

And that would be fine for certain things where randomness is expected, like, say, tree bark, grass, ground clutter, dirt, or other general messiness.
Whatever technique is used would have to look virtually identical from frame to frame. And achieving that can't involve saving much, if any, state between frames. All of which points to a fairly deterministic method.
 
Whatever technique is used would have to look virtually identical from frame to frame. And achieving that can't involve saving much, if any, state between frames. All of which points to a fairly deterministic method.
I'm under the impression that AI generation isn't truly random, that is, if you feed it the same exact inputs, you'll always get the same output.

But even then you could still use AI to generate textures once and store the result in VRAM or cache it in system RAM.
 

bit_user

Polypheme
Ambassador
I'm under the impression that AI generation isn't truly random, that is, if you feed it the same exact inputs, you'll always get the same output.
It's not inherently random. Any form of digital computation is deterministic (i.e., as long as it's not plagued by race conditions). However, certain techniques do benefit from the injection of a random seed.

Using the same seed will produce the same output. But, if you have to remember which seed you used to avoid problems like shimmering (or worse), then it makes the technique more expensive and introduces another data structure the game engine has to manage.
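A minimal sketch of that reproducibility point, with a hypothetical per-texture seed (this is the extra bookkeeping I mean):

Code:
import torch

def make_noise(seed, shape=(4, 4)):
    # Hypothetical helper: the engine would have to track one seed per texture.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

a = make_noise(1234)
b = make_noise(1234)
c = make_noise(9999)
print(torch.equal(a, b))   # True  - same seed, identical output every frame
print(torch.equal(a, c))   # False - different seed, different texture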

But even then you could still use AI to generate textures once and store the result in VRAM or cache it in system RAM.
No, that's much too big and slow. Key objectives of texture compression are to reduce both memory bandwidth utilization and the memory footprint of textures. Furthermore, generating and storing a whole image is very inefficient if only part of it is seen, or if it's seen at a much coarser level of detail (as is often the case).
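Just to put numbers on the footprint side (standard bitrates for the existing formats; nothing here is NTC-specific):

Code:
# Footprint of a single 4096x4096 color map at standard bitrates
W = H = 4096
MiB = 1024 ** 2

rgba8 = W * H * 4        # uncompressed RGBA8: 4 bytes per texel
bc7   = W * H * 8 // 8   # BC7: 8 bits per texel
bc1   = W * H * 4 // 8   # BC1: 4 bits per texel

print(f"RGBA8: {rgba8 / MiB:.0f} MiB")   # 64 MiB
print(f"BC7:   {bc7 / MiB:.0f} MiB")     # 16 MiB
print(f"BC1:   {bc1 / MiB:.0f} MiB")     # 8 MiB

Caching fully generated, uncompressed textures means paying a 4-8x premium over BCn for every map of every material, which is exactly what compression is there to avoid.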