News AMD to present Neural Texture Block Compression in London — rivals Nvidia's texture compression research

bit_user

Titan
Ambassador
The article said:
For better or for worse, AI seems to be the future of innovation in the gaming and graphics world.
This seems like a pure win, if it can be implemented in hardware with efficiency similar to conventional texture compression! Given that textures are compressed at all, wouldn't you want the most efficient (i.e. best PSNR per bit) compression possible?
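Just to make the metric concrete, here's a tiny sketch of what I mean by "PSNR per bit": treat each codec as a (bits per texel, reconstruction MSE) pair and normalize quality by rate. The codec names and numbers below are made-up placeholders, not measurements from either paper.

```cpp
// Minimal sketch (placeholder numbers, not real benchmark data): comparing
// texture codecs by PSNR per bit, i.e. quality normalized by storage rate.
#include <cmath>
#include <cstdio>

// Peak signal-to-noise ratio for 8-bit texels, given mean squared error.
double psnr_db(double mse) {
    return 10.0 * std::log10((255.0 * 255.0) / mse);
}

int main() {
    // Hypothetical codecs: {name, bits per texel, measured MSE vs. the original}.
    struct Codec { const char* name; double bpt; double mse; };
    Codec codecs[] = {
        {"BC7 (conventional)", 8.0, 2.5},   // placeholder values
        {"Neural block codec", 4.0, 3.0},   // placeholder values
    };
    for (const Codec& c : codecs) {
        double p = psnr_db(c.mse);
        std::printf("%-20s %4.1f bpt  PSNR %5.2f dB  (%.2f dB per bit)\n",
                    c.name, c.bpt, p, p / c.bpt);
    }
    return 0;
}
```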
 
At this point these are just research papers, and neural texture compression isn't a feature available in any game yet.

NVIDIA's Neural Texture Compression still hasn't found its way into any game. That might change in a future iteration of the DLSS tech, but that's just an assumption for now.

Intel also followed up with a paper of its own that proposed an AI-driven level of detail (LoD) technique that could make models look more realistic from a distance, but this never panned out.

https://www.intel.com/content/www/u...eural-prefiltering-for-correlation-aware.html

AMD didn't mention how its texture compression technology compares against conventional techniques, but the one clear advantage AMD's tech might have over NVIDIA's neural texture compression is that it should be quite easy to implement.

That's because AMD's tech uses unmodified "runtime execution", which should allow for easy game integration, though this approach doesn't come without its own caveats. AMD is trying to catch up with NVIDIA in AI-based neural techniques, and that's not going to be an easy feat.
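To illustrate what "unmodified runtime execution" could mean in practice, here's a rough C++ sketch of the flow as I understand it (the function names are my own hypothetical placeholders, not AMD's actual API): the neural step runs once when a texture is loaded or streamed in and emits ordinary BC7 blocks, so the game's existing sampling path doesn't change at all.

```cpp
// Rough sketch of the "unmodified runtime" idea (my own guess at the flow,
// not AMD's actual API): the neural step runs once at load/streaming time,
// producing ordinary BC7 blocks, so the sampling path the game already uses
// doesn't change at all.
#include <cstdint>
#include <vector>

struct Bc7Block { uint8_t bytes[16]; };  // standard 16-byte 4x4 BC7 block

// Hypothetical placeholder: a real implementation would run the small decoder
// network over the compact neural representation and reconstruct plain BC7
// blocks that the GPU's texture units can decode natively.
std::vector<Bc7Block> decode_neural_to_bc7(const std::vector<uint8_t>& neuralBlob,
                                           uint32_t width, uint32_t height) {
    std::vector<Bc7Block> blocks((width / 4) * (height / 4));
    // ... neural decode would fill `blocks` here ...
    (void)neuralBlob;
    return blocks;
}

void load_texture(const std::vector<uint8_t>& neuralBlob,
                  uint32_t width, uint32_t height) {
    // One-time cost at load/streaming time...
    std::vector<Bc7Block> blocks = decode_neural_to_bc7(neuralBlob, width, height);

    // ...after which this is just a normal BC7 upload; shaders, samplers and
    // the rest of the engine's texture pipeline need no modifications.
    // upload_bc7_texture(blocks, width, height);  // engine-specific, omitted
    (void)blocks;
}
```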
 

bit_user

Titan
Ambassador
How is this faster than just higher VRAM bandwidth and size?
If you can get more texels/s from whatever VRAM size & speed you have (and keep in mind this applies not just to the flagship models), how is that not a good thing?

I sort of wonder whether this might be like a 2-tiered compression scheme. Like, what if there's a texture cache, and as textures are paged in, the "neural block compression" is decoded into some regular block-based texture compression scheme? Because the thing about inferencing is that it's a lot more compute-intensive than standard texture lookups. If this scheme supposedly works on existing hardware, then it must be implemented via shaders, and the only way that's probably fast enough is if you're not doing it directly inline with all your texture lookups.

Let's not forget that texture lookups are so compute-intensive that they're one of the few parts of the rendering pipeline that are cast into silicon.
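Some rough back-of-envelope arithmetic on that point (the decoder size, fetch counts, and resolutions below are entirely my own assumptions, not numbers from any of the papers): running even a tiny MLP per texture fetch adds up fast, whereas decoding each block once when a texture is paged in is a bounded, one-time cost.

```cpp
// Rough, illustrative arithmetic (my own assumptions, not from the papers):
// why running an MLP per texture fetch is far heavier than a hardware
// block-compressed lookup, and why decoding once on page-in is attractive.
#include <cstdio>

int main() {
    // Assumed tiny decoder MLP: 64-wide input, two 64-wide hidden layers, 3 outputs.
    const double macs_per_fetch = 64.0 * 64.0 + 64.0 * 64.0 + 64.0 * 3.0; // ~8.4K MACs

    const double pixels     = 3840.0 * 2160.0; // 4K frame
    const double fetches_px = 8.0;             // assumed texture fetches per pixel
    const double fps        = 60.0;

    const double macs_per_s = macs_per_fetch * pixels * fetches_px * fps;
    std::printf("Inline inference: ~%.1f TMAC/s just for texture decode\n",
                macs_per_s / 1e12);

    // Versus decoding each texel once when the texture is paged in:
    const double texels    = 4096.0 * 4096.0;        // one 4K x 4K texture
    const double macs_once = macs_per_fetch * texels; // per texel, paid once
    std::printf("Decode-on-page-in: ~%.2f GMAC per 4Kx4K texture, paid once\n",
                macs_once / 1e9);
    return 0;
}
```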
 