• Happy holidays, folks! Thanks to each and every one of you for being part of the Tom's Hardware community!

News: Intel Strives to Make Path Tracing Usable on Integrated GPUs

Actually, the most important point is that Intel is working on neural tools that will enable real-time path tracing with 70-90% compression relative to native path tracing renderers.

At the same time, the neural rendering technique will also offer better performance, thanks to that compression. Intel has already discussed this in the "Path Tracing a Trillion Triangles" session.

Their method achieves 70-95% compression compared to "vanilla" path tracing.
 
Intel is working on neural tools that will enable real-time path tracing with 70-90% compression relative to native path tracing renderers.
I looked into this more since 70-90% compression implies either a huge amount of redundancy or a huge amount of data being thrown away, or both. And neither makes sense.

My best guess is that the data actually presented to the renderer is just enough input parameters to feed an AI that generates the rest of the geometry. As long as you're not seeding the AI with random data all the time, and both sides use the exact same model, it should spit out the same result.
 
Intel has already discussed this in the "Path Tracing a Trillion Triangles" session.
I saw that same link on another tech site that covered this story. It doesn't seem to link to the session mentioned, nor can I find it from any of the links on that page. Not only that, I can't find it anywhere on their site, nor on the linked YouTube channel.

I think they got that link from Intel's article. My best guess is that it's planned as one of the sessions at SIGGRAPH 2023. I can't be sure, because they haven't published the 2023 syllabus yet.

BTW, that's the official site of the classic text:

[cover thumbnail of Real-Time Rendering, 4th Edition]

Good book... and actually not that expensive, if you price it per page or per kg. ;)
 
Actually, the most important point is that Intel is working on neural tools that will enable real-time path tracing with 70-90% compression relative to native path tracing renderers.

At the same time, the neural rendering technique will also offer better performance, thanks to that compression. Intel has already discussed this in the "Path Tracing a Trillion Triangles" session.

Their method achieves 70-95% compression compared to "vanilla" path tracing.
I looked into this more since 70-90% compression implies either a huge amount of redundancy or a huge amount of data being thrown away, or both. And neither makes sense.

My best guess is that the data actually presented to the renderer is just enough input parameters to feed an AI that generates the rest of the geometry. As long as you're not seeding the AI with random data all the time, and both sides use the exact same model, it should spit out the same result.
If you look at how zip and rar report compression ratios, then that 70-95% would mean they compress the data by only 5-30%.
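For what it's worth, the two conventions give very different numbers. A quick sketch of the arithmetic (my own illustration, not from Intel's materials):

```python
def size_after(original_bytes: int, savings_pct: int) -> float:
    """Compressed size when 'X% compression' means X% of space saved."""
    return original_bytes * (100 - savings_pct) / 100

# Reading "95% compression" as 95% space saved (Intel's apparent meaning):
print(size_after(1000, 95))    # 50.0 bytes remain
# Reading it zip/rar-style, as output being 95% of the original size:
print(1000 * 95 // 100)        # 950 bytes remain
```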
 
I looked into this more since 70-90% compression implies either a huge amount of redundancy or a huge amount of data being thrown away, or both. And neither makes sense.
Just think of it like image or video compression. Yes, there's a lot of redundancy, but the real reason they can achieve such high compression ratios is that they're willing to accept lossy compression.
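To make the lossy point concrete, here's a toy quantizer (purely illustrative, nothing to do with Intel's actual technique): it keeps only the top 2 bits of each 8-bit sample, an immediate 75% reduction, and reconstruction is only approximate.

```python
def compress(samples):
    """Keep only the top 2 bits of each 8-bit sample (lossy)."""
    return [s >> 6 for s in samples]

def decompress(codes):
    """Reconstruct by placing each code at the midpoint of its bucket."""
    return [c << 6 | 0b100000 for c in codes]

original = [7, 64, 130, 200, 255]
restored = decompress(compress(original))
# restored only approximates original; 6 of every 8 bits were discarded (75%).
```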

So looking into it some more, my best guess is the actual data being presented to the renderer is just enough input parameters to feed an AI to generate the rest of the geometry.
No, this isn't about compressing geometry - it's for approximating lighting, reflection, and refraction. You're much less likely to notice approximations there than in geometry. Heck, conventional game engines are full of hacks for approximating those sorts of things with a lot less accuracy than what they're proposing.
 
No, this isn't about compressing geometry - it's for approximating lighting, reflection, and refraction. You're much less likely to notice approximations there than in geometry. Heck, conventional game engines are full of hacks for approximating those sorts of things with a lot less accuracy than what they're proposing.
The only place I'm seeing the 70-95% compression figure is on this page.

And the only section where the value is mentioned is the following one, which talks about geometry.

A versatile toolbox for real-time graphics ready to be unpacked

One of the oldest challenges and a holy grail of rendering is a method with constant rendering cost per pixel, regardless of the underlying scene complexity. While GPUs have allowed for acceleration and efficiency improvements in many parts of rendering algorithms, their limited amount of onboard memory can limit practical rendering of complex scenes, making compact high-detail representations increasingly important.

With our work, we expand the toolbox for compressed storage of high-detail geometry and appearances that seamlessly integrate with classic computer graphics algorithms. We note that neural rendering techniques typically fulfill the goal of uniform convergence, uniform computation, and uniform storage across a wide range of visual complexity and scales. With their good aggregation capabilities, they avoid both unnecessary sampling in pixels with simple content and excessive latency for complex pixels, containing, for example, vegetation or hair. Previously, this uniform cost lay outside practical ranges for scalable real-time graphics. With our neural level of detail representations, we achieve compression rates of 70–95% compared to classic source representations, while also improving quality over previous work, at interactive to real-time framerates with increasing distance at which the LoD technique is applied.​
If AI is in play here, then this tells me they're using it to reduce a complex model into just enough data for an AI to fill in the gaps as needed. So sure, this could imply lossy compression, since the AI may never generate an exact copy of the original, ground-truth model. But at the same time, if this is used to create other kinds of geometry in the same class (e.g., a tree) by simply tweaking some parameters, then is it really data (de)compression at that point?

I mean, I can run Stable Diffusion to create way more data than the 5-6GB or so model used to run it.

Or, to generalize this even further: if I have an algorithm that takes a 32-bit seed and spits out a 256x256 maze, and assuming each cell is a byte, is this a 16384:1 compression system?
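The determinism argument can be sketched in a few lines. Here random bytes stand in for an actual maze algorithm, and `maze_from_seed` is my own illustrative name, not anything from Intel's work:

```python
import random

def maze_from_seed(seed: int, size: int = 256):
    """Deterministically expand a 32-bit seed into a size x size grid of byte cells."""
    rng = random.Random(seed)   # same seed -> same pseudo-random stream
    return [[rng.randrange(256) for _ in range(size)] for _ in range(size)]

# 4 bytes of stored state reproduce 256*256 = 65,536 bytes of output,
# so long as both sides run the exact same generator.
assert maze_from_seed(0xDEADBEEF) == maze_from_seed(0xDEADBEEF)
```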
 
Or, to generalize this even further: if I have an algorithm that takes a 32-bit seed and spits out a 256x256 maze, and assuming each cell is a byte, is this a 16384:1 compression system?
No, but if you get a 256x256 maze and can find a 32-bit seed that generates it, then it's 16384:1 compression.

I'm not sure what you misunderstand, because "just enough data that an AI can use and fill in the gaps as needed" is pretty much spot on for AI compression. It doesn't imply lossy at all (though that could be the case). You could have the AI regenerate the data and, in addition, keep the exact difference from the source (possibly compressed in another manner), and then get an exact copy. You could of course skip the correction and use what the AI generated, and then you'd indeed get something lossy, just a lot more efficient for the quality than other compression methods.
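That lossless variant is easy to sketch. Below, `predict` is a hypothetical stand-in for a deterministic model shared by encoder and decoder; only the seed and the (ideally small) residual get stored:

```python
def predict(seed, n):
    """Hypothetical deterministic 'model': both sides must run the same one."""
    return [(seed * (i + 1)) % 256 for i in range(n)]

def encode(data, seed):
    pred = predict(seed, len(data))
    residual = [(d - p) % 256 for d, p in zip(data, pred)]
    return seed, residual   # residual compresses well if predictions are good

def decode(seed, residual):
    pred = predict(seed, len(residual))
    return [(p + r) % 256 for p, r in zip(pred, residual)]

data = [10, 20, 30, 40]
assert decode(*encode(data, seed=7)) == data   # exact round trip: lossless
```

Drop the residual and you get the lossy mode instead: the decoder keeps only what the model predicts.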
 
Yup, I don't understand the underlying tech. I'm glad Intel is doing what AMD promised: ray tracing for everyone. (Is "ray tracing" correct here? I don't know the difference between ray and path tracing.)
 
The only place I'm seeing the 70-95% compression is in this page

And taking the only section where the value is mentioned is in here, which talks about geometry.
Thanks. I can't be sure, but I get the sense they're referring to improved sampling or LoD-selection heuristics. That's still not exactly compression of the actual geometry data, but I could be wrong about that.
 
90p resolution at 2 fps ray tracing!
🤣
Well, maybe a little bit faster than that.
And in reality, AMD has already put ray tracing into mobile chips in co-op with Samsung. Not good, but it's there…
 
90p resolution at 2 fps ray tracing!
🤣
Well, maybe a little bit faster than that.
No, they're talking about making it fast enough that people will actually want to use it.

And in reality, AMD has already put ray tracing into mobile chips in co-op with Samsung. Not good, but it's there…
Qualcomm now has it in the latest Adreno iGPUs. I'm pretty sure I read that Apple now has it in hardware, as well.