A versatile toolbox for real-time graphics ready to be unpacked
One of the oldest challenges and a holy grail of rendering is achieving a constant rendering cost per pixel, regardless of the underlying scene complexity. While GPUs have accelerated and improved the efficiency of many parts of rendering algorithms, their limited onboard memory can constrain the practical rendering of complex scenes, making compact high-detail representations increasingly important.
With our work, we expand the toolbox for compressed storage of high-detail geometry and appearance that seamlessly integrates with classic computer graphics algorithms. We note that neural rendering techniques typically fulfill the goal of uniform convergence, uniform computation, and uniform storage across a wide range of visual complexity and scales. With their good aggregation capabilities, they avoid both unnecessary sampling in pixels with simple content and excessive latency for complex pixels containing, for example, vegetation or hair. Previously, this uniform cost lay outside practical ranges for scalable real-time graphics. With our neural level-of-detail representations, we achieve compression rates of 70–95% compared to classic source representations, while also improving quality over previous work, at interactive to real-time framerates that improve with the distance at which the LoD technique is applied.
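To make the uniform-cost argument concrete, the following is a minimal illustrative sketch, not the paper's implementation: it contrasts a classic stochastic approach, whose per-pixel work grows with the visual complexity inside the pixel footprint, against a neural decode whose work is fixed by the network size. All names here (TinyMLP, the feature vector, the cost counters) are hypothetical stand-ins.

```python
# Illustrative sketch only: contrast scene-dependent per-pixel cost with a
# constant-cost neural lookup. Not the authors' code; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """Fixed-size decoder: one evaluation per pixel, independent of scene content."""
    def __init__(self, n_in=8, n_hidden=16, n_out=3):
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.1

    def __call__(self, features):
        h = np.maximum(features @ self.w1, 0.0)  # ReLU hidden layer
        return h @ self.w2                       # RGB-like output

def classic_pixel_cost(samples_per_pixel):
    """Stand-in for stochastic sampling: work scales with pixel complexity."""
    return sum(1 for _ in range(samples_per_pixel))  # e.g. rays or hits traced

def neural_pixel_cost(mlp, feature_vector):
    """Constant work: a single fixed-size network evaluation per pixel."""
    mlp(feature_vector)
    return 1

mlp = TinyMLP()
features = rng.standard_normal(8)  # latent features fetched for this pixel's footprint

# A "simple" pixel (flat wall) vs. a "complex" pixel (vegetation or hair):
print(classic_pixel_cost(4), classic_pixel_cost(256))  # cost varies: 4 vs. 256
print(neural_pixel_cost(mlp, features),
      neural_pixel_cost(mlp, features))                # cost fixed: 1 vs. 1
```

The point of the sketch is only the cost profile: the neural path aggregates the content of the pixel footprint into a compact latent, so simple and complex pixels pay the same decode cost, which is what makes the uniform-cost property attractive for real-time budgets.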