Granite Texture Streaming 101: An Overview From Graphine’s CEO


computerguy72

Distinguished
Sep 22, 2011
190
1
18,690
ARK: Survival Evolved would seem to be the game that needs this middleware the most. Streaming all the repeating textures that make up player-made buildings is the slowest part of the game.

bit_user

Polypheme
Ambassador
I'm not sold on the idea here. In most cases, I think more sophisticated procedural textures are a better solution. In the sample image above, what's really needed is a vector-based texture format (or something better at compressing line art).

Part of the problem with their solution is that even though they solve the problem of video memory, the assets still have to come from somewhere. That means more system RAM, more HDD/SSD space, and more network utilization (either in the form of bigger downloads or if they stream over the net, as mentioned). There's no free lunch, here.

Of course, I can believe there are exceptions, when nothing will work as well as a high-res PBR texture. If any game devs have any examples to share, please do.

michaelzehr

Distinguished
Sep 18, 2008
60
0
18,640
Good point, bit_user. Though encapsulating texture fetching into middleware could lead to smart textures that might be vector graphics or PBR, with the data presented to the next layer in the best/fastest possible way, depending on circumstances.

It sounds related to smart, predictive cache algorithms. It's not really a surprise that some of these problem solutions have gone from "very complex caching logic to keep the working unit fed" to "put everything in memory" and back to the former (with side trips into "wait for Moore's law to catch up"). In theory, a smart middleware like this would behave differently on an HDD than on an SSD (in practice not caring about the technology itself, but observing how long it takes to load things, and making space/time tradeoff decisions based on that).
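
That adaptive idea is easy to sketch. Below is a toy Python cache (all names hypothetical, nothing Granite-specific) that times its own tile loads and widens its prefetch window when the backing store looks slow:

```python
import time
from collections import OrderedDict

class AdaptiveTextureCache:
    """Toy LRU tile cache that times its own loads and prefetches
    more aggressively when the backing store (HDD vs. SSD) is slow."""

    def __init__(self, capacity, loader):
        self.capacity = capacity    # max tiles held in memory
        self.loader = loader        # callback: tile_id -> tile data
        self.tiles = OrderedDict()  # tile_id -> data, in LRU order
        self.avg_ms = 1.0           # running estimate of load latency
        self.depth = 1              # how many predicted tiles to prefetch

    def _load(self, tile_id):
        start = time.perf_counter()
        data = self.loader(tile_id)
        ms = (time.perf_counter() - start) * 1000
        self.avg_ms = 0.9 * self.avg_ms + 0.1 * ms  # moving average
        # Slow storage -> speculate further ahead; fast -> stay lean.
        self.depth = 4 if self.avg_ms > 5.0 else 1
        return data

    def get(self, tile_id, predicted=()):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)  # mark most recently used
        else:
            self.tiles[tile_id] = self._load(tile_id)
        for nxt in predicted[:self.depth]:   # speculative prefetch
            if nxt not in self.tiles:
                self.tiles[nxt] = self._load(nxt)
        while len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict least recently used
        return self.tiles[tile_id]
```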

RomeoReject

Reputable
Jan 4, 2015
239
0
4,680
Based on what was said about it, wouldn't offloading some of the work to RAM and the SSD be a great way to improve things, compared to relying 100% on the GPU? Even a high-end card these days typically tops out at around 8-12 GB. In comparison, 16 GB is basically the bare minimum for system RAM these days, and plenty of kits offer 32 GB or more. Storing super-high-resolution files and just using them as needed sounds like a really intelligent solution.

bit_user

Polypheme
Ambassador
Perhaps 16 GB is the bare minimum for anyone doing 4K gaming. But I'd say 8 GB is still more common.

The thing is that system RAM is used for a lot besides textures. The game needs it to hold 3D models, AI, sounds, and the data structures needed for physics. Those are probably the main things, anyway.

But, you don't want a game consuming all your physical memory. The OS needs some, plus a few background apps will typically be running. And any unused RAM gets turned into disk cache. So, there's typically a performance rationale for leaving a bit of headroom.

Anyway, he wasn't talking about storing it exclusively in RAM. The article mentions streaming textures from HDD/SSD. So, the RAM is used like a cache/staging area. That said, it's still going to use up RAM and memory bandwidth vs. if you somehow didn't need to do any texture streaming.

sixto1972

Honorable
Oct 7, 2012
6
0
10,510
You must also realize it doesn't stream in all the textures at high resolution. Only the areas closest to the field of view are streamed at full resolution. Areas further from view get mip-mapped textures of lower resolution. As you move closer to an area, higher-resolution textures are streamed in as needed, and textures that cannot be seen or are out of view are culled from video RAM using a predictive algorithm. The entire game scene is turned into a tileset that is optimized, indexed, and stored in an optimized arrangement on disk. It is very similar to the megatexture engine in id Software's RAGE.
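
For anyone who wants the mechanics spelled out, here's an illustrative Python sketch of that distance-based mip selection; the function names and thresholds are made up, not Granite's actual scheme:

```python
import math

def mip_level(distance, base=1.0, levels=10):
    """Pick a mip level from viewer distance: each doubling of the
    distance drops one level of detail (level 0 = full resolution)."""
    if distance <= base:
        return 0
    return min(int(math.log2(distance / base)), levels - 1)

def tile_requests(visible_tiles, distance_of):
    """Turn visible tile IDs into (tile, mip) streaming requests, so
    only the nearest tiles are fetched at full resolution."""
    return [(t, mip_level(distance_of(t))) for t in visible_tiles]

# Tiles at 0.5, 3 and 20 units away request mips 0, 1 and 4.
dist = {"near": 0.5, "mid": 3.0, "far": 20.0}
print(tile_requests(dist, dist.get))
```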

bit_user

Polypheme
Ambassador
I thought that was pretty clear from the article (besides being obvious to anyone with a graphics background).

Of course, MIP mapping comes at a price: if you have the whole texture loaded in memory, the overhead is about a 33% larger footprint. However, their texture streamer probably uses the same tile sizes for each level of detail, so it might do a bit worse than that (assuming trilinear interpolation or better). But if you're using texture maps, that's a price worth paying, IMO.
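
For reference, the ~33% figure is just a geometric series: each mip level holds a quarter of the texels of the level above it, so the extra storage converges to 1/3. A quick check:

```python
# Each mip level has 1/4 the texels of the one above, so the extra
# storage is the series 1/4 + 1/16 + 1/64 + ... = 1/3, i.e. ~33%.
overhead = sum(0.25 ** n for n in range(1, 40))
print(f"mip chain overhead: {overhead:.4f}")  # -> 0.3333
```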

d_kuhn

Distinguished
Mar 26, 2002
704
0
18,990
Procedural textures would be a good solution for some parts of the application space... just like vector-based approaches. The reality (of reality) is that it's not going to be solved by any one approach. These tools go into a bag of tricks that the developer can apply intelligently to solve their particular problems based on what they're trying to do. If the systems are well integrated, then maybe the application deploys approaches based on the storage and compute assets available.

Regarding the article, you'd think we'd learn in the computer world to "never say never". Applications with 8 TB of texture data are only a couple of orders of magnitude away... we'll DEFINITELY see them unless something drastic happens to alter the graphics-engine landscape. Might be a few years, but graphics have already progressed farther than we have left to go.

bit_user

Polypheme
Ambassador
Not sure about that. With Moore's law slowing, leading to a tapering of memory & flash capacities, and an increasing focus on downloads vs. sales of physical media, I don't see how 8 TB is going to be practical to deliver or store. It would take me 7.4 days to download that @ 100 Mbps (or ~17 hrs @ 1 Gbps, and I don't see any potential for mainstream internet access beyond that in the next decade). Plus, ever-faster GPUs are going to be more capable of generating procedural textures than ever before.
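
The back-of-the-envelope arithmetic behind those figures, for anyone who wants to rerun it:

```python
def download_time_s(size_bytes, link_bps):
    """Ideal transfer time in seconds, ignoring protocol overhead."""
    return size_bytes * 8 / link_bps

size = 8 * 10**12                                    # 8 TB of textures
print(download_time_s(size, 100e6) / 86400, "days")  # ~7.4 days @ 100 Mbps
print(download_time_s(size, 1e9) / 3600, "hours")    # ~17.8 hours @ 1 Gbps
```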

For certain professional applications, GPUs are already accessing terabytes of data via AMD's SSG technology. But those are specialized applications involving GIS and volumetric datasets.

sixto1972

Honorable
Oct 7, 2012
6
0
10,510
From bit_user:

"I thought that was pretty clear from the article (besides being obvious to anyone with a graphics background).

Of course, MIP mapping comes at a price: if you have the whole texture loaded in memory, the overhead is about a 33% larger footprint. However, their texture streamer probably uses the same tile sizes for each level of detail, so it might do a bit worse than that (assuming trilinear interpolation or better). But if you're using texture maps, that's a price worth paying, IMO."

You are right. In retrospect, I added no clarity to the subject. Thank you.

bit_user

Polypheme
Ambassador
No need to apologize. I do stuff like that, now and then. And maybe your post helped clarify the tech, for others.

wifiburger

Distinguished
Feb 21, 2016
613
106
19,190
I don't get this, and I don't understand why any developers would use this tech!
Even the most garbage or high-end engine is already coded not to load anything that isn't visible, and already adjusts texture/polygon memory depending on draw distance.

Just saying, I'm no expert! Sounds like a hoax to me!

bit_user

Polypheme
Ambassador
True, but I think their selling point is that they divide textures into tiles and predictively fetch those tiles, handling streaming not just from main memory but also from the HDD/SSD.

I'm sure developers wouldn't buy it, if it didn't offer a benefit. Their claims are easily testable (by developers), so it's not like snake oil.

My only beef is that it feels like a crutch for developers to take the lazy way out, by using megatextures when they should be using better shaders. But I'm not a game dev, so I'm only speculating.

blackbit75

Distinguished
Oct 10, 2010
49
0
18,530
bit_user: I always thought mathematical textures would be the best way to advance computer graphics (and raytracing). I didn't know about 'procedural textures'; it seems to be what I was hoping for. One of the best things about this way of producing textures is the ability to give infinite detail. Current textures are made for a certain viewing distance; get near the object and, despite having a 12 GB GPU, we're stuck with blurry objects.

bit_user

Polypheme
Ambassador
Exactly. And it doesn't stop at just scale. They can compute pixel color based on several parameters, including the viewing & lighting angles.

https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch18.html

Not only can shaders compute the color of each pixel, but different types of shaders can be used to deform and instance scene geometry.

RenderMan Shading Language is the procedural shading language Pixar developed in the 1980s.

https://en.wikipedia.org/wiki/RenderMan_Shading_Language

Traditionally, most of the textures you see in feature films are procedural shaders - not texture maps. (Although, the line can get a bit blurred, since procedural shaders can also reference images & multidimensional arrays.)

For realtime graphics, OpenGL has GLSL and Direct3D has HLSL.

https://en.wikipedia.org/wiki/OpenGL_Shading_Language
https://en.wikipedia.org/wiki/High-Level_Shading_Language

Vulkan also has shaders, but no shading language of its own, per se. You can use GLSL or any other tool that can compile shaders to Vulkan's standard low-level intermediate representation, SPIR-V.
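
To make the procedural idea concrete, here's a minimal Python stand-in for what a fragment shader evaluates per pixel. A real version would be a few lines of GLSL or HLSL, but the point is the same: the detail is computed on demand, not stored, so it never goes blurry:

```python
def checker(u, v, scale=8.0):
    """Procedural checkerboard: a color computed for any (u, v) at any
    magnification, with no stored image to run out of resolution."""
    return float((int(u * scale) + int(v * scale)) % 2)

# The same surface stays perfectly sharp at wildly different zooms,
# because the pattern is evaluated analytically per sample.
print(checker(0.1, 0.7))             # coarse view -> 1.0
print(checker(0.1003, 0.7001, 800))  # extreme close-up -> 0.0, still crisp
```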