News: Nvidia RTX 5060 Ti final specs and launch day allegedly leaked

I remember this video demonstrating the problems with 8GB vs 16GB cards.

Oh, you can absolutely play with settings or mods to require more VRAM. But from a fresh vanilla install, those are the results.

As for that video, people really do not understand how modern graphics frameworks operate. I cannot stress this enough: we now treat VRAM as a cache. Just because a program loads 10GB of data into system RAM does not mean we need 10GB of L3 cache. Having 10GB of L3 cache and fitting the entire data set into it would be wonderful for performance; see the AMD 9800X3D.

Textures not rendering has nothing to do with VRAM; that is a graphics engine bug, likely the result of a developer trying to tune something to prevent a potential performance hit. It appears to be an issue with async shader compilation interacting with a delayed render while the pipeline is paused: instead of waiting, the game just seems to skip the resource entirely and move on.

Resources are always loaded first into a region of system memory called Display Memory or Shared Memory, depending on whom you ask. At that point the game engine has done its job, and it's up to the framework (DX11/12/Vulkan) to preload that resource into VRAM. If the framework does its job, the assets are always there and every frame renders properly. If it misses something, it has to pause the rendering pipeline and transfer that resource from system memory to VRAM over the memory and PCIe buses, then resume rendering the frame once the transfer completes. From the user's point of view this shows up as a stutter, or a spike in frame latency. Every now and then that's not a big deal, but if the framework is constantly having to do this, it becomes very noticeable.
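To make that concrete, here's a toy model of the stall-on-miss behavior. This is purely illustrative C++, not real driver or D3D12/Vulkan code; the names (VramCache) and the numbers (an 8 ms baseline frame, a 30 ms PCIe transfer) are invented:

```cpp
// Toy model of VRAM-as-cache. NOT real driver or D3D12/Vulkan code;
// VramCache and the millisecond costs are made up for illustration.
#include <cstdio>
#include <string>
#include <unordered_set>
#include <vector>

struct VramCache {
    std::unordered_set<std::string> resident;  // assets already in VRAM

    // Returns the extra frame time (ms) spent faulting the asset in.
    double request(const std::string& asset) {
        if (resident.count(asset)) return 0.0;   // hit: render proceeds
        // Miss: the pipeline pauses while the asset crosses the PCIe bus.
        constexpr double PCIE_COST_MS = 30.0;    // invented transfer cost
        resident.insert(asset);
        return PCIE_COST_MS;
    }
};

int main() {
    VramCache vram;
    std::vector<std::string> frameAssets = {"albedo", "normal", "mesh"};

    for (int frame = 0; frame < 3; ++frame) {
        double frameMs = 8.0;                    // baseline render time
        for (const auto& a : frameAssets)
            frameMs += vram.request(a);          // misses show up as stutter
        std::printf("frame %d: %.1f ms%s\n", frame, frameMs,
                    frameMs > 16.6 ? "  <-- stutter" : "");
    }
}
```

Run it and frame 0 comes out around 98 ms while the warmed-up frames sit at 8 ms, which is exactly the first-load stutter pattern you see in frame-time graphs.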

PC Games Hardware (PCGH), a German website, does a good job of illustrating this. They used high-res texture pack mods and settings (DLSS/FG/RT) to demonstrate the difference between the cards.


[Image: CP2077_FHD_DLAA-pcgh.png]


Diablo 4 really highlights this happening.

[Image: Diablo-4_FHD_DLAA-pcgh.png]


You can see those massive stutters as VRAM fills up and the graphics framework has to evict resources and reload them from system memory.
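Those repeating spikes are what cache thrashing looks like: once the per-frame working set is even slightly bigger than VRAM, an LRU-style cache evicts something it will need again next frame, so the transfer cost never goes away. A minimal sketch (the capacities and the simple LRU policy are my own simplification; real drivers use more sophisticated residency management):

```cpp
// Toy LRU eviction model: when the per-frame working set exceeds "VRAM"
// capacity, every frame evicts something it needs again next frame, so
// the PCIe transfer cost repeats forever. Numbers are invented.
#include <cstdio>
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

class LruVram {
    size_t capacity_;
    std::list<std::string> lru_;  // front = least recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> map_;
public:
    explicit LruVram(size_t capacity) : capacity_(capacity) {}

    bool request(const std::string& asset) {      // returns true on miss
        auto it = map_.find(asset);
        if (it != map_.end()) {                   // hit: refresh recency
            lru_.splice(lru_.end(), lru_, it->second);
            return false;
        }
        if (lru_.size() == capacity_) {           // full: evict the oldest
            map_.erase(lru_.front());
            lru_.pop_front();
        }
        lru_.push_back(asset);
        map_[asset] = std::prev(lru_.end());
        return true;
    }
};

int main() {
    LruVram vram(3);  // "VRAM" holds 3 assets; each frame touches 4
    std::vector<std::string> frameAssets = {"a", "b", "c", "d"};
    for (int frame = 0; frame < 3; ++frame) {
        int misses = 0;
        for (const auto& a : frameAssets) misses += vram.request(a);
        std::printf("frame %d: %d misses\n", frame, misses);  // 4 every frame
    }
}
```

With the working set just one asset over capacity, every single access misses, every frame, forever. That is why the frame-time chart shows a sustained sawtooth rather than a one-off spike.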

This all centers on the word "need". You absolutely do not "need" more than 8GB of VRAM for mid-tier 1080p, or possibly even 1440p, gaming. Having more than 8GB of VRAM does allow someone to use DLSS/FG/RT along with high-resolution (4K) texture packs, which is the stuff you would normally associate with a 70-class card or higher. People may want more than 8GB to use the new shiny stuff, and that is perfectly fine.

Now, the 50-series cards might change this: the memory bandwidth increase is pretty large, and DLSS/MFG/RT might not be the performance destroyers they were on older cards.
 
Now, on the whole "missing textures!!!" thing, there is another way this could happen, and it's kind of a hacky rendering technique.

When a rendering pipeline hits a missing resource, it's supposed to stop and wait for that resource before proceeding, to ensure everything renders correctly. But sometimes a mostly correct frame now is better than a completely correct frame later. A game engine can therefore opt to skip a slow-loading resource and render the rest of the frame, with the idea that it will catch the resource once it's available. Because you're still rendering instead of pausing, the missing resource takes longer to load than it would if you had stalled the pipeline, but the user won't notice unless they are looking for it. Having said that, this technique results in slow resources taking one or two hundred milliseconds to appear, not going missing entirely.

Like I said, it's kind of hacky, but it makes sense if you want frame pacing to be as smooth as possible, even if that means occasionally skipping a slow-loading resource.
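Here's a sketch of that skip-and-continue pattern. Everything in it is my own illustration: the names (TexturePool, PLACEHOLDER), the flat-grey fallback texture, and the use of std::async stand in for whatever streaming system a real engine would use:

```cpp
// Sketch of "skip the slow resource, render a placeholder" streaming.
// TexturePool and PLACEHOLDER are invented; real engines use their own
// streaming systems rather than std::async.
#include <chrono>
#include <cstdio>
#include <future>
#include <string>
#include <thread>
#include <unordered_map>

using Texture = std::string;  // stand-in for a real GPU texture handle
static const Texture PLACEHOLDER = "flat_grey";

struct TexturePool {
    std::unordered_map<std::string, std::shared_future<Texture>> loading;

    // Kick off a slow background load (simulated with a sleep).
    void beginLoad(const std::string& name) {
        loading[name] = std::async(std::launch::async, [name] {
            std::this_thread::sleep_for(std::chrono::milliseconds(120));
            return Texture("hires_" + name);
        }).share();
    }

    // Non-blocking fetch: if the load hasn't finished, don't stall the
    // pipeline -- hand back the placeholder and catch up on a later frame.
    Texture fetch(const std::string& name) {
        auto& f = loading.at(name);
        if (f.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
            return PLACEHOLDER;
        return f.get();
    }
};

int main() {
    TexturePool pool;
    pool.beginLoad("brick_wall");
    for (int frame = 0; frame < 5; ++frame) {      // ~33 ms per frame
        std::printf("frame %d draws: %s\n", frame,
                    pool.fetch("brick_wall").c_str());
        std::this_thread::sleep_for(std::chrono::milliseconds(33));
    }
}
```

The first few frames draw the placeholder and the high-res texture pops in around 120 ms later, which matches the "one or two hundred milliseconds" catch-up described above: frame pacing stays smooth because nothing ever blocked.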
 
You just had to drop $50 more to get the 9070XT and get 5-6x the performance.
It's not quite that simple. I only have a 550W PSU, so going from 250W to 300W could have been a problem (i.e. an additional PSU upgrade expense). Also, I have not seen a single benchmark suggesting the 9070XT is 75-100% faster (double, really?) than an RTX 5070. Please provide some links.