I disagree. Asset pop-in is real and it's annoying. In open-world games, more VRAM equals more render distance. Low VRAM means distant assets can't be loaded and ready. I'm sure we've all seen games that render at a great frame rate while the view is static, but spin the camera suddenly and it turns into a stuttering mess of fuzzy/missing textures and assets while you wait for limited VRAM to be swapped around. That's an example of enough GPU but not enough VRAM, and it was often the case for me on a 3080 10GB. Even though I had enough GPU, I had to turn down options to fit the VRAM. I moved up to a 3090 24GB, barely more GPU performance, but much smoother in some games.
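A toy sketch of the trade-off described in that post (every number and name here is hypothetical, purely to illustrate the "more VRAM = more render distance" idea): with a fixed VRAM budget, a streaming engine can only keep assets out to some distance resident, so a larger budget keeps more of the surrounding world loaded and ready.

```python
# Toy model of distance-based asset streaming under a VRAM budget.
# All sizes and counts are made up for illustration.

ASSET_SIZE_MB = 64           # assumed average size of one streamed asset
VRAM_BUDGET_MB = 8 * 1024    # e.g. an 8 GB card (buffer/OS overhead ignored here)

def density(r):
    # Hypothetical density model: asset count grows with each distance ring's circumference.
    return 8 * (r + 1)

def resident_rings(assets_per_ring, budget_mb=VRAM_BUDGET_MB, asset_mb=ASSET_SIZE_MB):
    """How many distance 'rings' of assets around the camera fit in the VRAM budget."""
    used, r = 0, 0
    while True:
        ring_cost = assets_per_ring(r) * asset_mb
        if used + ring_cost > budget_mb:
            return r           # first ring that no longer fits = effective streaming radius
        used += ring_cost
        r += 1

print("8 GB budget  ->", resident_rings(density), "rings resident")                      # 5
print("24 GB budget ->", resident_rings(density, budget_mb=24 * 1024), "rings resident") # 9
```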
There isn't really anything wrong with that as long as the drivers can shuffle stuff between memory channels to keep the net load balanced. Out of 12GB worth of stuff in VRAM, I'm pretty sure the bulk of it would be perfectly fine at half-bandwidth.

I wouldn't put it past Nvidia to put 12GB on a 128-bit bus and just run uneven channels. We've seen that before.
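For context on the "half-bandwidth" point, a quick back-of-the-envelope calculation (the bus width and data rate below are illustrative, not a claim about any specific card): data that is only reachable over half the memory channels is read at half the aggregate rate.

```python
# Rough GDDR bandwidth arithmetic with illustrative numbers.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

full = bandwidth_gbs(128, 21)  # all channels available: 336 GB/s
half = bandwidth_gbs(64, 21)   # data sitting behind only half the channels: 168 GB/s
print(full, half)
```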
I'm not basing the VRAM statements on what utilities claim is used, but on actual real-world testing. Most games at 1080p are fine, a few at 1440p can exceed 8GB actual use, and a growing number are exceeding 8GB at 4K. From what I've seen, due to the number of buffers used, a game that uses perhaps 4GB of VRAM at 1080p and max settings will need just over 6GB of VRAM at 4K (Red Dead Redemption 2 and several other games that show approximate memory use follow this pattern). And a game that needs 6GB of VRAM at 1080p will need just over 8GB at 4K.

Bingo! And even scaling with resolution does not heavily impact memory requirements. Textures are textures: at the same game settings you load the same textures, and the same geometry, regardless of render resolution (and if you scale back texture resolution at higher render resolutions - if that's even an option the game exposes - you actually reduce the memory footprint!), so it's only the buffers that scale with render resolution. Take 1080p to 4K for example: buffer size quadruples, but even if we assume a good 10 buffers (for depth and normals and Z and diffuse and whatever other buffers your render pipeline involves) at 32bpp each, that goes from ~79MB at 1080p to ~316MB at 4K. Not a huge impact on total vRAM usage.
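A quick check of that arithmetic, assuming exactly 10 render targets at 32 bits (4 bytes) per pixel as the post does:

```python
# Render-target memory vs. resolution, assuming 10 buffers at 4 bytes per pixel.
def buffer_mb(width, height, num_buffers=10, bytes_per_pixel=4):
    return width * height * num_buffers * bytes_per_pixel / (1024 ** 2)

print(f"1080p: {buffer_mb(1920, 1080):.0f} MB")  # ~79 MB
print(f"4K:    {buffer_mb(3840, 2160):.0f} MB")  # ~316 MB
```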
The vast majority of vRAM is not taken up by buffers or active-use textures and geometry, but by opportunistically cached textures and geometry from the rest of the level that is crammed into any spare vRAM and overwritten (with zero performance impact) if/when actual live data needs that space. That opportunistically cached data may never make its way on screen before being overwritten, but any good engine should be trying to cache it anyway when the PCIe bus is not otherwise occupied and there is spare vRAM, because there is zero penalty from doing so and it may have a small chance of avoiding a cache miss and memory or drive read later. As DirectStorage moves from something individual developers implement to a commonly available API, even that will become less of a necessity as access overheads from out-of-vRAM data are reduced.
When you see a game 'use' large quantities of vRAM, that figure is almost always the point at which the game simply ran out of data worth caching for the loaded level/chunk, not the amount of data it actually needs for rendering.
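To illustrate the opportunistic caching behaviour described in that post, here is a minimal sketch (a hypothetical structure, not any real engine's code): cached entries are dropped the instant live data needs the space, so the reported "VRAM usage" mostly reflects how much spare room the cache has filled.

```python
from collections import OrderedDict

class VramPool:
    """Toy VRAM allocator: live allocations always win, and any spare space is
    filled with an opportunistic cache that is silently evicted (oldest first)."""

    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.live = {}              # asset -> size in MB, needed for rendering right now
        self.cache = OrderedDict()  # asset -> size in MB, speculative copies of nearby level data

    def used_mb(self):
        return sum(self.live.values()) + sum(self.cache.values())

    def _make_room(self, needed_mb):
        # Dropping cached entries costs nothing: the data still exists in RAM / on disk.
        while self.cache and self.capacity - self.used_mb() < needed_mb:
            self.cache.popitem(last=False)

    def allocate_live(self, asset, size_mb):
        self.cache.pop(asset, None)  # already cached? promote it (the cache paying off)
        self._make_room(size_mb)
        self.live[asset] = size_mb

    def cache_opportunistically(self, asset, size_mb):
        # Only fill genuinely spare space; never displace live data for speculation.
        if asset not in self.live and self.capacity - self.used_mb() >= size_mb:
            self.cache[asset] = size_mb

pool = VramPool(capacity_mb=8192)
pool.allocate_live("current_area_assets", 3500)
for i in range(20):
    pool.cache_opportunistically(f"nearby_chunk_{i}", 400)
print(f"reported usage: {pool.used_mb()} MB, live data: {sum(pool.live.values())} MB")
# -> reported usage ~7900 MB even though only 3500 MB is actually needed for rendering
```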
Three years down the road, all the AAA games will still be console ports from the current generation, so what powers 70 native fps today will probably still power 60 native fps. My GTX 980 lasted through a whole console generation, or four PC GPU generations. When was the last AAA game that required the latest and greatest GPU just to run? My GPU upgrade decisions going forward will depend purely on AI compute and CG rendering.