Oh really? So you think that a 6700 XT is faster than a 3080 rather than the 3080 running out of VRAM?
Just game optimizations, nuff said. It's a very simple situation: just compare a 4060 Ti 8GB with a 4060 Ti 16GB. The 16GB card operates in clamshell mode, so it has identical memory bandwidth, and we get the same performance in virtually all titles. Issues only start to appear when settings and resolutions are dialed up to max, and by then you're hitting very unenjoyable framerates.
Another way to do this requires some programming knowledge. You need to compile a CUDA program that statically allocates VRAM, then keep it running on a second screen or in the background to prevent WDDM from evicting its allocation into system RAM. Then you can take a 4090 and test different titles with different amounts of available VRAM.
https://forums.developer.nvidia.com/t/need-a-little-tool-to-adjust-the-vram-size/32857/6
Use the version at the bottom, it keeps banging on the memory to prevent WDDM from evicting it.
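If you'd rather roll your own than use the tool from that thread, a minimal sketch of the idea looks something like this. To be clear, this is not the linked tool, just the same concept; the block size, touch interval, and file name are my own arbitrary choices:

```cuda
// vram_hog.cu -- hold N GiB of VRAM and keep touching it so WDDM
// treats it as an active resource instead of evicting it to system RAM.
// Build: nvcc -o vram_hog vram_hog.cu    Run: ./vram_hog 4   (GiB to hold)
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial kernel that writes every byte, marking the allocation "hot".
__global__ void touch(unsigned char *buf, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = (unsigned char)(buf[i] + 1);
}

int main(int argc, char **argv) {
    size_t gib   = (argc > 1) ? strtoull(argv[1], NULL, 10) : 1;
    size_t bytes = gib << 30;

    unsigned char *buf = NULL;
    if (cudaMalloc(&buf, bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc of %zu GiB failed\n", gib);
        return 1;
    }
    printf("Holding %zu GiB of VRAM; Ctrl+C to release.\n", gib);

    // Keep banging on the memory so the residency manager never sees it idle.
    const int threads = 256;
    size_t blocks = (bytes + threads - 1) / threads;
    for (;;) {
        touch<<<(unsigned)blocks, threads>>>(buf, bytes);
        cudaDeviceSynchronize();
    }
}
```

Run it before launching the game and you've effectively turned the 4090 into a card with (24 − N) GiB of usable VRAM.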
Social media article writers don't really know how to do this stuff, so we just get a per-card list of metrics without much explanation of why and how the numbers are what they are.

It gets pretty technical from here; if you want, you can dig into how WDDM works and why "running out of VRAM" is a silly notion. Short answer: you won't experience any performance issues unless the scene you are rendering requires more than total GPU VRAM in the middle of a frame. If that happens, you either render null and keep moving, or wait while the resource is loaded across the PCIe 4 bus at 30-32GB/s. Loading it in necessitates evicting another resource first, only you were kinda using that other resource, so you'll have to swap it back in very soon and evict something else, and do this non-stop. That creates a stutter effect and is incredibly noticeable: both the FPS and frame times instantly go down the tube, and it can quickly become unplayable. A good analogy is opening a file in Adobe Premiere that is larger than your total system RAM, forcing Windows to hammer the page file non-stop; things get very ugly very fast when that happens. It's WDDM, not the game or the GPU driver, that is responsible for keeping the GPU VRAM populated with the resources it needs.
You can start reading up on the finer details of WDDM here, but fair warning: it's not for the faint of heart.
https://learn.microsoft.com/en-us/w...ndows-vista-display-driver-model-design-guide
Essentially, your argument boils down to suggesting that games need more than 12GB of VRAM to render a single frame because the PCIe bus isn't fast enough.