[SOLVED] VRAM bottleneck

Jan 10, 2022
So, hello everyone. My current biggest issue in most games is a VRAM bottleneck. I have an ASUS STRIX GTX 960 2GB, and the fact that I have 8GB of DDR3 does not help. I play most games at 1080p, but there is a lot of stuttering due to the VRAM being maxed out. I'm thinking about buying a cheap used native 720p monitor to reduce VRAM usage, so I would like to know if playing at 720p instead of 1080p will significantly reduce the amount of VRAM used.
 
Solution
Why not try it? 720p on a 1080p monitor is going to look a little worse than on a native 720p monitor (if you can find one), but it will show you how the games will run. And although the amount of VRAM needed will go down, it may not be by much.
Jan 10, 2022
Why not try it? 720p on a 1080p monitor is going to look a little worse than on a native 720p monitor (if you can find one), but it will show you how the games will run. And although the amount of VRAM needed will go down, it may not be by much.
Yeah, you're right, and I did try it, but in some games it doesn't make any difference in the MSI overlay's VRAM readout. Maybe the game still wants more than 2GB even at 720p, and that's why I don't see it change?
 

Karadjgne

Titan
Ambassador
If you figure there's enough detail in AC Valhalla to use up 4-5GB of VRAM on a bigger card, even moving down to 720p won't do much of anything. It's not just a matter of resolution, it's what's in that resolution. Minecraft, for instance, doesn't have the heavy shadows, shading, fine details, life-like faces and rounded edges of AC Valhalla, so 720p or 4K will behave roughly the same.

VRAM is concerned with how much info is in the frame; resolution doesn't really change the info, it just changes the number of pixels that info is supplied to.
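To put rough numbers on that point, here's a minimal sketch of how asset memory works, assuming a hypothetical set of textures (the sizes and compression formats are made-up for illustration, not taken from any particular game):

```python
# Memory for scene assets (textures, meshes) is set by the assets themselves,
# not by the output resolution you happen to render at.

def texture_bytes(width, height, bytes_per_texel, mipmapped=True):
    """Approximate VRAM for one texture; a full mip chain adds roughly 1/3 extra."""
    base = width * height * bytes_per_texel
    return base * 4 / 3 if mipmapped else base

# Hypothetical asset set for one detailed material (assumed numbers):
assets = {
    "4K albedo (BC7, 1 byte/texel)":      texture_bytes(4096, 4096, 1),
    "4K normal map (BC5, 1 byte/texel)":  texture_bytes(4096, 4096, 1),
    "2K roughness (BC4, 0.5 byte/texel)": texture_bytes(2048, 2048, 0.5),
}

for name, size in assets.items():
    print(f"{name}: {size / 2**20:.1f} MiB")
# These figures are identical whether the frame is rendered at 720p or 4K --
# the display resolution changes how many pixels sample the data, not the data.
```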
 
VRAM is concerned with how much info is in the frame; resolution doesn't really change the info, it just changes the number of pixels that info is supplied to.
Output resolution can affect VRAM consumption, especially when the rendering method (like deferred shading) or graphical features (like screen space reflections) make heavy use of intermediate frame buffers, since those intermediate frame buffers are the same size as the output resolution.
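A minimal sketch of that scaling, assuming a single RGBA16F screen-sized buffer (the format and the idea of counting just one buffer are simplifications; real renderers keep many such targets alive at once):

```python
# Each screen-sized intermediate buffer (G-buffer layer, SSR buffer, bloom
# chain, etc.) costs width * height * bytes_per_pixel, so it scales directly
# with the rendering resolution.

RESOLUTIONS = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
BYTES_PER_PIXEL = 8  # assumed RGBA16F render target

for name, (w, h) in RESOLUTIONS.items():
    size_mib = w * h * BYTES_PER_PIXEL / 2**20
    print(f"{name}: {size_mib:.1f} MiB per screen-sized buffer")
# 720p: ~7 MiB, 1080p: ~16 MiB, 4K: ~63 MiB per buffer -- multiply by however
# many such buffers the renderer keeps around in a frame.
```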
 

Karadjgne

Titan
Ambassador
In that sense, yes, it does. As do DSR resolutions that render above native and are then downscaled to fit the smaller output.

But that difference pales in comparison to image detail. A plain-Jane field of wheat blowing around in the breeze contains a massive amount of info: every foreground stalk, every shadow, every shade, every motion variable, every dimension per frame. Doesn't matter if it's 1080p or 4K, it's still massive VRAM usage. Versus Minecraft's blocky, solid-color, almost 2D-like look, with no shading and only basic shadows, where even 4K doesn't use as much VRAM as that 1080p field of wheat.

It's why Cyberpunk 2077 is so brutal. The sheer amount of photo-realism, shadows, lighting effects, reflections and objects kills VRAM. And with the standard U-Play lack of decent optimization, the AC series isn't any better. I played I, II and III on a 3770K at 4.9GHz with a 24% OC on a GTX 970, and while I enjoyed the story lines, the actual gameplay was far from fluid. Anything above Low just felt clunky, at a realistic 3.5GB of VRAM.
 
But that difference pales in comparison to image detail. A plain-Jane field of wheat blowing around in the breeze contains a massive amount of info: every foreground stalk, every shadow, every shade, every motion variable, every dimension per frame. Doesn't matter if it's 1080p or 4K, it's still massive VRAM usage. Versus Minecraft's blocky, solid-color, almost 2D-like look, with no shading and only basic shadows, where even 4K doesn't use as much VRAM as that 1080p field of wheat.

It's why Cyberpunk 2077 is so brutal. The sheer amount of photo-realism, shadows, lighting effects, reflections and objects kills VRAM
And I would argue that the wheat field is a poor example. Modern techniques would limit the number of unique wheat-stalk models to a handful, and every instance of a stalk simply references one of those models, maybe with some procedural variation to make them look a little more unique. Everything else except shadows would be generated on the fly and the result stored in the frame buffer. The only thing that would actually take up more VRAM here is the shadows, since those require another render pass whose result is stored in an intermediate frame buffer.
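As a rough sketch of why instancing keeps that cheap (the stalk count, vertex format and mesh sizes below are all made-up numbers, purely for illustration):

```python
# Instanced rendering: store a handful of unique stalk meshes once, plus a
# small per-instance record (a transform, maybe a variation seed) per stalk.

VERTEX_BYTES = 32          # assumed: position + normal + UV per vertex
VERTS_PER_STALK = 500      # assumed mesh complexity
UNIQUE_MODELS = 8          # a handful of unique stalk variations
INSTANCES = 200_000        # visible stalks in the field (made up)
PER_INSTANCE_BYTES = 64    # 4x4 float transform per instance

naive = INSTANCES * VERTS_PER_STALK * VERTEX_BYTES           # unique geometry per stalk
instanced = (UNIQUE_MODELS * VERTS_PER_STALK * VERTEX_BYTES  # shared meshes
             + INSTANCES * PER_INSTANCE_BYTES)               # + instance buffer

print(f"unique mesh per stalk: {naive / 2**20:.1f} MiB")       # ~3052 MiB
print(f"instanced:             {instanced / 2**20:.1f} MiB")   # ~12 MiB
```

Either way, none of that geometry cost changes with the output resolution.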

And although it's almost a decade old (I haven't seen any other presentation with this amount of information), Guerrilla Games' Killzone Shadow Fall demo post-mortem has a slide that breaks down how much VRAM was used by each part of the rendering process:
  • Non streaming textures: 1321MB
  • Render targets: 800MB
  • Data pool for streaming textures: 572MB
  • Meshes: 315MB
  • Heaps/buffers: 64MB
Render targets are the "intermediate frame buffers" I was talking about, which for this game were sized for a 1080p rendering resolution (and to correct my earlier post, "output resolution" was the wrong term; rendering resolution is what matters). 3D models also aren't that large, despite the developers noting that character models can have up to 40K polygons, and the example level geometry looking at least an order of magnitude more complicated. The small miscellaneous data store (heaps/buffers) tells me the rendering pipeline is mostly functional; it doesn't need to retain or modify much state about the graphics itself.
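To give a feel for how render targets add up, here's a crude tally for an assumed 1080p deferred setup. The buffer list and formats are my guesses for illustration, not Guerrilla's actual layout, which clearly carried far more to reach 800MB:

```python
# Crude tally of screen-sized render targets for an assumed 1080p deferred
# renderer. The target list and formats are guesses, for illustration only.

W, H = 1920, 1080
targets_bpp = {
    "G-buffer albedo (RGBA8)":       4,
    "G-buffer normals (RGBA16F)":    8,
    "G-buffer material (RGBA8)":     4,
    "Depth (D32F)":                  4,
    "HDR lighting buffer (RGBA16F)": 8,
    "SSR + SSAO buffers (2x RG16F)": 8,
    "TAA history (RGBA16F)":         8,
}
shadow_mib = 4 * (2048 * 2048 * 4) / 2**20  # 4 cascades of 2K 32-bit shadow maps

screen_mib = sum(W * H * bpp for bpp in targets_bpp.values()) / 2**20
print(f"screen-sized targets: {screen_mib:.0f} MiB, shadow cascades: {shadow_mib:.0f} MiB")
# ~87 MiB of screen-sized targets + 64 MiB of shadow maps here. A real engine
# with more passes, history buffers and higher-precision targets climbs well
# past that, and everything except the shadow maps scales with resolution.
```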

And if changing the resolution had no real effect on VRAM consumption, it wouldn't explain this:
[Attachment: vram.png — VRAM usage at 1080p vs 4K, with and without RT]


The same quality presets were used at each resolution. Nothing is getting more complicated, nothing is getting more detail; only the number of pixels that show up on screen changes. Assuming Cyberpunk 2077 uses a similar separation of VRAM data as Killzone (I don't see why not, since most renderers follow the same principles), the only thing that would change is the render target size, which depends on the rendering resolution. And to explain the bump in VRAM consumption with RT: that's the BVH structures. RT itself doesn't need much memory to execute.

And this TweakTown article benchmarking Death Stranding noted that VRAM usage went down when DLSS was enabled, which uses a lower rendering resolution. And while I haven't found an article reporting a similar thing with FSR, Tom's Hardware noted that with the only 4GB card they tested, FSR vastly improved performance on Godfall compared to the other GPUs.
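For a sense of scale on the DLSS point, here's a quick sketch using the commonly cited DLSS 2 internal-resolution factors; the per-buffer size is an assumed RGBA16F target, and it ignores that the final upscaled output buffer is still full resolution:

```python
# Upscalers like DLSS/FSR render internally at a fraction of the output
# resolution, so resolution-scaled buffers shrink with the square of the factor.

OUTPUT = (3840, 2160)  # 4K output
modes = {"Native": 1.0, "Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

for mode, scale in modes.items():
    w, h = int(OUTPUT[0] * scale), int(OUTPUT[1] * scale)
    buf_mib = w * h * 8 / 2**20  # assumed RGBA16F screen-sized buffer
    print(f"{mode:>11}: {w}x{h} internal, ~{buf_mib:.0f} MiB per screen-sized buffer")
```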

So I stand by my statement that rendering resolution has a direct impact on VRAM usage.
 

Karadjgne

Titan
Ambassador
I don't disagree, it does have an impact. But look at the numbers. First half: 1080p 5.6GB, 4K 7.1GB. Second half: 1080p 7.5GB. 4K has exactly 4x as many pixels as 1080p, yet VRAM usage only went to 10GB. That's not all that much of a change overall, only 21% and 25%. On a simple graphic like Minecraft or Roblox, the difference would be far less, closer to 15%.
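One way to read those numbers: model VRAM as a fixed chunk of scene data plus a slice that scales with pixel count, and fit that to the two non-RT figures. It's a crude linear fit on two data points, nothing more, but it splits the budget roughly the way both of you are describing:

```python
# Crude two-point fit of the non-RT chart numbers:
#   VRAM ≈ fixed_scene_data + bytes_per_pixel * pixel_count

GB = 1024**3
px_1080p, px_4k = 1920 * 1080, 3840 * 2160      # 4K has exactly 4x the pixels
vram_1080p, vram_4k = 5.6 * GB, 7.1 * GB        # figures from the chart above

bytes_per_px = (vram_4k - vram_1080p) / (px_4k - px_1080p)
fixed = vram_1080p - bytes_per_px * px_1080p

print(f"resolution-scaled slice: ~{bytes_per_px:.0f} bytes/pixel "
      f"({bytes_per_px * px_1080p / GB:.1f} GB at 1080p, {bytes_per_px * px_4k / GB:.1f} GB at 4K)")
print(f"fixed scene data: ~{fixed / GB:.1f} GB")
# Roughly 5.1 GB of resolution-independent data plus a slice that goes from
# ~0.5 GB at 1080p to ~2.0 GB at 4K -- resolution matters, but the scene
# content is still most of the budget.
```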

But you are still talking about 7.5GB worth of picture running through that VRAM at 1080p, which is huge.

Also, you have to be careful with 3000 series cards and DLSS. They can use direct rendering paths that bypass the CPU, so some of what those cards see isn't a set of instructions from the CPU but actual files the GPU then has to unpack and fully render itself, which does take up VRAM.