[SOLVED] Use system memory as dedicated GPU overhead?

Dmarkojr

Honorable
Jun 30, 2016
15
0
10,520
To start off, I know DDR3/DDR4 runs at a much lower speed than GDDR5, and I know that this low speed would result in stutters in-game. I'm more interested in rendering, or Blender to be more specific. My current setup is an i5 3470 (painfully slow in renders), a GTX 970 with 3.5GB of usable VRAM, 2x2GB + 2x4GB of 1333MHz DDR3, Windows 10 Pro, and an SSD.
I've seen a lot of articles/forums stating it is possible to use regular memory as overhead on dedicated GPUs, although they then say "it would stutter, so don't bother." I've yet to find a solution actually stating how; the only ones that did were for iGPUs. The other articles I've seen on the topic say it's reserved for enterprise-level GPUs.
Given that rendering (from my understanding) doesn't really rely on data transfer speeds, would this be beneficial? Is it even possible on my hardware? If not mine, what about some of the older Titan/Quadro/AMD equivalents? Any suggestions are greatly appreciated, since I don't really have the money/skill to justify getting a new GPU with 8GB+ of VRAM or a CPU capable of rendering faster than a GT 710.
 

Dmarkojr

Honorable
Jun 30, 2016
15
0
10,520
I am not aware of any way a user can do this. However, a program might be able to do it, based on this:

https://developer.blender.org/T48651

Again, I know how to increase it for iGPUs, but I have never seen the ability for a user to do so in Windows itself.
Thanks for the response. I checked into a few things based on that. Apparently Blender added support in 2.8, although it doesn't seem to work all that well. The issue I keep running into seems to be common. Prior to "support" for the feature, you'd simply get an "out of VRAM" notification. With the feature added, most people have their renders crash upon exceeding VRAM usage. If you keep the overage small (100-ish MB?) you should be fine. Off-hand I'm not sure exactly where I'm hitting the limit before a crash. https://blenderartists.org/t/out-of-core-rendering-for-cycles-no-more-vram-limits/1118479/16
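
In case it helps narrow down where the limit actually is, here's a rough sketch of how VRAM usage could be polled once a second while a render runs. It assumes the nvidia-ml-py (pynvml) package is installed; device index 0 is just a placeholder for whichever GPU is rendering.

Code:
import time
import pynvml

# Poll GPU 0's memory use once a second while a render runs (Ctrl+C to stop).
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust if needed
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 2**20:6.0f} MB of {mem.total / 2**20:.0f} MB")
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()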

At the same time, Blender's official documentation still lists exceeding VRAM usage as a no-go: https://docs.blender.org/manual/en/latest/render/cycles/gpu_rendering.html
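
For reference, the GPU setup that page describes boils down to roughly this in Blender's Python console (my own sketch against the 2.8x API, not something copied from the docs):

Code:
import bpy

# Point Cycles at the CUDA backend and enable every detected device.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"   # or "OPENCL" on AMD hardware
prefs.get_devices()                  # refresh the device list
for dev in prefs.devices:
    dev.use = True

# Tell the current scene to render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
print([d.name for d in prefs.devices if d.use])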

If not in Windows, what about other operating systems? It's been a while, but I've messed around with a few Linux distros in the past (Ubuntu, Linux Mint, and one other I can't remember off-hand).
 
I thought that when you exceed your VRAM amount, the system uses memory to supplement it.

In gaming it actually would use the swap file, better known as the page file or virtual memory. It's why things start to stutter when gaming and you max out VRAM, as an HDD is vastly slower than VRAM. Hell, even the fastest NVMe SSD is vastly slower, considering most decent GPUs these days exceed 200GB/s of memory bandwidth.
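
To put some rough numbers on that (ballpark spec-sheet figures, not anything I measured), here's how long shuffling a 1GB spill would take over each link:

Code:
# Back-of-the-envelope comparison using ballpark spec-sheet numbers (not measured).
bandwidths_gb_s = {
    "GTX 970 GDDR5 (3.5GB segment)": 196.0,   # on-card memory bandwidth
    "PCIe 3.0 x16 (to system RAM)":   15.75,  # theoretical peak
    "NVMe SSD":                        3.5,
    "SATA SSD":                        0.55,
    "7200rpm HDD":                     0.15,
}

spill_gb = 1.0
for name, bw in bandwidths_gb_s.items():
    print(f"{name:>30}: {spill_gb / bw * 1000:8.1f} ms to move {spill_gb:.0f} GB")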

Thanks for the response. I checked into a few things based on that. Apparently Blender added support in 2.8, although it doesn't seem to work all that well. The issue I keep running into seems to be common. Prior to "support" for the feature, you'd simply get an "out of VRAM" notification. With the feature added, most people have their renders crash upon exceeding VRAM usage. If you keep the overage small (100-ish MB?) you should be fine. Off-hand I'm not sure exactly where I'm hitting the limit before a crash. https://blenderartists.org/t/out-of-core-rendering-for-cycles-no-more-vram-limits/1118479/16

At the same time, Blender's official documentation still lists exceeding VRAM usage as a no-go: https://docs.blender.org/manual/en/latest/render/cycles/gpu_rendering.html

If not in Windows, what about other operating systems? It's been a while, but I've messed around with a few Linux distros in the past (Ubuntu, Linux Mint, and one other I can't remember off-hand).

It may not hurt to try a Linux distro, but I can't say it would fare better. The issue with moving to system RAM is that the GPU would somehow have to write to it and then remember what went where. It's much like trying to use an iGPU and a dGPU for rendering: it works with two of the same GPU because each basically renders the same frame and the results get combined, but when one of them is less powerful and can't render the same frame in the same time, it can cause all kinds of issues.

Now, I'm no software engineer, so take what I say with a grain of salt, but I do know that sort of thing is very tricky to do on PCs, and that's why it's not mainstream.
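
For what it's worth, the low-level mechanism does exist on the CUDA side: "mapped" (zero-copy) host memory, where the GPU reads and writes page-locked system RAM directly over PCIe. Here's a minimal sketch of the idea using Numba's CUDA support, assuming an NVIDIA GPU and the numba/numpy packages; it only illustrates the mechanism, not what Blender actually does internally.

Code:
import numpy as np
from numba import cuda

# Kernel that touches the buffer in place; with a mapped array the data never
# gets copied into VRAM, the GPU reaches into system RAM over PCIe.
@cuda.jit
def scale(buf, factor):
    i = cuda.grid(1)
    if i < buf.size:
        buf[i] *= factor

n = 1_000_000
buf = cuda.mapped_array(n, dtype=np.float32)  # pinned host RAM, visible to the GPU
buf[:] = 1.0

threads = 256
blocks = (n + threads - 1) // threads
scale[blocks, threads](buf, 2.0)
cuda.synchronize()

print(buf[:5])  # written by the GPU, read straight back on the host

The PCIe hop is also exactly why it's so much slower than real VRAM, which is the stutter everyone warns about.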
 
Solution