News DirectX 12 Update Allows CPU and GPU to Access VRAM Simultaneously

RichardtST

Notable
May 17, 2022
235
263
960
But can I use the ridiculous amount of VRAM as regular RAM when I'm not playing games? Faster is faster.... Why should only games get to benefit?
 

waltc3

Reputable
Aug 4, 2019
420
223
5,060
Generally, the way this works in games is that texture data is loaded into VRAM in the background, in advance of when it will be needed, while a game is being played. This is a normal part of the programmed game-engine mechanics, so I don't see this feature making a lot of difference in gameplay. The primary reason GPUs sport as much VRAM as they do is that it is so much faster to texture from than system RAM (which is much slower), and of course much faster than any disks in the system. All this is done transparently so that the player doesn't notice anything while playing. Some game engines are better than others at it, but that comes down to differences in the quality of the game programming as well as the various game engines used, textures, resolutions, etc.
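For illustration, here's a rough sketch of that background-streaming idea (all names here are hypothetical stand-ins, not any particular engine's API):

```cpp
// Rough sketch of background texture streaming: the engine predicts which
// textures the player will need soon and uploads them to VRAM before the
// renderer asks for them, so the player never notices the loads.
#include <string>
#include <unordered_set>
#include <vector>

static std::unordered_set<std::string> g_resident;  // textures already in VRAM

// Hypothetical prediction hook, e.g. driven by camera position / level layout.
std::vector<std::string> PredictSoonVisibleTextures() {
    return {"rock_albedo", "rock_normal", "tree_albedo"};
}

// Hypothetical async upload: a real engine would stage the texture in an
// upload heap and schedule a copy to a DEFAULT (VRAM) resource on a copy queue.
void BeginAsyncUploadToVram(const std::string& name) {
    g_resident.insert(name);
}

// Runs on a worker thread each frame, ahead of when the textures are drawn.
void StreamingTick() {
    for (const auto& name : PredictSoonVisibleTextures()) {
        if (!g_resident.count(name)) {
            BeginAsyncUploadToVram(name);
        }
    }
}

int main() { StreamingTick(); }
```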
 

waltc3

Reputable
Aug 4, 2019
420
223
5,060
But can I use the ridiculous amount of VRAM as regular RAM when I'm not playing games? Faster is faster.... Why should only games get to benefit?

Yes, provided the non-gaming application supports D3D12 and then directly supports this particular D3D12 feature, and that will depend on how well the drivers support it in practice.
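For reference, the D3D12 side of this looks roughly like the sketch below: the app asks whether the driver exposes CPU-visible VRAM heaps and, if so, creates a buffer there. The names (D3D12_FEATURE_D3D12_OPTIONS16, GpuUploadHeapSupported, D3D12_HEAP_TYPE_GPU_UPLOAD) follow the Agility SDK preview headers this feature shipped in, so treat the exact details as an approximation rather than gospel:

```cpp
// Hedged sketch: query GPU upload heap support (CPU-visible VRAM) and create a
// buffer there. Requires Agility SDK headers that define the OPTIONS16 feature
// and GPU_UPLOAD heap type; Resizable BAR generally has to be enabled for the
// driver to report support.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool TryCreateGpuUploadBuffer(ID3D12Device* device, UINT64 sizeInBytes,
                              ComPtr<ID3D12Resource>& outBuffer)
{
    // 1. Does the driver expose GPU upload heaps at all?
    D3D12_FEATURE_DATA_D3D12_OPTIONS16 opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS16,
                                           &opts, sizeof(opts))) ||
        !opts.GpuUploadHeapSupported)
        return false;  // fall back to the classic UPLOAD-heap + copy path

    // 2. Create a committed buffer in VRAM that the CPU can map and write directly.
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_GPU_UPLOAD;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeInBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    return SUCCEEDED(device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_COMMON,  // initial state; exact requirement may vary by SDK version
        nullptr, IID_PPV_ARGS(&outBuffer)));
}
```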
 

DerKeyser

Prominent
Oct 15, 2021
7
22
515
Ehhm, I don't know where the writer of this article had his lunch, but it must have been bad.
The GPU's VRAM may be very, very fast, but memory access across the PCIe bus, even PCIe 5.0, is DEFINITELY not; the latency is much, much worse than DRAM on the CPU's own memory controller.
So treat this article as clickbait in terms of using VRAM as faster RAM than the DRAM attached to your CPU.
 

ROB_DF_MX

Distinguished
Jul 8, 2015
18
0
18,510
That is a great feature to have and good news for modern PC performance in general, but it's only useful if your motherboard vendor cares about implementing the "Resizable BAR" feature in its BIOS.
In my poor case, GIGABYTE never cared to update the BIOS on my "Gigabyte X399 AORUS Gaming 7" motherboard; ASUS never did it on the same X399 platform either, but MSI and ASRock did it well.
So, go figure: my system has a 2nd-generation AMD Threadripper 2970WX, a Gigabyte X399 AORUS Gaming 7 motherboard, 256GB of RAM, a PNY GeForce RTX 4080, and NO RESIZABLE BAR !!!

Is there no possible way to request this from the motherboard vendors?
Many of them are so LAZY; once they've sold the mobo they just do not care !!!
 

Amdlova

Distinguished
That is a great feature to have and good news for modern PC performance in general, but it's only useful if your motherboard vendor cares about implementing the "Resizable BAR" feature in its BIOS.
In my poor case, GIGABYTE never cared to update the BIOS on my "Gigabyte X399 AORUS Gaming 7" motherboard; ASUS never did it on the same X399 platform either, but MSI and ASRock did it well.
So, go figure: my system has a 2nd-generation AMD Threadripper 2970WX, a Gigabyte X399 AORUS Gaming 7 motherboard, 256GB of RAM, a PNY GeForce RTX 4080, and NO RESIZABLE BAR !!!

Is there no possible way to request this from the motherboard vendors?
Many of them are so LAZY; once they've sold the mobo they just do not care !!!

My X99 ASUS has a modded BIOS with ReBAR :)
 

blppt

Distinguished
Jun 6, 2008
568
89
19,060
Ehhm, I don't know where the writer of this article had his lunch, but it must have been bad.
The GPU's VRAM may be very, very fast, but memory access across the PCIe bus, even PCIe 5.0, is DEFINITELY not; the latency is much, much worse than DRAM on the CPU's own memory controller.
So treat this article as clickbait in terms of using VRAM as faster RAM than the DRAM attached to your CPU.

Was just going to post this. Nicely put.
 

razor512

Distinguished
Jun 16, 2007
2,127
68
19,890
But can I use the ridiculous amount of VRAM as regular RAM when I'm not playing games? Faster is faster.... Why should only games get to benefit?

If it can be accessed like that, it would be limited to the speed of the PCI Express bus:
16GB/s for PCIe 3.0, 32GB/s for PCIe 4.0, and 64GB/s for PCIe 5.0 (x16, per direction).

At PCIe 5.0 speeds, a video card would perform fairly close to DDR4-3600 RAM, but with extremely loose timings and high latency.
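A quick back-of-the-envelope check of those figures (theoretical peaks per direction for PCIe x16, dual-channel for DDR4; real-world numbers will be lower):

```cpp
// Back-of-the-envelope bandwidth math behind the numbers above.
#include <cstdio>

int main() {
    // PCIe x16: per-lane transfer rate (GT/s) * 16 lanes / 8 bits per byte,
    // ignoring encoding/protocol overhead, one direction only.
    const double pcie3_x16 = 8.0  * 16 / 8;   // ~16 GB/s
    const double pcie4_x16 = 16.0 * 16 / 8;   // ~32 GB/s
    const double pcie5_x16 = 32.0 * 16 / 8;   // ~64 GB/s

    // DDR4 dual channel: MT/s * 8 bytes per transfer * 2 channels.
    const double ddr4_3600 = 3600.0 * 8 * 2 / 1000.0;  // 57.6 GB/s

    std::printf("PCIe 3.0/4.0/5.0 x16: %.0f / %.0f / %.0f GB/s\n",
                pcie3_x16, pcie4_x16, pcie5_x16);
    std::printf("DDR4-3600 dual channel: %.1f GB/s\n", ddr4_3600);
    return 0;
}
```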
 

razor512

Distinguished
Jun 16, 2007
2,127
68
19,890
One thing I would like to see is more user control over how the video card manages and reports VRAM. The usefulness for gaming will be questionable, since game devs can choose whether or not to use shared memory (if they choose not to, a game can either crash or fail to properly load some textures while maintaining good frame rates; if they do use it, the game takes a 50% or so performance hit).
The reason I want more control is that there are video cards such as the GTX 970 where, even though the 512MB secondary pool is slow (around 30GB/s), a game that is set not to use shared memory can still use that extra 512MB pool in a way that is largely transparent to the game.

With that in mind, what if a video card maker added a driver feature that took that concept and tricked an application into thinking the card had 16, 32, or even 64GB of extra dedicated VRAM, by allocating system RAM and having it masquerade as dedicated VRAM, managed the way the Nvidia driver manages the two memory pools of the GTX 970?

While it would not help much for games, it would help greatly with other GPU compute tasks, and with AI tasks that are set not to use system RAM for performance reasons.
The benefit of adding this function shows up in things like AI frame interpolators used for simulated super-slow-motion (some current ones require over 20GB of VRAM to process a 4K video): with a card like an RTX 3070 or 3080, frame interpolation on 4K footage would simply fail, and you would be forced to use lower-resolution footage and output. But if you don't mind the process taking significantly longer, forcing it to use system memory as additional VRAM (against the wishes of the developer) could allow those cards to complete such tasks slowly instead of not completing them at all.
 

hannibal

Distinguished
Good, Nvidia can make GPUs with 2 and 4GB of memory now ;)

In reality, the point is that when the GPU needs data from system memory, it does not have to wait for the CPU to fetch the data first, and vice versa!
 

FunSurfer

Distinguished
"Graphics card memory sizes and video game VRAM consumption are getting larger and larger every year "
Wake up, Nvidia: gamers need 16GB and even 32GB (256-bit) mainstream cards.
 

Xajel

Distinguished
Oct 22, 2006
167
8
18,685
But can I use the ridiculous amount of VRAM as regular RAM when I'm not playing games? Faster is faster.... Why should only games get to benefit?

I think this can be done, assuming the application is starved for more memory than system RAM can provide and needs bandwidth rather than latency; CPU<>VRAM latency will be high, but the bandwidth difference is not bad:
  • PCIe 5.0 x16 = 64GB/s
  • PCIe 4.0 x16 = 32GB/s

    For comparison (dual channel):
  • DDR4-3200 = 51.2GB/s
  • DDR4-3600 = 57.6GB/s
  • DDR5-5600 = 89.6GB/s
  • DDR5-6400 = 102.4GB/s

Some GPGPU apps might benefit from this feature as well.
 
I think this can be done, assuming the application is starved for more memory than system RAM can provide and needs bandwidth rather than latency; CPU<>VRAM latency will be high, but the bandwidth difference is not bad:
  • PCIe 5.0 x16 = 64GB/s
  • PCIe 4.0 x16 = 32GB/s

    For comparison (dual channel):
  • DDR4-3200 = 51.2GB/s
  • DDR4-3600 = 57.6GB/s
  • DDR5-5600 = 89.6GB/s
  • DDR5-6400 = 102.4GB/s
Some GPGPU apps might benefit from this feature as well.
There aren't many PCIe 5.0 GPUs out there :p

But even then, I wouldn't put critical data there, as you get no ECC on GPUs unless you have something like a Quadro/Tesla.
 

DSzymborski

Titan
Moderator
"Graphics card memory sizes and video game VRAM consumption are getting larger and larger every year "
Wake up, Nvidia: gamers need 16GB and even 32GB (256-bit) mainstream cards.

Nobody needs 32 GB of VRAM for gaming. While use will naturally continue to increase, people exaggerate just how much VRAM is actually needed because of the inability of many tools and analysts to differentiate between VRAM that is reserved and VRAM that is actually being utilized.
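For what it's worth, the numbers most monitoring tools show come from something like the DXGI query below, which reports how much local video memory a process has committed against its budget, not how much the GPU actually touches each frame. A minimal sketch, assuming adapter 0 is the discrete GPU:

```cpp
// Minimal sketch: query the OS's view of this process's video memory budget and
// committed usage via DXGI. CurrentUsage counts allocations, not the working
// set the GPU actively reads each frame.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(factory->EnumAdapters1(0, &adapter)) ||
        FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL,
                                              &info))) return 1;

    std::printf("VRAM budget: %.2f GB, committed by this process: %.2f GB\n",
                info.Budget / 1e9, info.CurrentUsage / 1e9);
    return 0;
}
```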
 

InvalidError

Titan
Moderator
Wake up, Nvidia: gamers need 16GB and even 32GB (256-bit) mainstream cards.
With GDDR6(X) chips maxing out at 2GB each, a 256-bit bus maxes out at 16GB, and you aren't going to see "mainstream" GPUs over 16GB until 3GB GDDR6(X)/GDDR7 chips come out.
Also, with Nvidia reserving 256-bit buses for its $800+ GPUs, I don't think we can call those mainstream anymore.

GDDRx typically costs ~3X as much as its contemporary DDR counterpart, which would be about $6/GB right now. Putting 32GB on a GPU would cost ~$200 and price any such GPU well beyond what sane people would consider a "mainstream" GPU budget. Memory pricing goes down by about 50% per decade, so we are 10 (+/-3) years away from mainstream GPUs having 32GB of VRAM, assuming the need for such a thing ever arises.

In all likelihood, though, the diminishing-returns curve beyond 16GB will get so steep that most game developers won't be able to justify the added effort, time, and budget required to push graphics much further. We are already at the point where many reviewers have a hard time telling the difference between maximum-quality-everything and one or two notches below it without doing frame-by-frame comparisons or knowing beforehand exactly which tell-tale signs to look for, and where and when.
 

razor512

Distinguished
Jun 16, 2007
2,127
68
19,890
Nobody needs 32 GB of VRAM for gaming. While use will naturally continue to increase, people exaggerate just how much VRAM is actually needed because of the inability of many tools and analysts to differentiate between VRAM that is reserved and VRAM that is actually being utilized.

While it is difficult to tell the amount of VRAM actively being used rather than merely allocated, there are many signs from which it can be inferred. For example, set up a game that needs lots of VRAM at settings where the card can still offer playable frame rates. PS: for the LCD display on the keyboard (Logitech G510), ignore the MB/s readings on the RAM line; I had the network activity stats moved to that line when the last line is in use to display frame rates, since it seems to treat different render types as completely separate entries, leading to weird behavior when combined with other stats.
[attached screenshot]


This was done with a GTX 970, so the memory usage behavior is a bit weird. Games that need to actively use up to 4GB of VRAM will begin to allocate that last 512MB pool, but if a game begins to need much more than 4GB, you will see the card gradually drop to 3.5GB of VRAM allocated, with the rest shifting to system memory.

[attached screenshot]


The end result is a massive performance hit and a drop in GPU power consumption as the card waits on the slow system memory. Furthermore, experimenting with closing background tasks makes no difference to the performance hit. Anyway, while I can't tell with 100% certainty how much VRAM is actively being used at any given moment, I can be 100% sure that it is actively using more than 4GB.

[attached screenshot]
 

DSzymborski

Titan
Moderator
While it is difficult to tell the amount of VRAM actively being used rather than merely allocated, there are many signs from which it can be inferred. For example, set up a game that needs lots of VRAM at settings where the card can still offer playable frame rates. PS: for the LCD display on the keyboard (Logitech G510), ignore the MB/s readings on the RAM line; I had the network activity stats moved to that line when the last line is in use to display frame rates, since it seems to treat different render types as completely separate entries, leading to weird behavior when combined with other stats.

Why infer when we have data? The proof of the pudding is in the eating. We have floods and floods of benchmarks that include competitive cards with lower VRAM, and we've yet to see any significant bifurcation between the 8 or 10 GB VRAM cards vs. 16 GB VRAM cards when it comes to the FPS deltas when increasing resolution and quality. We've only just seen the Nvidia 8/10 GB VRAM GPUs drop off slightly vis-à-vis the 16 GB AMD GPUs in edge cases like Hogwarts Legacy at 4K everything maxed and full RTX.

If we were actually hitting a significant gameplay wall at these VRAM amounts, then the benchmark presentations would look different. You'd see a significant reordering of the relative performance ranks as VRAM amount's predictive value increased and processing power's predictive value decreased. But we're only seeing a very minor effect.

It's the same thing every generation. The GTX 970's VRAM is certainly disappointing, especially as the last half a gig was nerfed, but the GTX 970 didn't start dropping off significantly relative to the RX 480 until it was well past its prime.

Yes, VRAM is an issue, but it's a drastically overrated one. By the time 10 GB of VRAM versus 16 GB of VRAM actually matters in a practical sense, which it doesn't yet, today's 10 GB Nvidia cards and 16 GB AMD cards will be mediocre 1080p GPUs, and people will be fighting about how they'd never buy so-and-so's 24 GB GPU because it's crippled compared to someone else's identically powered GPU with 32 GB of VRAM.
 

InvalidError

Titan
Moderator
We've only just seen the Nvidia 8/10 GB VRAM GPUs drop off slightly vis-à-vis the 16 GB AMD GPUs in edge cases like Hogwarts Legacy at 4K everything maxed and full RTX.
Last of Us Part 1 also likes having 10-16GB of VRAM. Those "edge cases" will become increasingly common for people who insist on maxing everything out.

The difference between a GPU with more VRAM than it can currently make much use of and a faster GPU with barely enough VRAM for today's games shows up once you push VRAM usage higher: the GPU with VRAM to spare trails off proportionally as the compute load per frame increases, whereas the "faster" GPU that has run out of VRAM drops straight off a cliff from having to rely on system memory to cover the deficit, unless you dial details down enough to avoid exceeding VRAM capacity.

You end up with a 12GB GPU (e.g. the RX 6700) that may average 61 fps with 55 fps lows vs. a technically much faster 8GB GPU (e.g. the RTX 3070) averaging only 55 fps with 18 fps lows. The 12GB GPU may not be setting world records, but it can still be considered quite playable without compromising visual quality, while the "faster" 8GB GPU can be outright nauseating to play on without turning details down, due to running out of VRAM. Those are TPU's actual Last of Us numbers.

Anything from the 3070 up is simply too powerful to have less than 12GB without an extremely high probability of being forced into early retirement due to running out of VRAM.