Question Why is a traditional RAMDisk faster than a GPU RAMDisk?

Genralkidd

Distinguished
Apr 18, 2013
268
3
18,795
I tried out the GPURamDrive software by prsyahmi on GitHub and created a 5 GB RAM drive using my Nvidia RTX 2060's GDDR6 memory. I later also created a 4 GB RAM drive using AMD's Radeon RAMDisk software. Using CrystalDiskMark 6, I ran benchmarks on both RAM drives as well as on my main Samsung 850 EVO SSD. The results surprised me: the GPU RAM disk did indeed have blazing-fast sequential read/write speeds, but the Samsung SSD actually outperformed it by quite a bit in the other tests. And a traditional RAM disk using the system's DDR4 memory completely blew the GPU RAM disk out of the water.

Aren't GDDR6 and even the older GDDR5 memory used in GPUs supposed to be significantly faster than DDR4 RAM, and for that matter significantly faster than flash memory too? Is it a software issue? Or is there something about GDDR6 that makes it inherently inferior to DRAM when used for a RAM disk?

These were the results of the benchmarks:

RTX 2060 GDDR6 RamDisk:

[Image: C6Evd.png]


DDR4 RamDisk:
[Image: cVdQx.png]


Samsung 850 EVO SSD:

[Image: LvOCC.png]
 

kanewolf

Titan
Moderator
Your GPU is still connected over PCIe, which has overhead and is only 16 lanes wide. And who knows how well written the code you're using to create the GPU RAM disk is.
The processor on the graphics card has a much wider bus to its own memory than the PCIe link; that's where the speed advantage of GDDR really comes in.
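To put rough numbers on that, here's a back-of-the-envelope comparison using nominal spec values (the DDR4-3200 dual-channel configuration is an assumption, not something from the benchmarks):

```python
# Back-of-the-envelope peak-bandwidth comparison (nominal spec values).

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane.
pcie3_x16 = 0.985 * 16          # ~15.8 GB/s each direction

# RTX 2060: 192-bit GDDR6 bus at 14 Gbit/s per pin.
gddr6_rtx2060 = 192 / 8 * 14    # 336 GB/s, but only on the card itself

# DDR4-3200 dual channel (assumed config): 2 channels x 64-bit x 3200 MT/s.
ddr4_dual = 2 * 8 * 3.2         # 51.2 GB/s

print(f"PCIe 3.0 x16 : {pcie3_x16:.1f} GB/s")
print(f"GDDR6 (2060) : {gddr6_rtx2060:.1f} GB/s")
print(f"DDR4-3200 x2 : {ddr4_dual:.1f} GB/s")
```

So even though the GDDR6 itself is far faster than DDR4, every byte of a GPU RAM disk has to cross the much narrower PCIe link, while a system-RAM disk never leaves the memory bus.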
 

Genralkidd

Your GPU is still connected over PCIe, which has overhead and is only 16 lanes wide. And who knows how well written the code you're using to create the GPU RAM disk is.
The processor on the graphics card has a much wider bus to its own memory than the PCIe link; that's where the speed advantage of GDDR really comes in.

So the PCIe bus is potentially bottlenecking the GPU RAM disk? I thought we still hadn't hit the theoretical limits of PCIe 3.0, though. And what about the GPU's performance compared to my standard SATA 3 SSD? Sequential read/write is significantly better on the GPU, but the SSD is still significantly better at the 4K read/writes.
 

kanewolf

So the PCIe bus is potentially bottlenecking the GPU RAM disk? I thought we still hadn't hit the theoretical limits of PCIe 3.0, though. And what about the GPU's performance compared to my standard SATA 3 SSD? Sequential read/write is significantly better on the GPU, but the SSD is still significantly better at the 4K read/writes.
Without analyzing the source code on GitHub (which I don't care to do...) there is no easy answer. Notice that the Q1T1 numbers for the GPU and the SSD are about the same, so my guess is that the GPU code is not very efficient at handling many simultaneous transactions at queue depth 32 (Q32).
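As a rough illustration of why 4K transfers suffer, here's a toy model (my own sketch, not taken from the GPURamDrive code) where every request pays a fixed per-I/O overhead before the actual data moves. The latency figures are hypothetical placeholders:

```python
def throughput_gbs(block_bytes, latency_s, link_gbs):
    """Effective throughput when each I/O pays a fixed per-request latency."""
    transfer_s = block_bytes / (link_gbs * 1e9)
    return block_bytes / (latency_s + transfer_s) / 1e9

link = 15.8  # GB/s, roughly the PCIe 3.0 x16 ceiling

# Hypothetical per-request overheads: several microseconds for a user-mode
# driver round trip over PCIe vs. a fraction of that for system RAM.
for name, lat in [("GPU ramdisk, ~5 us/request", 5e-6),
                  ("DDR4 ramdisk, ~0.5 us/request", 5e-7)]:
    seq = throughput_gbs(1 << 20, lat, link)   # 1 MiB sequential-style block
    rnd = throughput_gbs(4096, lat, link)      # 4 KiB random-style block
    print(f"{name}: 1MiB {seq:.2f} GB/s, 4KiB {rnd:.3f} GB/s")
```

Large blocks amortize the fixed cost and get close to the link ceiling, while 4K blocks are dominated by it, which would match the pattern in the screenshots: great sequential numbers, poor small-block numbers.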
 

Genralkidd

Without analyzing the source code on GitHub (which I don't care to do...) there is no easy answer. Notice that the Q1T1 numbers for the GPU and the SSD are about the same, so my guess is that the GPU code is not very efficient at handling many simultaneous transactions at queue depth 32 (Q32).

Ah OK, gotcha. A software-side bottleneck does make more sense. I just wasn't sure whether GPUs or GDDR6 memory were somehow inherently incapable of performing these kinds of functions well, but inefficiency in the GPU RAM disk code is the more plausible explanation.