GPU Computing, CPU and RAM Requirements?

shmoo

Honorable
Oct 25, 2013
33
0
10,530
Hello all,

I'm about to start a new build that will be used for gaming and playing around with GPU computing (I'm a CS grad student who wants to play around at home with some different algorithms implemented on GPUs).

As far as GPUs go, I plan on waiting until the spring for Volta to be released, at which point I'll be looking at Volta GPUs, as well as possibly the Pascal Titan if the price drops a lot and it seems better for my needs. Not sure about a single/SLI configuration yet.

I'm a bit concerned about bottlenecks related to CPU and RAM. For the work I'll be doing, there will be a lot of reading and writing of large datasets to and from the GPU(s), which may have up to 12 GB apiece.

2 main points come up.

-I've heard from some that it's ideal to have at least as much system memory as GPU memory, so two 12 GB GPUs would want at least 24 GB of RAM, since information is loaded from SSD to RAM to GPU memory and back. Were I to have less RAM than GPU memory, I would have to do this in multiple batches in order to fill all of the GPU memory. On the flip side, more RAM = slower performance for things like gaming. Does anybody have feedback/experience with this?

-CPU requirements: Will a 4-core CPU be a bottleneck for reading/writing between SSDs/RAM/GPU memory? Assume a max of two 12 GB GPUs and an M.2 SSD with up to 3200 MB/s read and 1800 MB/s write. I'd much rather stick to an LGA 1151 socket.

Is hyper-threading used for these sorts of read/write operations? Not sure if an i7 would make a big difference here.

 

logainofhades

Titan
Moderator
Graphics memory should have no bearing on how much system RAM you need. GPU rendering should use the GPU's RAM; anything that the CPU works on would use system RAM. 32 GB doesn't sound like a bad idea in your case, though. More RAM does not create slower performance. More RAM typically makes things run faster, up to a certain point; once you have more than you need, performance doesn't improve, but it doesn't get worse either. Not enough RAM is what will slow you down. I think getting an i7 would be a pretty good idea, for longevity, and as DX12 adoption becomes more common.
 

shmoo



So while it's true that GPU RAM and system RAM can be thought of as independent for rendering/gaming, it's different for many types of GPU computing. With GPU computing, you're constantly loading and unloading batches of data into the GPU. You read a batch of data that needs to be processed from the SSD into RAM, then write the batch from RAM into GPU memory. The GPU crunches the numbers, then you copy the results from GPU memory into RAM and write them from RAM back to the SSD. Rinse and repeat with the next batch of data. When gaming, you typically write a bunch of data into the GPU and then just use that data for long periods of time (i.e. you load an environment and the GPU just renders it as your little dude moves around).
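To be clearer about what I mean, here's a rough Python sketch of that per-batch loop. It's just a toy: NumPy copies stand in for the host-to-device and device-to-host transfers, the `process_dataset` name and the lambda kernel are made up for illustration, and no real GPU API is involved.

```python
import numpy as np


def process_dataset(batches, gpu_kernel):
    """One full pass over the data: for each batch, stage SSD -> RAM -> GPU,
    run the computation, then copy results back GPU -> RAM -> SSD.
    Here the "SSD" is just an in-memory list and the "GPU" is NumPy."""
    results = []
    for batch in batches:                  # read next batch "from the SSD"
        host_buf = np.asarray(batch)       # batch now sits in system RAM
        dev_buf = host_buf.copy()          # stand-in for the host -> device copy
        dev_out = gpu_kernel(dev_buf)      # number crunching on the "GPU"
        results.append(dev_out.copy())     # device -> host copy, then write-back
    return results


# Toy "kernel" that just doubles everything:
out = process_dataset([[1, 2], [3, 4]], lambda x: x * 2)
```

The point is that every batch makes the full round trip, unlike a game that uploads textures once and keeps reusing them.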

This reading and writing is EXTREMELY time consuming in many cases. With a lot of GPU computing, the data you load into the GPU gets used only once, and then the resulting data is sent back. There is so much I/O here that you typically want as much RAM as GPU memory, so you can fill all of the GPU RAM in one batch (read from SSD into RAM, then load all of the data into the GPU) instead of reading half the data into RAM, loading it into the GPU, then reading the other half into RAM and loading the rest into the GPU (which is what would happen if you have less RAM than GPU memory).
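To make the batching point concrete, here's a back-of-envelope helper (my own toy function, not a benchmark) that counts how many SSD-to-RAM staging rounds it takes to fill the GPUs once, under the simplifying assumption that each staged chunk must fit entirely in system RAM:

```python
import math


def staging_rounds(gpu_mem_gb, ram_gb):
    """Number of SSD -> RAM staging rounds needed to fill gpu_mem_gb of
    GPU memory, if at most ram_gb can be staged in system RAM at a time.
    Ignores OS/application overhead and streaming tricks."""
    return math.ceil(gpu_mem_gb / ram_gb)


# Two 12 GB cards = 24 GB of GPU memory to fill:
one = staging_rounds(24, 32)  # 32 GB of RAM: one round
two = staging_rounds(24, 16)  # 16 GB of RAM: two rounds
```

Obviously real workloads can stream data instead of staging it all at once, but this is the reasoning behind the "at least as much RAM as GPU memory" rule of thumb.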
 

shmoo

EDIT: Found this blog article http://timdettmers.com/2015/03/09/deep-learning-hardware-guide/

It's a good read if you're curious about this sort of thing, but TL;DR: you want at least as much RAM as GPU memory, and most deep learning frameworks only make use of one CPU thread.