Please help with multi-GPU workstation configuration for deep learning

Nov 25, 2018
Hi,

My team is buying workstations for deep learning, and we want to be cost-efficient and minimalistic. After doing our homework, we have settled on the configuration below for each machine. Can anyone please verify it and fill in any missing details?



Workstation Configuration:

Processor: Intel® Xeon® Processor E5-2620 v4 (20M cache, 2.10 GHz)
Chipset: Intel X299
Memory: 64 GB (4x 16 GB) RAM @ 2666 MHz, expandable up to 128 GB with 8 DIMM slots
Storage: 1x 2 TB SSD, 1x 4 TB SATA HDD @ 7200 RPM
GPU Support: Supports NVIDIA® 4-way SLI™ technology
GPU: 4x NVIDIA GTX 1080 Ti, 11 GB GDDR5X each
Form Factor: Micro Tower


We have the following concerns:

1. The Intel X299 chipset has 24 PCIe lanes (according to Intel's website). Will it be able to support 4 GPUs? (We plan to sanity-check the negotiated link widths with the sketch after this list.)
2. What motherboard should we use for the above configuration?
 
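For reference, once a machine is assembled we plan to verify that all four cards are visible and check what PCIe link width each one actually negotiated, since lane sharing can drop slots from x16 to x8 or x4. This is only a minimal sketch, assuming PyTorch and a recent NVIDIA driver are installed; the nvidia-smi query fields are standard, the rest is just illustration:

import subprocess
import torch

# How many CUDA devices the framework can see, and what they are.
print("GPUs visible to PyTorch:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  cuda:{i} -> {torch.cuda.get_device_name(i)}")

# nvidia-smi reports the PCIe link generation and width each card negotiated.
print(subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,pcie.link.gen.current,pcie.link.width.current",
     "--format=csv"],
    capture_output=True, text=True).stdout)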
I'm going to throw something completely different out there for consideration. Cost-efficient, I think it will be; whether it's minimalistic depends on how you build it out.
The Dell Precision T5500 is available surplus. It's LGA1366, single or dual CPU. It runs X5500/X5600 Xeons, with 3-channel DDR3-1333 RAM (per CPU), four x16 GPU slots, and 36 PCIe lanes (per CPU).
You might want to give some consideration to using Quadro GPUs and/or Tesla cards, due to their ECC memory and double-precision floating-point capability. IDK what deep learning is, but gaming GPUs sacrifice some data integrity for performance gains.
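For what it's worth, the single-versus-double-precision gap on a given card is easy to measure with a quick timing test. Below is a rough sketch, assuming PyTorch on a CUDA-capable machine; the function name and sizes are just for illustration. Most deep learning training runs in FP32 or lower precision, so a weak FP64 ratio on a gaming card is often acceptable:

import time
import torch

def matmul_time(dtype, n=4096, reps=10):
    # Time an n x n matrix multiply on the GPU at the given precision.
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(reps):
        a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / reps

print("FP32:", matmul_time(torch.float32), "s per matmul")
print("FP64:", matmul_time(torch.float64), "s per matmul")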
You can play around with some configurations here.
http://www.ebaystores.com/PC-Servers-and-Parts-Inc_W0QQ_sasiZ1
The fastest CPU is the 4C/8T X5687; the faster 6C/12T parts are the X5680/X5690, and the 95W X5675 is also popular. But there are dozens of other options.