I have a $20k budget to purchase a GPU cluster for my educational institution's lab, for deep learning with PyTorch and TensorFlow. What's the best way to get the most bang for my buck, measured in total GPU compute (FLOPS) across the whole cluster? I can add RAM and drives myself, but ideally the cluster would arrive ready to go: CUDA preinstalled and all the GPUs accessible through some built-in cluster/job-scheduling software (so far I've only managed environments with conda). It will be used by no more than ten people, and usually just a few at a time, but it's intended to last 5-10 years.
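
For context, by "ready to go" I mean something like the following acceptance check passing out of the box. This is just a rough sketch assuming a single multi-GPU node with PyTorch installed; the script and its names are illustrative, not any vendor's tooling:

```python
import torch

def check_gpus():
    # Confirm CUDA is usable at all before probing individual devices.
    assert torch.cuda.is_available(), "CUDA not available"
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n}")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        # Run a tiny matmul on each device to confirm it actually computes.
        x = torch.randn(1024, 1024, device=f"cuda:{i}")
        y = x @ x
        torch.cuda.synchronize(i)
        ok = y.isfinite().all().item()
        print(f"  cuda:{i}: {props.name}, "
              f"{props.total_memory / 2**30:.1f} GiB, ok={ok}")

if __name__ == "__main__":
    check_gpus()
```

If every GPU shows up and completes the matmul, that's the baseline I'm hoping a prebuilt system provides on day one.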