[SOLVED] Using an Older GPU for CUDA (ML.NET)

bejhan
Feb 22, 2021
Background: I've recently started playing around with ML.NET to train a machine learning model to classify images. I don't currently have a dedicated GPU (I just use the onboard graphics of my Intel i5-10500), so I've been restricted to CPU training. This was fine at first, but as my training set grows, my CPU is pinned for longer and longer, which interferes with other CPU-intensive tasks. So I've been considering purchasing a dedicated GPU to enable GPU (CUDA) training of the model.

Question: I can't justify spending a lot of money on my little experiment, so I've just been looking at cheap GPUs on Kijiji that support CUDA (as per Nvidia's list).

All of the following are available for reasonably low prices, but I'm wondering whether A) they'll support the current version of CUDA and B) they'll actually be able to train in a timely manner:
  • Nvidia Quadro 4000 - $40 CAD
  • Gigabyte GTX 460 - $40 CAD
  • Nvidia Quadro 4000 - $60 CAD
  • Asus Nvidia GeForce GTX 560 Ti - $60 CAD

Thanks in advance.
 
Unfamiliar with ML.NET myself, but from what little experience I've had toying with TensorFlow and other CUDA applications: note that different cards have different CUDA compute capabilities, and current versions of your toolchain will have a minimum (and, for that matter, maximum) compute capability requirement. The same goes for the driver and the CUDA version it supports. All four cards you listed are Fermi-generation parts (compute capability 2.0 or 2.1), and CUDA 8.0 was the last toolkit to support Fermi, so current toolchains won't target them at all.
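If you do end up with a card and want to see what your installed toolchain makes of it, here's a minimal sketch using the standard CUDA runtime API (nothing ML.NET-specific; the file name check_cc.cu is just an example). It prints the driver/runtime CUDA versions and each device's compute capability:

```cpp
// check_cc.cu -- print CUDA driver/runtime versions and each device's
// compute capability. Build with: nvcc check_cc.cu -o check_cc
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    int driverVer = 0, runtimeVer = 0, count = 0;

    // Highest CUDA version the installed driver supports vs. the
    // runtime version this binary was built against.
    cudaDriverGetVersion(&driverVer);
    cudaRuntimeGetVersion(&runtimeVer);
    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 100) / 10,
           runtimeVer / 1000, (runtimeVer % 100) / 10);

    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No usable CUDA device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A Fermi card like the Quadro 4000 reports 2.0 or 2.1 here.
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```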

Your model (plus a training batch) would also need to fit in the GPU's VRAM to train on the GPU, and cards of that era only carry roughly 1-2 GB.
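A quick way to sanity-check available memory before a training run, under the same assumptions as the sketch above (hypothetical file name check_mem.cu; in practice `nvidia-smi` shows you the same numbers):

```cpp
// check_mem.cu -- report free vs. total VRAM on device 0.
// Build with: nvcc check_mem.cu -o check_mem
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    size_t freeBytes = 0, totalBytes = 0;

    cudaSetDevice(0);  // first GPU; change the index if you have several
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess) {
        printf("Could not query device memory.\n");
        return 1;
    }
    printf("VRAM: %.0f MiB free of %.0f MiB total\n",
           freeBytes / (1024.0 * 1024.0),
           totalBytes / (1024.0 * 1024.0));
    return 0;
}
```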

I'd recommend checking your toolchain's documentation for its minimum supported compute capability and CUDA version before buying anything.