[SOLVED] AMD with Intel Fortran compiler?

Atterus

Distinguished
Hello all!

Given that the price and capabilities of AMD are nothing to ignore anymore for high-performance computing, I'm wondering whether the next machine learning build I do should be AMD-based rather than the pricey Intel counterparts (i.e., I currently have two 10-core Xeon processors that cost a ton, and a new Intel build would cost a ton too).

I'm eyeing the new Threadripper processors, since those would offer both far more threads and a much higher clock speed. While Intel DOES have some CPUs that can compete, they are even pricier than the already pricey Threadripper, so I'm seriously looking at the AMD option this time around. The one problem is that the code we use is heavily reliant on FORTRAN 77-style code (kept that way for compatibility reasons; we use Fortran 95 where we can) compiled with the Intel compiler.

It may be a dumb question, but I'm not sure whether we can run this Intel-compiled code on an AMD processor. If not, that kind of answers my question, but if it is possible, I'm sure my boss will be thrilled that the cost estimate for the next number cruncher will be under $10k. But maybe that's just wishful thinking.

Thanks!
 
Solution
While any x86/x64 code will run fine on the AMD platform, before investing such a large sum I would do some research on whether your specific compiler performs the same on that platform or not. Otherwise, you may end up spending a lot and not getting the performance you want. There are actually companies that sell this kind of data from their own internal test labs; I would think it would be a wise investment.
 

I use the Intel Fortran compiler as well, and I also do AI. Unless you use some specific CPU extension, it will not be a problem. However, it won't be optimized as well, so that leaves out things like AVX-512, which can really speed up your training phase. Intel's libraries choose the lowest-common-denominator code path when they don't detect an Intel chip; they did this with MATLAB.

That said, why aren't you using vector libraries for AMD and NVIDIA GPUs? Then you can have hundreds of threads training at once.

Mind if I ask what kind of AI you are doing? Image recognition? Nature-based (i.e., path traversal)? Similarity analysis? Predictive?

I find FORTRAN to be too "clunky". And while I find C clunky too, it is at least more friendly from a programmer's standpoint. C is also closer to programmable shader languages.
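For reference, a lot of this comes down to the compile line. With the classic ifort it looks roughly like the following (train.f90 is just a stand-in name, and the exact flag spellings are worth double-checking against the compiler docs):

ifort -O3 -qopenmp -xCORE-AVX512 train.f90
    Intel-only dispatch; the resulting binary can refuse to run (or drop to a slow path) on non-Intel CPUs.
ifort -O3 -qopenmp -axCORE-AVX512 train.f90
    adds an AVX-512 path on top of a baseline path, but the fast path is only dispatched on Intel hardware.
ifort -O3 -qopenmp -march=core-avx2 train.f90
    plain AVX2 code that should run on any CPU supporting those instructions, AMD included.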
 

kanewolf

Titan
Moderator
One of the big benefits of Intel hardware is the Intel Math Kernel Library. Read this post about using the Threadripper -- https://www.pugetsystems.com/labs/h...for-Python-Numpy-And-Other-Applications-1637/
The Puget Systems folks build lots of workstation hardware.

Also read this article about MATLAB on AMD -- https://www.extremetech.com/computi...ss-matlab-cripple-amd-ryzen-threadripper-cpus
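If memory serves, that ExtremeTech piece boils down to an undocumented environment variable that forces older MKL builds onto their AVX2 code path even when the CPU vendor string isn't Intel, set before launching the program:

export MKL_DEBUG_CPU_TYPE=5

Treat it as a workaround rather than a supported feature -- newer MKL releases have reportedly removed the switch.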
 

Atterus

Distinguished
Thank you all. Sounds like I need to do a bit more research, both on the whole process and on exactly which libraries the code I'm using relies on. Unfortunately, AVX-512 sounds familiar, though I'm hoping that's just from when I was puttering around trying to improve the code itself. Same with the mentions of MKL, so it's really helpful to know that's a thing. I'd have to find out exactly how much of a hit there is in not using that optimization, assuming I'm using it already (not sure). I'll look into the resources linked. Thanks!

The main reason I'm not using a GPU is that the code we're using is very old at its core, and rejiggering it for a GPU would be a large task (I've poked around GPUs before, but it sounded like it would involve a lot of work; maybe I was looking at the hard way of doing it, though...). At least when I poked around, it seemed like I would need to recode things much like when I made a relatively minor change for CPU multithreading (which still took forever). I can't go into a ton of detail, but the current method effectively runs a different training optimization on each thread, completes the training thousands of different ways, then puts each resulting classifier on its own thread to perform tests. This all grew out of a process designed for single-core operation that originally took months to complete, haha.
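In outline, the current pattern looks something like this (the subroutine name is made up and the real code is far messier, but it shows the one-configuration-per-thread idea):

! build with: ifort -qopenmp train_many.f90
program train_many
  implicit none
  integer, parameter :: nconfig = 1000       ! number of independent training configurations
  double precision   :: score(nconfig)
  integer            :: i

  ! each OpenMP thread trains its own classifier with its own settings
  !$omp parallel do schedule(dynamic)
  do i = 1, nconfig
     call train_one_config(i, score(i))
  end do
  !$omp end parallel do

  print *, 'best score:', maxval(score)
end program train_many

subroutine train_one_config(id, score)
  implicit none
  integer, intent(in)           :: id
  double precision, intent(out) :: score
  ! placeholder for the real training/optimization work
  score = dble(id)
end subroutine train_one_config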

Thanks again! Research ahoy!
 
Efficiency can come from code improvements, but it can also come from brute force: throwing faster hardware at the problem. If you look at where things are at, most people just use the brute-force hardware approach because it ends up being cheaper. But better coding may be more cost-effective in your situation, since it sounds like GPUs could really do a lot of the work.