[SOLVED] Big data simulations - the best AMD socket to invest in

chrismath

Distinguished
Mar 1, 2012
I am building a system for big data simulations and large-scale statistical modelling and would like to hear opinions on the best route, considering currently available hardware and the ability to upgrade in the future. My statistical modelling requires high CPU and RAM capacity. GPU computation is not yet fully supported for the problems I'm trying to solve. After comparing computational performance benchmarks between Intel and AMD, I have made up my mind to stick with AMD.

My main question is this: between sockets SP3, TRX4, WRX8 and AM4, which should I be aiming for to get decent value for the money, considering current hardware trends and future upgrade potential? Ideally I'm looking at a CPU with a minimum of 24 cores (48 threads) and a minimum of 128 GB RAM (the more the better, though I don't think I'll cross the 512 GB threshold).

AM4 is mostly aimed at gamers, it's getting old, and I'm not sure there will be future support for it.
TRX4 looks like a decent choice for enthusiasts.
WRX8 is a brand-new workstation platform (expensive!!!)
SP3 looks like a solid choice for server hardware and would be my go-to; however, the socket has been around for some time and I'm worried it will be replaced by 2022.

One more thing: so far I haven't had to think about my build, because I run my tasks on a high-performance cluster available to me through university affiliations. I know these servers use registered (buffered) ECC RAM; how much of a dealbreaker is non-ECC unbuffered RAM for big data computations?

Thanks in advance,
Chris.

PS: sometimes I'm actually gaming ;)
 

Eximo

Titan
Ambassador
If you hadn't added that last bit, I would have suggested just renting time in the cloud.

Well, DDR5 is about to hit the market, so pretty much anything you buy now will have no long-term upgrade path. AMD is sticking with DDR4 for the Zen 3 Threadrippers, though, so TRX4 and WRX8 are likely to be around for quite a few years. Unless AMD makes hybrid chips that support both DDR4 and DDR5, there might not be drop-in replacements to upgrade the CPU, though.

ECC is about reliability: if you can afford to have a project crash and lose the work every once in a while, it's not a big deal. Plenty of AMD boards support ECC anyway, so that isn't a huge concern.

AM4 is certainly near end of life, and you can't get 24 cores on it anyway.
 
Solution

JWNoctis

Respectable
Jun 9, 2021
For your compute needs, you may still want to look into cloud services instead, which could be more cost-effective for your use case.

For gaming, that's a separate problem whose optimal solution still lies with a dedicated gaming rig, since few games would benefit from 24 cores, memory much above 64 GB, or ECC memory.

AM4 is still perfectly good for a gaming PC, except for the limited upgradability.

But if you want physical hardware of your own for all your needs, wait for Zen 3 Threadripper.
 

kanewolf

Titan
Moderator
Does your problem work with distributed resources (clusters)? If so, two (or more) hosts might be your best investment. That way, if one has an issue, you still have resources available.
For BIG CPU needs, you really should look into cloud resources.
 

chrismath

Distinguished
Mar 1, 2012
Thanks for the replies!

I have thought about a cloud solution; however, an example of a memory-optimized cloud instance with 24 cores and 192 GB RAM was priced at $2.70 per hour. For 8 hours of daily usage, this amounts to $7,884 per year. In that case I might be better off with a mid-tier SP3 server with a 2nd-gen EPYC at 24 cores and 128 GB DDR4, which I can build at my place for about $3,500 (e.g. H12SSL-i motherboard, EPYC 7352 CPU, 8×16 GB DDR4 @ 3200 MHz, 80+ Platinum 750 W PSU, 1 TB SSD, mid-range GPU and case).
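For anyone who wants to plug in their own numbers, here's the break-even maths as a quick script, using the rates I quoted above (adjust them for your own region and instance type):

```python
# Rough break-even between renting cloud time and a one-time build.
# Assumed numbers, taken from my post above: $2.70/hr for a 24-core /
# 192 GB instance, 8 hours of use per day, ~$3,500 build cost.
CLOUD_RATE = 2.70        # $ per hour
HOURS_PER_DAY = 8
BUILD_COST = 3500        # one-time cost of the SP3 build, $

yearly_cloud = CLOUD_RATE * HOURS_PER_DAY * 365
breakeven_days = BUILD_COST / (CLOUD_RATE * HOURS_PER_DAY)

print(f"Cloud cost per year: ${yearly_cloud:,.0f}")
print(f"Build pays for itself after ~{breakeven_days:.0f} days of use")
```

At roughly $7,884 per year for the cloud, the build breaks even after about five and a half months of 8-hour days, ignoring power, maintenance, and resale value.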

PS: I mostly care about running my computational projects, which are RAM-heavy and CPU-demanding. It would be nice, however, to also be able to play the latest titles at 60 fps on high settings at the humble 1080p resolution.
 

kanewolf

Titan
Moderator
You should use the AWS pricing calculator to get a better approximation. But just looking at the on-demand pricing, you can get 32 vCPUs and 128 GB RAM for about half the cost you listed above.
One advantage of using the cloud, even if only to test, is that you can validate how much CPU your task can actually use. You may find, for example, that the hyperthreaded cores don't benefit you and that only physical cores improve performance. I worked in a high-performance computing environment, and we disabled hyperthreading as part of initializing new hardware.
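If you want to test that on your own code before buying, a rough scaling check like this can help. The toy pure-Python workload below is only a stand-in for your real computation, and timings will vary by machine, but the pattern is the point: if total time stops dropping once workers exceed the physical-core count, hyperthreading isn't helping.

```python
# Toy scaling test: time a CPU-bound workload with increasing worker
# counts. If the total time stops dropping once workers exceed the
# physical-core count, hyperthreading isn't helping this workload.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # Meaningless arithmetic, just to keep one core saturated.
    total = 0
    for i in range(n):
        total += i * i
    return total

def time_with_workers(workers: int, tasks: int = 8, n: int = 200_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_work, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    logical = os.cpu_count() or 1
    for w in sorted({1, 2, max(1, logical // 2), logical}):
        print(f"{w:3d} workers: {time_with_workers(w):.2f} s")
```

Swap busy_work for something representative of your actual kernels; a trivial arithmetic loop can behave very differently from memory-bound statistical code.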
 

kanewolf

Titan
Moderator
The other thing I learned from working in HPC is that CPU architecture and software environment matter. The Intel Math Kernel Library can make HUGE performance improvements when using Intel CPUs and the Intel compiler.
You also need to look at memory access. The newest CPUs have 6- or 8-channel memory interfaces, so your 128 GB of RAM might be the wrong amount: you have to populate all the memory channels for optimum performance. If you have a Xeon with a 6-channel memory interface, 96 GB or 192 GB would be the best answer.
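A quick sketch of the balanced capacities, assuming one identical DIMM per channel and the common DDR4 DIMM sizes:

```python
# Balanced memory configurations: total capacity when every channel is
# populated with one identical DIMM. Common DDR4 DIMM sizes assumed.
DIMM_SIZES_GB = [8, 16, 32, 64]

# e.g. Threadripper = 4 channels, many Xeons = 6, EPYC = 8
configs = {ch: [ch * size for size in DIMM_SIZES_GB] for ch in (4, 6, 8)}
for ch, totals in configs.items():
    print(f"{ch}-channel: {totals} GB")
```

For a 6-channel Xeon that gives 96 GB (6×16) or 192 GB (6×32) as the balanced options closest to a 128 GB target, while an 8-channel EPYC hits 128 GB exactly with 8×16.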
 

JWNoctis

Respectable
Jun 9, 2021
Or, if you are feeling adventurous, you can hand-optimize your code for the architecture, like the devs of y-cruncher and Prime95 did. Indeed, Intel might well have a performance advantage (including on current, but sadly not future, generation consumer CPUs) if your use case can make efficient use of AVX-512.
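Before committing to that, it's worth checking which AVX-512 variants a given machine (or cloud instance) actually exposes. On Linux, a few lines are enough to read the feature flags from /proc/cpuinfo:

```python
# Check which AVX-512 feature flags the CPU exposes.
# Linux-only: parses the "flags" line of /proc/cpuinfo.
def cpu_flags(path: str = "/proc/cpuinfo") -> set:
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()  # no "flags" line found (e.g. non-x86 CPU)

avx512 = sorted(f for f in cpu_flags() if f.startswith("avx512"))
print("AVX-512 variants:", ", ".join(avx512) or "none found")
```

On a capable chip you'd see entries like avx512f and avx512dq; an empty result means your code would fall back to AVX2 or worse.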