Mushkin Redline ECC Black DDR4-3600 C16 Review: Overclocked ECC Performer

abufrejoval

Reputable
Jun 19, 2020
My home-lab workstations may occasionally see some gaming action after hours, but their primary use case is IT architecture design with nested virtualization and machine-learning workloads.

So they got 128GB of RAM, or roughly a trillion bits, and at that point I start worrying about bit flips in long-running critical workloads. So I put ECC on them for peace of mind and was quite glad DDR4-3200 ECC modules were available at all when the Ryzen 9 5950X launched, even if I had to hunt them down piecemeal (3 purchases over several weeks to get 4 DIMMs!).
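Just for scale, a back-of-the-envelope calculation of how many bits are actually sitting in that much RAM (my own rough numbers, nothing more):

```python
# Back-of-the-envelope: how many bits sit in 128 GB of RAM.
capacity_bytes = 128 * 2**30           # 128 GiB in bytes
bits = capacity_bytes * 8
print(f"{bits:,} bits (~{bits:.1e})")  # 1,099,511,627,776 bits, ~1.1e+12
```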

I tested the system in the meantime with a dual-channel DDR4-4000 kit and yes, synthetic benchmarks showed slightly better results. Can't say I ever noticed it in gaming, but the main issue there is that at native 4K on a 42" screen even an RTX 2080 Ti is "challenged", and among the next-gen GPUs even an RTX 3090 Ti doesn't seem guaranteed to handle 4K at high quality settings, either.

The original IBM PC came with extra DIL RAM chips for parity protection back in 1981, and I've always felt safer knowing that I'd at least be notified if RAM were to go bad. So I'm glad ECC is becoming more mainstream with DDR5, but then susceptibility to Rowhammer-like attacks is also rising steeply as density creeps up.
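For anyone who never dealt with those ninth chips: each stored byte carried one extra parity bit, which is enough to detect (but not correct) a single flipped bit. A minimal illustrative sketch of that idea, not how any real memory controller implements it:

```python
# Even parity: store one extra bit so the total count of 1-bits
# (data + parity) is even. A single bit flip breaks the invariant
# and gets detected, though not corrected or even located.

def parity_bit(byte: int) -> int:
    return bin(byte & 0xFF).count("1") % 2

data = 0b1011_0010
stored_parity = parity_bit(data)        # written alongside the byte

corrupted = data ^ 0b0000_1000          # one bit flips in storage
ok = parity_bit(corrupted) == stored_parity
print("parity check passed" if ok else "parity error reported")
```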
 

Lord_Moonub

Commendable
Nov 25, 2021
Small point here. The use of penultimate in the article is questionable.

Ultimate usually refers to the premier position, number 1 in a ranking.

Penultimate does literally mean "last but one" in a sequence, and in the article it is used to indicate "last but one" in a list. That isn't technically wrong, but the sort order readers assume with this expression is "worst to best", not "best to worst". So using penultimate this way is likely to cause confusion: readers would take the penultimate performer to be the second best, not the second worst.
 

abufrejoval

Reputable
Jun 19, 2020
So I went ahead and ran Sandra myself just to compare:

1. On the 16-core Ryzen 9 5950X with Kingston DDR4-3200 at CL22 timings I get slightly better than 36 GB/s of bandwidth from dual-channel memory and a rather unimpressive-sounding 73 ns of latency. The various caches, including 2x32 MB of L3, seem to help alleviate that, because the machine is certainly no slouch.

2. On the 18-core Haswell Xeon E5-2696 v3 with Micron DDR3-2133 at CL15 timings I get 51.3 GB/s of bandwidth, because it's 4 channels, and a much better-sounding 42 ns of latency. Again, the unified 45 MB L3 as well as the dedicated L1/L2 should help with workloads that have good locality.
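For what it's worth, the kind of number Sandra reports can be approximated very crudely by timing one big copy. This is just my own rough sketch (assuming numpy is installed), not Sandra's methodology, and it will read lower than a tuned multi-threaded streaming benchmark:

```python
import time
import numpy as np

# Crude streaming-bandwidth estimate: time one large copy and count
# the bytes read from the source plus the bytes written to the target.
N = 256 * 1024 * 1024                 # 256M float32 elements, ~1 GiB
src = np.ones(N, dtype=np.float32)
dst = np.empty_like(src)

t0 = time.perf_counter()
np.copyto(dst, src)
elapsed = time.perf_counter() - t0

bytes_moved = 2 * src.nbytes          # read + write
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s")
```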

The first system actually started with a Ryzen 7 5800X, and at that point the two were very nearly an exact match for multicore workloads like Blender; even energy consumption for the CPUs was fairly similar at around 110 Watts, even though the Haswell is officially a 150 Watt TDP SKU. Of course the monolithic Intel chip was vastly bigger at 22nm and vastly more expensive at official prices.

But I got mine cheap from China, where the OEM top-bin E5-2696 v3 (with a 3.8 GHz peak clock) had been sold to hyperscalers at a fraction of E5-2699 v3 list prices. Of course, on single-threaded workloads the Ryzens thrash any Haswell, and the 5950X adds another 60% to what the 5800X can do with only 30 more Watts (140 instead of 110).

Yes, ECC DIMMs rarely qualify as speed demons, and Sandra makes RAM look rather bad on AMD Zen 3, but thankfully that doesn't seem to translate to applications. For pure gamers ECC is most likely a waste of money, but if you're also running long money-maker workloads, ECC won't set you back much, in either money or performance.