Kingston Fury Renegade DDR5-6400 C32 2x48GB Review: Heavy-Duty Memory Kit

abufrejoval

Since I use my systems as workstations that need reliability over speed, I'm only now doing my first DDR5 system, based on a Ryzen 9 7950X3D. It follows various 5950X builds using 4x32GB of DDR4-3200 ECC, and earlier Xeon E5 v3/v4 systems, also with 128GB of ECC RAM.

As usual, I simply ordered four of the fastest Kingston "Server RAM" ECC unregistered 32GB modules available (DDR5-5600), since the markup for 192GB in 48GB modules was rather stiff, to go with an ASUS X670E Pro WIFI and the 7950X3D.

I put things together as an open assembly on the dinner table for testing and watched it POST... evidently not.

It got stuck with a yellow LED, indicating a RAM issue.

To cut a very long story slightly shorter: I had totally underestimated just how long DDR5 systems will train RAM on a first boot. Four minutes before a 128GB system shows any sign of life is evidently just the new normal.

Unfortunately, if training fails, the board just keeps showing that yellow LED forever: there is no visible distinction between training in progress and a failed attempt on these ASUS boards. That's why it took days to figure things out.

So eventually it would boot with those four modules, but fell back to DDR5-3600 timings with about half the expected bandwidth: nearly identical to DDR4, without the economy.
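For scale, the theoretical peaks work out roughly like this (a back-of-envelope sketch; assumes a dual-channel platform moving 8 bytes per channel per transfer, with measured throughput landing well below these):

```python
# Theoretical peak bandwidth: data rate (MT/s) x 8 bytes x channel count
def peak_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * 8 * channels / 1000

print(peak_gbs(5600))  # DDR5-5600: 89.6 GB/s
print(peak_gbs(3600))  # DDR5-3600 fallback: 57.6 GB/s
print(peak_gbs(3200))  # DDR4-3200 for comparison: 51.2 GB/s
```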

It would not boot at all with four DDR5-5200 ECC Kingston modules, which support almost the same range of timings down to a theoretical DDR5-1600.

With only two sticks it would easily reach the promised timings. I could even overclock the DDR5-5600 modules to what seemed a reliable DDR5-6000, ECC intact.

But even after hours of reading through forum posts and trying the proposed timings and termination resistance parameters, I could never go faster than DDR5-4200 with four of the DDR5-5600 modules.

And according to AMD, even that was clearly out of spec, something I try hard to avoid with my maximum-reliability approach: I pay extra for ECC for a reason!

My contact with ASUS support was quick and friendly, but judging by the compatibility databases of all the mainboard vendors, basically nobody offers official ECC support with four sticks. Nor could I find a RAM vendor qualified for 192GB, even without (external) ECC.

For now, IMHO, DDR5 has quite simply deteriorated into a one-DIMM-per-channel RAM architecture, and I'm currently waiting for two 48GB DDR5-5600 ECC modules, since that's the only way to come near the RAM capacity I've become accustomed to, even if I don't always need it.

I wish there were a way to switch between a 64GB-fast and a 128GB-slower mode on the fly, much like the 7950X3D is a dual-nature CPU, suited to both 8-core gaming and 16-core productivity workloads.

I don't mind paying a little extra for RAM, but paying the 128GB equivalent to be limited to 96GB isn't how I imagined progressing from AM4 to AM5.

Perhaps Tom's Hardware could do a piece on high-capacity DDR5 with ECC support across architectures and vendors, as the benefits of ECC clearly scale with RAM size.
 

JWNoctis

In my limited (n=1) experience, these large non-binary modules seem to have pretty loose secondary timings that you can try to tighten by hand. But that's coming from someone whose Corsair 2x48GB 6000C30 kit - same 3GB Hynix M-die as in the review - won't stabilize at 6400, even with an uncomfortable overvolt and VSoC pegged to 1.3V on a 7800X3D.

I made it stable at tRAS=48 and tRCD=500, getting to 64ns on the AIDA64 latency benchmark (from 79ns(!) on stock A-XMP) with 1.36V VDD and 1.1V SoC, and called it a day after three days of trying and three more of various memory stress tests. In retrospect, the failure at 6400 probably had as much to do with the processor not stabilizing at 2133MHz FCLK as with the memory itself.
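For reference, converting those cycle counts into absolute latency is simple arithmetic (a sketch; DDR transfers twice per clock, so 6000 MT/s implies a 3000 MHz memory clock):

```python
# Absolute latency in ns = cycles / (MT/s / 2) * 1000
def timing_ns(cycles: int, mt_per_s: int) -> float:
    return cycles / (mt_per_s / 2) * 1000

print(timing_ns(30, 6000))  # CL30 @ 6000 MT/s    = 10.0 ns
print(timing_ns(48, 6000))  # tRAS=48 @ 6000 MT/s = 16.0 ns
```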

Considering that this kit cost more than the V-Cache processor itself, I'm in no hurry to replace it and intend to make it last as long as possible.

abufrejoval said:
Since I use my systems as workstations that need reliability over speed, I'm only now doing my first DDR5 system, based on a Ryzen 9 7950X3D. It follows various 5950X builds using 4x32GB of DDR4-3200 ECC, and earlier Xeon E5 v3/v4 systems, also with 128GB of ECC RAM.

As usual, I simply ordered four of the fastest Kingston "Server RAM" ECC unregistered 32GB modules available (DDR5-5600), since the markup for 192GB in 48GB modules was rather stiff, to go with an ASUS X670E Pro WIFI and the 7950X3D.

I put things together as an open assembly on the dinner table for testing and watched it POST... evidently not.

It got stuck with a yellow LED, indicating a RAM issue.

...

So eventually it would boot with those four modules, but fell back to DDR5-3600 timings with about half the expected bandwidth: nearly identical to DDR4, without the economy.

It would not boot at all with four DDR5-5200 ECC Kingston modules, which support almost the same range of timings down to a theoretical DDR5-1600.

...

I seem to recall someone mentioning how they stabilized a 4x32GB setup at 6000C30 somewhere, with a ZenTimings screenshot, when I researched what to do with mine.

Hopefully Ryzen 9000 will eventually come with a stronger memory controller, and maybe more IF bandwidth as well. Or maybe we'll finally get CXL on PC, which could be a good use for PCIe 5.0 bandwidth at long last.
 
Reactions: abufrejoval
abufrejoval said:
For now, IMHO, DDR5 has quite simply deteriorated into a one-DIMM-per-channel RAM architecture, and I'm currently waiting for two 48GB DDR5-5600 ECC modules, since that's the only way to come near the RAM capacity I've become accustomed to, even if I don't always need it.
AMD's memory controller seems to struggle quite a bit with DDR5 frequencies without the offset ratios. Intel is significantly better with regard to the memory controller, but the frequency loss for 2DPC isn't insignificant. For 2DPC I'd expect RPL to run 5600+ without any trouble, but ADL is likely to be luck of the draw. It's possible that with clock drivers, which are required for 6400+ JEDEC, 2DPC support may be better.
 
Reactions: abufrejoval

abufrejoval

JWNoctis said:
In my limited (n=1) experience, these large non-binary modules seem to have pretty loose secondary timings that you can try to tighten by hand. But that's coming from someone whose Corsair 2x48GB 6000C30 kit - same 3GB Hynix M-die as in the review - won't stabilize at 6400, even with an uncomfortable overvolt and VSoC pegged to 1.3V on a 7800X3D.

I made it stable at tRAS=48 and tRCD=500, getting to 64ns on the AIDA64 latency benchmark (from 79ns(!) on stock A-XMP) with 1.36V VDD and 1.1V SoC, and called it a day after three days of trying and three more of various memory stress tests. In retrospect, the failure at 6400 probably had as much to do with the processor not stabilizing at 2133MHz FCLK as with the memory itself.
Perhaps with a more detailed debug display or port I might have investigated more, but just trying to duplicate settings people had reported success with generally resulted in hung memory, even without raising the RAM clocks. After two days of experimentation I simply asked myself whether I could live better with the bandwidth constraints or with the reduced amount of RAM. And since I'm only replacing one of three 128GB systems, I opted for the RAM reduction, which need not be final if I ever decide to swallow the bandwidth pill.

I'm just surprised that I found so little talk about how much things have changed for DDR5 in 2DPC scenarios; it was all centered on the price difference, not the capacity issues.

And on one of my Ryzen 5950X systems I had actually started with only 1DPC of DDR4-3200 ECC and upgraded later: superficially the same Kingston DIMMs, but on closer inspection they were even from mixed vendors (Samsung & Hynix), yet they work seamlessly together across both channels without issue there.
JWNoctis said:
Considering that this kit cost more than the V-Cache processor itself, I'm in no hurry to replace it and intend to make it last as long as possible.

I seem to recall someone mentioning how they stabilized a 4x32GB setup at 6000C30 somewhere, with a ZenTimings screenshot, when I researched what to do with mine.

Hopefully Ryzen 9000 will eventually come with a stronger memory controller, and maybe more IF bandwidth as well. Or maybe we'll finally get CXL on PC, which could be a good use for PCIe 5.0 bandwidth at long last.
I've seen those screenshots and tried replicating them, mostly in terms of termination Ohms and voltages, without raising the clocks just yet, but that just stopped the modules, which use all-Hynix dies too, from working at all.

I'd like to see a bit more in-depth reporting on Intel vs. AMD here, just to understand how much of this is a barrier of physics versus silicon-budget engineering.

When you push signals at these speeds across copper traces, every millimeter obviously counts. But this steep a penalty for 2DPC seemed somewhat novel, until I remembered that Kabini had a somewhat similar issue at DDR3-2400, which just wouldn't work with 2DPC.

In both cases AMD's specs clearly state the limitations, and with Kabini I remember that they just couldn't be overcome.

So perhaps I've just been "misled" by the Ryzen 5000 series doing so much better in 2DPC than what came before and after...

My initial fear was that ASUS had simply "cheaped out" on validating high-capacity 2DPC configurations, given the gamer focus of these entry-level boards.

But then I saw the story repeating itself across vendors, and now my impression is that while you may luck out with 1DPC bandwidths on a 2DPC setup, you are almost guaranteed to pay for it in safety margin. In my line of work I at least need to be able to choose between "safe" and "fast", depending on the workload and ideally at run-time.

That would be a rather welcome feature: being able to activate the second set of modules only when I'm running LLMs or extra VMs, while opting for speed otherwise.
 
Reactions: JWNoctis

JWNoctis

abufrejoval said:
Perhaps with a more detailed debug display or port I might have investigated more, but just trying to duplicate settings people had reported success with generally resulted in hung memory, even without raising the RAM clocks. After two days of experimentation I simply asked myself whether I could live better with the bandwidth constraints or with the reduced amount of RAM. And since I'm only replacing one of three 128GB systems, I opted for the RAM reduction, which need not be final if I ever decide to swallow the bandwidth pill.

I'm just surprised that I found so little talk about how much things have changed for DDR5 in 2DPC scenarios; it was all centered on the price difference, not the capacity issues.

...

That would be a rather welcome feature: being able to activate the second set of modules only when I'm running LLMs or extra VMs, while opting for speed otherwise.
For that use case, it might be worth considering a non-Pro Threadripper that won't break the bank as much, at some point. I think there are 4x48GB DDR5-7200 kits for that now - ECC RDIMM too, if I recall. That kind of setup could net you somewhere around 2 token/s instead of 0.6 token/s* on a consumer Ryzen, for a hypothetical LLM that would need 96GB to run, turning a slog into something that might be marginally useful in production. But I digress.

*(I'm under the impression that AM5 Ryzen 7000s are bottlenecked on absolute memory bandwidth by IF bandwidth: 64GB/s read and nCCD×32GB/s write, assuming 2000MHz FCLK. Though it won't actually matter, except on exotic workloads like, as you say, LLMs.)
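A rough sanity check of those token rates (a sketch; assumes a purely bandwidth-bound decoder that streams the full weight set from RAM once per generated token, with illustrative figures):

```python
# tokens/s ~= effective memory bandwidth / model size,
# when every token must read all weights once
def tokens_per_s(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

print(tokens_per_s(64, 96))   # AM5, IF-limited reads:           ~0.67 token/s
print(tokens_per_s(230, 96))  # 4-channel DDR5-7200 (~230 GB/s): ~2.4 token/s
```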
 

abufrejoval

JWNoctis said:
For that use case, it might be worth considering a non-Pro Threadripper that won't break the bank as much, at some point. I think there are 4x48GB DDR5-7200 kits for that now - ECC RDIMM too, if I recall. That kind of setup could net you somewhere around 2 token/s instead of 0.6 token/s* on a consumer Ryzen, for a hypothetical LLM that would need 96GB to run, turning a slog into something that might be marginally useful in production. But I digress.

*(I'm under the impression that AM5 Ryzen 7000s are bottlenecked on absolute memory bandwidth by IF bandwidth: 64GB/s read and nCCD×32GB/s write, assuming 2000MHz FCLK. Though it won't actually matter, except on exotic workloads like, as you say, LLMs.)
You may well be right, but it's not just the purchase price, it's also power and heat. With Threadripper, costs escalate quickly, and I'm not running LLMs (or any other workload) 24x7; mostly I'm doing experiments in the home lab which I then use to build the real systems later, which are quite EPYC.

And the home lab does see some after-hours gaming use, especially once I pass the older systems on to my kids...
 

abufrejoval

Got two KSM56E46BD8KM-48HM "Kingston Server Premier" modules today: DDR5-5600 with [real] ECC.

With just two DIMMs there are zero issues; they'll even overclock to DDR5-6400 and have been OCCT and Prime95 (mixed) stable for a few hours so far.

But since the automatic timing adjustments seem to extrapolate linearly from the SPD defaults, there is also very little gain in actual performance. I see such initial overclocks mostly as a stability exercise or headroom test, so I'm back to official timings, with 96GB of RAM at a 128GB price.
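To illustrate what that linear extrapolation means (a sketch; JEDEC SPD stores timings as absolute nanoseconds, so a higher data rate just yields proportionally more cycles; the 16.4ns tAA below is an illustrative value for a DDR5-5600 CL46 bin):

```python
import math

# SPD stores timings in ns; firmware derives the cycle count at each data rate
def cycles_for(ns: float, mt_per_s: int) -> int:
    return math.ceil(ns * (mt_per_s / 2) / 1000)

taa_ns = 16.4  # illustrative tAA (CAS latency) from the SPD
for mt in (5600, 6400):
    cl = cycles_for(taa_ns, mt)
    print(mt, cl, round(cl / (mt / 2) * 1000, 2))
# -> 5600 CL46 ~16.43ns, 6400 CL53 ~16.56ns: absolute latency is
#    unchanged, so only raw bandwidth improves with the higher clock
```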
 
Reactions: JWNoctis