News DDR5-6400 RAM Benchmarks Show Major Performance Gains Over DDR4

Bandwidth looks great but latency is terrible, which is par for the course with every new memory standard. By the time I'm looking to build a DDR5 platform, which won't be until maybe 2024-2025, these modules should be much better.
 
I agree, those latencies are awful, but as DDR5 matures they should improve to a point. It'll be interesting to see how DDR5 performs going forward.
 
If it really was DDR5-6400, then CL40 would be faster than DDR4-3200 CL22. Even if it's DDR5-4800, CL40 should only be ~20% worse.

So there's either something else that's hosing up the memory module or AIDA64 doesn't quite know the best way to test the memory.
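
Working that comparison out explicitly (a quick Python sketch; the CL and speed figures are the ones from this thread):

```python
# First-word latency in nanoseconds: CL is counted in memory-clock cycles,
# and the clock runs at half the transfer rate, so t = 2000 * CL / MT/s.
def cas_ns(cl, mt_per_s):
    return 2000 * cl / mt_per_s

print(cas_ns(22, 3200))  # DDR4-3200 CL22 -> 13.75 ns
print(cas_ns(40, 6400))  # DDR5-6400 CL40 -> 12.5 ns (faster than DDR4)
print(cas_ns(40, 4800))  # DDR5-4800 CL40 -> ~16.7 ns (~21% worse)
```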
 
Bandwidth looks great but latency is terrible, which is par for the course with every new memory standard. [...]
One of the reasons for the extra lame latency is that DDR5 has ECC as standard.

No doubt someone will start to produce cheaper, lower-latency non-ECC DDR5 RAM if it is possible to do so. We will wait and see what happens; there is plenty of time before DDR5 hits the mass market for desktops, and then we will find out the reality of DDR5 in the desktop environment, rather than from this super crappy info release where we really do not even know what the RAM speed was.
 
One of the reasons for the extra lame latency is that DDR5 has ECC as standard. [...]

Some more info on DDR5

There will still be ECC memory even after launch.

From Anandtech

Ryan Smith - Tuesday, July 14, 2020
"So on-die ECC is a bit of a mixed-blessing. To answer the big question in the gallery, on-die ECC is not a replacement for DIMM-wide ECC.

On-die ECC is to improve the reliability of individual chips. Between the number of bits per chip getting quite high, and newer nodes getting successively harder to develop, the odds of a single-bit error is getting uncomfortably high. So on-die ECC is meant to counter that, by transparently dealing with single-bit errors.

It's similar in concept to error correction on SSDs (NAND): the error rate is high enough that a modern TLC SSD without error correction would be unusable without it. Otherwise if your chips had to be perfect, these ultra-fine processes would never yield well enough to be usable.

Consequently, DIMM-wide ECC will still be a thing. Which is why in the JEDEC diagram it shows an LRDIMM with 20 memory packages. That's 10 chips (2 ranks) per channel, with 5 chips per rank. The 5th chip is to provide ECC. Since the channel is narrower, you now need an extra memory chip for every 4 chips rather than every 8 like DDR4."

https://www.anandtech.com/show/1591...sed-setting-the-stage-for-ddr56400-and-beyond
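
To double-check the package math in that quote (a quick sketch assuming x8 DRAM packages and 8 ECC bits per channel):

```python
# Chips per rank = (data bits + ECC bits) / bits per chip, assuming x8 chips.
ecc_bits, chip_width = 8, 8

ddr4_chips = (64 + ecc_bits) // chip_width  # 9 chips: 1 extra per 8 data chips
ddr5_chips = (32 + ecc_bits) // chip_width  # 5 chips: 1 extra per 4 data chips

# The JEDEC LRDIMM diagram: 2 subchannels x 2 ranks x 5 chips per rank
print(ddr5_chips * 2 * 2)  # -> 20 packages, matching the quote
```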
 
I'd rather wait and see real-world performance. An increase in bandwidth doesn't always provide a meaningful improvement for most users, unless you are running some memory-bandwidth-hungry application, and the substantial increase in latency may result in a regression in some cases.
 
If it really was DDR5-6400 [...]
Going by the AIDA read performance, this was most likely DDR5-4400, or DDR5-4800 at best.
Note: ADL already seems to support DDR5-8400, according to Adata.
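
For context, the read figure can be mapped back to a module speed via peak theoretical bandwidth (a rough sketch; the dual-channel, 8-bytes-per-channel setup and ~90% AIDA efficiency are assumptions):

```python
# Peak theoretical bandwidth in GB/s: MT/s x bytes per channel x channels.
def peak_gb_s(mt_per_s, channels=2, bytes_per_channel=8):
    return mt_per_s * bytes_per_channel * channels / 1000

for speed in (4400, 4800, 6400):
    peak = peak_gb_s(speed)
    print(f"DDR5-{speed}: {peak:5.1f} GB/s peak, ~{0.9 * peak:5.1f} GB/s expected in AIDA")
```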

One of the reasons for the extra lame latency is that DDR5 has ECC as standard. [...]
This ECC functionality is not what you might think; it's not comparable to what is currently supported by server CPUs. Additionally, it is only optional and is mainly intended for large modules, so it will most likely be seen in server modules, RDIMMs, LRDIMMs, etc.
The main intention is to reduce bit failures in larger modules/chips, because error-free manufacturing is getting more expensive. And additionally, it corrects only single-bit errors. CPUs still have to provide explicit ECC functionality to offer something similar to today's ECC.
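
To illustrate what "corrects only single-bit errors" means, here is a toy Hamming(7,4) single-error-correcting code in Python (purely illustrative; the actual on-die scheme works over much wider words, and everything here is an assumption for the demo):

```python
# Toy Hamming(7,4) code: fixes any single flipped bit transparently,
# which is the kind of guarantee on-die ECC gives per protected word.
def encode(d):  # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]  # codeword positions 1..7

def decode(c):  # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4  # non-zero -> 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]  # extract the 4 data bits

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                         # simulate a single-bit upset
assert decode(word) == data          # one flip: corrected transparently

word = encode(data)
word[0] ^= 1; word[5] ^= 1           # two flips: beyond this code's reach
assert decode(word) != data          # miscorrected -- hence CPU-side ECC
```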

As @watzupken already mentioned, real-world tests would be preferable, because the much higher read performance will sometimes collide with the higher latencies, so the results may paint different pictures for different workloads (game engines, for example ;-))
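
A rough way to see that collision: model each access as fixed latency plus transfer time (the figures below are illustrative assumptions, not measurements):

```python
# Time to service one request = fixed latency + size / bandwidth.
# 1 GB/s == 1 byte/ns, so the units work out directly.
def access_ns(size_bytes, latency_ns, gb_per_s):
    return latency_ns + size_bytes / gb_per_s

ddr4 = dict(latency_ns=80, gb_per_s=50)   # assumed DDR4-ish figures
ddr5 = dict(latency_ns=95, gb_per_s=100)  # assumed DDR5-ish figures

for size in (64, 4096, 1_048_576):        # cache line, page, 1 MiB
    print(size, access_ns(size, **ddr4), access_ns(size, **ddr5))
# 64 B:  latency dominates  -> the DDR4-ish config wins
# 1 MiB: bandwidth dominates -> the DDR5-ish config wins by nearly 2x
```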
 
Having twice as much bandwidth that yields around 30% extra performance in the real world may be nice, but I'd really be curious about the cost. DDR4 3200-16-18-18 is cheap and plentiful, while DDR5-6400 is currently about as common and about as cheap as unicorn poop. It took about two years for DDR4 to match the price and performance of low-latency DDR3; the same thing happened with DDR to DDR2 and DDR2 to DDR3, and it will almost certainly be the same again with DDR5.
 
Some more info on DDR5 [...]
Thank you for this clarification 👍😀
 
Btw, just to point out that the Xbox Series X/PS5 are already using GDDR6 RAM for their processors. No doubt it's a custom SoC, but it does show that Zen 2 has no problem with GDDR6 memory.
 
Btw, just to point out that the Xbox Series X/PS5 are already using GDDR6 RAM for their processors. [...]
GDDR6 and DDR4/5 are completely different things, and the "version numbers" have nothing in common, because these standards are developed independently. For example, GDDR5 is internally more closely related to DDR3 than to DDR4.
Additionally, these memory solutions solve different problems. DDR (RAM) has low latency, which is important for feeding CPUs running multiple threads with different workloads and heavy random access in small chunks.
GDDR (VRAM), on the other hand, has a much wider interface and is mainly laid out for the large bulk transfers typically needed in GPU workloads.

The consoles use GDDR because it would be too expensive to implement two memory subsystems, and there the graphics workload has the higher priority, so the CPU subsystem has to get along with GDDR6.

And btw, DDR5 and LPDDR5 are also not directly related to each other. (For example, Tiger Lake can already handle LPDDR5, but most or all manufacturers currently consider LPDDR5 too expensive, so all designs (as far as I know) use LPDDR4X at best, even though the iGPU would most likely profit from LPDDR5; it's a simple trade-off.)
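
To put rough numbers on the "much wider interface" point (a back-of-the-envelope sketch; all figures are assumed round numbers):

```python
# Peak bandwidth in GB/s = transfer rate x bus width in bytes.
def peak_bandwidth_gb_s(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits / 8) / 1000

print(peak_bandwidth_gb_s(3200, 128))   # dual-channel DDR4-3200:  ~51.2 GB/s
print(peak_bandwidth_gb_s(14000, 256))  # 256-bit GDDR6 @ 14 Gbps: ~448 GB/s
```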