News DDR5 Specifications Land: Up To 8400 MHz, Catering To Systems With Lots of Cores

bit_user

Polypheme
Ambassador
Presumably, on-die ECC is optional? Is there also any form of parity on the data/address bus?

I also wonder how DDR5 latencies will compare with earlier memory generations (i.e. in terms of ns).
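
The back-of-the-envelope conversion is CAS cycles divided by the memory clock (which is half the transfer rate for DDR). Here's a quick sketch of that arithmetic; the CL values are purely illustrative picks rather than spec numbers, and the helper name is just mine:

Code:
# Rough first-word latency in ns: CAS cycles / memory clock.
# For DDR, the memory clock is half the transfer rate.
# CL values below are illustrative examples only, not spec numbers.
def cas_latency_ns(transfer_rate_mts, cl_cycles):
    clock_mhz = transfer_rate_mts / 2      # two transfers per clock
    return cl_cycles / clock_mhz * 1000    # cycles / MHz -> nanoseconds

for name, (rate, cl) in {
    "DDR3-1600 CL9":  (1600, 9),
    "DDR4-3200 CL16": (3200, 16),
    "DDR5-6400 CL40": (6400, 40),
}.items():
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns")
# -> 11.25 ns, 10.00 ns, 12.50 ns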
 
Presumably, on-die ECC is optional? Is there also any form of parity on the data/address bus?

I also wonder how DDR5 latencies will compare with earlier memory generations (i.e. in terms of ns).

I would assume that, as with most new memory generations, latency will go up as the trade-off for the higher speeds. But with time we will also see speeds climb further and latencies come down from their original highs.
 

bit_user

Polypheme
Ambassador
I would assume that, as with most new memory generations, latency will go up as the trade-off for the higher speeds. But with time we will also see speeds climb further and latencies come down from their original highs.
Here's an interesting comparison of DDR3 vs. DDR4. In their tests, the best DDR4 latencies never quite equaled the best DDR3, which is roughly as I expected.


Note that two of their test systems are dual-channel, while the other two are quad-channel. Presumably, that explains the differences within the same memory speed grade. It's also interesting to see how much channel-doubling can compensate for lower memory speeds (note: 2x the channels isn't simply 2x as fast).
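
For reference, the theoretical peak is just transfer rate x 8 bytes per 64-bit channel x channel count, and measured throughput scales well below linearly as channels are added, which is the point. A rough sketch (my own helper, theoretical numbers only):

Code:
# Theoretical peak bandwidth: MT/s x 8 bytes (64-bit channel) x channels.
# Measured throughput scales less than linearly with channel count.
def peak_gbs(transfer_rate_mts, channels):
    return transfer_rate_mts * 8 * channels / 1000

for rate in (2400, 3200):
    for ch in (2, 4):
        print(f"DDR4-{rate}, {ch}ch: {peak_gbs(rate, ch):.1f} GB/s peak")
# -> 38.4, 76.8, 51.2, 102.4 GB/s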
 

spongiemaster

Admirable
Does this even matter besides in enterprise level applications? Especially on the Intel side, you're well into the who-cares range of performance increases by the time you're in the mid-3000s. Moving from dual channel to quad channel also rarely results in any tangible improvements. Is there going to be a significant drop in latency?
 

Chung Leong

Reputable
Presumably, on-die ECC is optional? Is there also any form of parity on the data/address bus?

Sounds like it's not. They mentioned how on-die ECC is going to improve node scaling by correcting single-bit errors. Presumably, too many chips would not work reliably without it.
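
To illustrate the principle: a single-error-correcting code stores a few extra check bits per data word and uses them to locate and flip back one bad bit. Below is only a toy Hamming(7,4) sketch of the idea; DDR5's on-die scheme is reportedly a much wider SEC code (128 data bits plus 8 check bits), and the actual implementation isn't public:

Code:
# Toy single-error-correcting Hamming(7,4) code, just to show how an on-die
# ECC scheme can fix one flipped bit inside a word. Not the real DDR5 code.

def hamming74_encode(d):                 # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):                # c: 7-bit codeword, possibly 1 flip
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3      # 0 = clean, else 1-based error index
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]      # recovered data bits

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[5] ^= 1                             # simulate a single-bit upset
assert hamming74_correct(word) == data
print("single-bit error corrected")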
 

bit_user

Polypheme
Ambassador
Moving from dual channel to quad channel also rarely results in any tangible improvements.
Is that still true for 8+ cores?

I feel like Intel knew what they were doing, when they outfitted their server and workstation platforms with > 2 channels.

Is there going to be a significant drop in latency?
If you check the TechSpot link, the quad-channel setups have higher latency. Of course, they're also both one generation older than the 2-channel CPUs.
 
To those saying faster memory or more channels aren't needed: they surely are, especially in the enterprise. You can't keep a 64-core CPU properly fed with dual-channel 3200 MT/s memory, i.e., you get a huge, performance-killing memory bottleneck. This is also why you don't see SMT4: until memory bandwidth increases, SMT4 is pretty pointless for many workloads because you could never keep the CPU fed, unless you build in a ton of level 4 cache like the IBM POWER9 chips did with 128 MiB of it.
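
Rough arithmetic, ignoring caches and real-world efficiency, makes the starvation obvious (the helper and the eight-channel comparison point are just my own illustration):

Code:
# Back-of-the-envelope peak bandwidth per core (64-bit channels, caches ignored).
def per_core_gbs(transfer_rate_mts, channels, cores):
    return transfer_rate_mts * 8 * channels / 1000 / cores

print(per_core_gbs(3200, 2, 64))   # 64 cores, dual channel   -> 0.8 GB/s per core
print(per_core_gbs(3200, 8, 64))   # 64 cores, eight channels -> 3.2 GB/s per core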
 
Here's an interesting comparison of DDR3 vs. DDR4. In their tests, the best DDR4 latencies never quite equaled the best DDR3, which is roughly as I expected.


Note that two of their test systems are dual-channel, while the other two are quad-channel. Presumably, that explains the differences within the same memory speed grade. It's also interesting to see how much channel-doubling can compensate for lower memory speeds (note: 2x the channels isn't simply 2x as fast).

It is interesting but mostly expected. I expect higher latencies as we get to faster speeds, and there will be some trade-offs depending on what you are doing with the system: some workloads like tighter timings and lower latencies, while others will love the extra bandwidth.

Is that still true for 8+ cores?

I feel like Intel knew what they were doing, when they outfitted their server and workstation platforms with > 2 channels.


If you check the TechSpot link, the quad-channel setups have higher latency. Of course, they're also both one generation older than the 2-channel CPUs.

I am pretty sure you are right. I think beyond 8 cores, dual channel will be worthless and will bottleneck the CPU, especially in workstation and server environments. Even AMD went with quad channel to catch up with Intel in HEDT.
 

bit_user

Polypheme
Ambassador
I am pretty sure you are right. I think beyond 8 cores, dual channel will be worthless and will bottleneck the CPU, especially in workstation and server environments. Even AMD went with quad channel to catch up with Intel in HEDT.
I wouldn't say "worthless". The 16-core Ryzen 3950X somehow manages with only dual-channel memory, but that's not to say it wouldn't be even faster with quad-channel.
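
Back-of-the-envelope again, assuming DDR4-3200 on both sides and pure theoretical peaks: per-core bandwidth on the 3950X actually comes out the same as a hypothetical 32-core quad-channel part, which is probably why it manages:

Code:
# Same rough peak-bandwidth arithmetic as earlier in the thread, DDR4-3200
# assumed for both configurations. Theoretical peaks only, caches ignored.
for label, channels, cores in [("3950X, dual channel", 2, 16),
                               ("32-core HEDT, quad channel", 4, 32)]:
    peak = 3200 * 8 * channels / 1000            # GB/s
    print(f"{label}: {peak / cores:.1f} GB/s per core")
# -> 3.2 GB/s per core in both cases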
 

bit_user

Polypheme
Ambassador

According to whom? If you've got a good source on that, I'd like to see it.

According to this, the Ryzen 3900X shows continued performance improvements up to DDR4-4000.
 

spongiemaster

Admirable
No, it doesn't. It shows DDR4-3600 is the "winner," and I put that in quotes because it beat low-end 2666 by 2.8%. DDR4-4000 has a 50% clock increase over 2666 while beating it by 2.6%. If that's your counterargument for why we should care about DDR5, it's not convincing.

https://www.cgdirector.com/single-dual-quad-channel-memory-threadripper/

Production workstation benchmarks, using only DDR4-2666 because that's what actual workstations use. A lower RAM speed should mean a larger boost from added channels. The results speak for themselves. DDR5 for the home user isn't going to mean squat from a performance standpoint.
 

bit_user

Polypheme
Ambassador
No, it doesn't. It shows DDR4-3600 is the "winner," and I put that in quotes because it beat low-end 2666 by 2.8%.
Try not to be so obtuse. It makes you look like a troll.

Anyway, I'll break it down for you. Different benchmarks stress different things. Some are hardly affected by memory performance, while others are primarily bandwidth-sensitive and yet others are more latency-sensitive. If you look at the whole page, you'll see quite a bit of variation across the different tasks.

In the physics simulation, 4000 is the winner and beats 2400 by 7.4%. In the Game Development test, it pulls a win of 2.4% over the next best entry.

If that's your counterargument for why we should care about DDR5, it's not convincing.
It's a 12-core CPU @ 3.8 GHz base clock speed. What do you think happens when you crank up the core count, clock speed, and IPC even higher?

That's a 1900X: the lowest-end, first-gen Threadripper. It has only 8 cores, and its big L3 caches and Infinity Fabric get in the way. Try that with a 3990X.

And, again, different workloads have different sensitivity to memory bandwidth. Just because those rendering tests don't show much impact doesn't mean other workloads won't.

DDR5 for the home user isn't going to mean squat from a performance standpoint.
Nonsense. You didn't look at any iGPU benchmarks. Gaming-wise, that's where you're going to see the biggest impact of memory performance.

Also, you're ignoring everything besides bandwidth. The voltage drop will make it more power-efficient (meaning longer battery life in laptops and phones), and the density improvements will mean higher memory capacities with fewer, lower-rank DIMMs, which is a cost savings.

And, if ECC is in even the consumer version, it could improve reliability and perhaps further reduce costs by improving yield.
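
To put a very rough number on the voltage point above: the switching component of DRAM power scales roughly with the square of the voltage, so going from DDR4's 1.2 V to DDR5's 1.1 V trims that component by about 16%. Real savings depend on far more than this, so treat it strictly as a ballpark:

Code:
# Ballpark only: dynamic (switching) power scales roughly with V^2.
v_ddr4, v_ddr5 = 1.2, 1.1
scale = (v_ddr5 / v_ddr4) ** 2
print(f"scale factor {scale:.2f} -> ~{(1 - scale) * 100:.0f}% less for that component")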
 
I agree with @bit_user. That test is slightly flawed. Even in the comments, someone asked about a Xeon and, using these results, it was claimed it wouldn't matter. Well, that's not even close to true, as Intel and AMD have very different interconnects, and things that affect AMD don't affect Intel the same way.

I would have preferred to see both platforms, and higher-end CPUs with more cores, in the tests. That's the biggest argument here: more cores will need more bandwidth to keep them all happy, and that should hold so long as all the cores can actually be used. In a workstation scenario that is heavy on RAM use, higher bandwidth should benefit the system when memory has to dump and load data.

In the consumer space, though, I would agree that the extra bandwidth will not bring much benefit, unless consumer applications somehow become vastly better at multithreading and a game, for example, can take advantage of all available cores and needs to move data from RAM to CPU to GPU quickly.
 

spongiemaster

Admirable
In the consumer space, though, I would agree that the extra bandwidth will not bring much benefit, unless consumer applications somehow become vastly better at multithreading and a game, for example, can take advantage of all available cores and needs to move data from RAM to CPU to GPU quickly.

Which is the only thing I was asking about, the consumer space. Yet, for some reason it kept getting pulled off topic to enterprise and years outdated platforms. Intel has hexa-channel platforms. There are use cases for insane memory throughput. I am well aware of that. No matter what the engineers come up with, someone will come up with a use for it. However, I don't give a ____ about any of that, only what will benefit me as an enthusiast user. All the responses up until this one have failed to address that exact point.
 

bit_user

Polypheme
Ambassador
Which is the only thing I was asking about, the consumer space.
No, you asked "Does this even matter besides in enterprise level applications?", which opens up the field to gaming PCs, HEDT, and upper-end mainstream.

Yet, for some reason it kept getting pulled off topic to enterprise and years outdated platforms.
Excuse me? Since when is Ryzen 3900X an enterprise or "years outdated" platform?

only what will benefit me as an enthusiast user. All the responses up until this one have failed to address that exact point.
WTF? If that's all you cared about, then maybe that's what you should've asked about! Instead of asking a broad question about non-enterprise, you should've said "I use a [2|4]-channel [laptop|desktop] CPU and applications X, Y, and Z. Does this have any relevance for me?"

I can only answer the questions which are asked. Don't go blaming me for your own poorly-phrased question!
 

spongiemaster

Admirable
No, you asked "Does this even matter besides in enterprise level applications?", which opens up the field to gaming PCs, HEDT, and upper-end mainstream.


Excuse me? Since when is Ryzen 3900X an enterprise or "years outdated" platform?


WTF? If that's all you cared about, then maybe that's what you should've asked about! Instead of asking a broad question about non-enterprise, you should've said "I use a [2|4]-channel [laptop|desktop] CPU and applications X, Y, and Z. Does this have any relevance for me?"

I can only answer the questions which are asked. Don't go blaming me for your own poorly-phrased question!

You're arguing about what question I was asking. If there is a more textbook example of someone trying to win an internet argument, I'm not aware of it.
 

bit_user

Polypheme
Ambassador
You're arguing about what question I was asking. If there is a more textbook example of someone trying to win an internet argument, I'm not aware of it.
Again, don't be so obtuse, unless you really want people to think you're just here to troll.

All I said was that I tried to answer the question you actually asked. Don't then turn around and criticize me for not answering a different question. By twisting it, perhaps you're just revealing your true motives.
 
See my point about iGPUs. You can find loads of benchmarks that show memory speed has a significant impact on iGPU performance. If that's not relevant to "the consumer space", then I don't know what is.

Slipped my mind. You have to forgive me; I am so used to having discrete graphics that I always forget Intel has iGPUs in every mainstream chip. But yes, higher bandwidth will help those vastly, and if Intel's newer Xe iGPUs can get more bandwidth, it may finally bring decent entry-level graphics performance to a single chip.
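
Rough numbers on why that matters: an iGPU shares whatever the system memory can do, while even an entry-level discrete card has a much wider pool to itself. The GDDR6 configuration below (128-bit bus at 14 Gbps) is just an illustrative example, not any specific card:

Code:
# Shared system-memory bandwidth for an iGPU vs. a hypothetical entry GDDR6 card.
def dimm_bw_gbs(transfer_rate_mts, channels):
    return transfer_rate_mts * 8 * channels / 1000   # 64-bit channels

print("2 x DDR4-3200 :", dimm_bw_gbs(3200, 2), "GB/s, shared with the CPU")
print("2 x DDR5-6400 :", dimm_bw_gbs(6400, 2), "GB/s, shared with the CPU")
print("entry GDDR6   :", 14 * 128 / 8, "GB/s, dedicated to the GPU")
# -> 51.2, 102.4, 224.0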
 