DDR4 frequency vs latency: which is better?

gernstsmit

Distinguished
Feb 16, 2010
Hi there,

Sorry if this is a stupid question, but hopefully it will help others as well.

I was checking out the new generation of Z170 motherboards, Skylake processors and DDR4 RAM when I spotted something that confuses me.

In the past I always preferred RAM that was "standard speed" (1600MHz) but with a low latency, e.g. CL9. But while looking at the DDR4 options available I noticed that the RAM with higher frequencies, which is much more expensive, also has higher latencies. This is counter-intuitive to me. It seems that faster, more expensive RAM should have lower latencies?

I give these as an example:

Corsair Vengeance LPX CMK16GX4M2A2133C13, black low-profile heatsink, 8-layer PCB, 8GB x 2 kit, supports Intel XMP (Extreme Memory Profiles), DDR4-2133 (PC4-17000), CL13, 1.2V, 288-pin, lifetime warranty @ ~$156

VS.

Corsair Vengeance LPX CMK16GX4M2A2666C16, black low-profile heatsink, 8-layer PCB, 8GB x 2 kit, supports Intel XMP (Extreme Memory Profiles), DDR4-2666 (PC4-21300), CL16, 1.2V, 288-pin, lifetime warranty @ ~$202

Why would I want "faster" RAM if it comes with slower latency timings? Does that extra 533MHz of RAM speed really make the overall experience that much faster, enough to completely negate the negative effect of a 3ns increase in latency?

Thanx for any feedback.
 


Thank you for the link.

So from the article it seems that, at least for gaming, and in most other applications as well, higher DDR4 frequencies give little performance increase and in some cases even a performance decrease.

My one doubt is whether this apparent lackluster performance of DDR4 RAM is simply because the software in question was not designed to take advantage of DDR4.

However, for the moment, from a gaming perspective, it seems that forking out a bunch of cash for Skylake and DDR4 makes no sense whatsoever. At least in terms of cost vs performance.

Let's hope things improve over the next few months, with new hardware becoming cheaper and software and firmware getting optimized. But for now the outlook for Skylake and DDR4 seems bleak at best.
 
Nothing has changed: just as with DDR3, DDR4 modules do not differ much across different CAS latencies and/or frequencies. Furthermore, all memory brands are of more or less the same quality. IMO there might be a significant difference for professional competitive overclocking under LN2, though I'm not sure about that.
 


It doesn't.
The timings are relative to the clock frequency.
Actual absolute latency in ns barely changes across the range, but it means that what might be 9 wait states on DDR3 could be 15 wait states on DDR4 for the exact same delay.
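A quick back-of-envelope check (plain Python; the DDR4 numbers are the kits quoted above, the DDR3-1600 CL9 kit is a typical example) shows how little the absolute latency actually moves:

```python
# True latency (ns) = CAS cycles / clock frequency.
# DDR transfers twice per clock, so clock (MHz) = data rate (MT/s) / 2,
# which simplifies to: latency_ns = cas_cycles * 2000 / data_rate_mts.

def true_latency_ns(cas_cycles, data_rate_mts):
    """Absolute CAS latency in nanoseconds for a DDR module."""
    return cas_cycles * 2000 / data_rate_mts

kits = {
    "DDR3-1600 CL9":  (9, 1600),
    "DDR4-2133 CL13": (13, 2133),
    "DDR4-2666 CL16": (16, 2666),
}

for name, (cl, rate) in kits.items():
    print(f"{name}: {true_latency_ns(cl, rate):.2f} ns")

# DDR3-1600 CL9:  11.25 ns
# DDR4-2133 CL13: 12.19 ns
# DDR4-2666 CL16: 12.00 ns
```

All three land within about a nanosecond of each other, despite the CAS number nearly doubling; the DDR4-2666 CL16 kit is actually marginally *lower* latency than the 2133 CL13 kit.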

Crucial have a pretty good summary of this on their website

http://www.crucial.com/usa/en/memory-performance-speed-latency

Why does this matter?

Because even 12ns latency is a long time in CPU-land, so the system memory controller is predicting ahead of time what will be requested and trying to have it ready when the CPU needs it.

If the controller gets those predictions wrong, you'll have to wait for the full latency period to get what's wanted. Somewhat like sequential vs random IO speeds on a hard drive this will have a far greater bearing on your throughput than the quoted maximum speed - and memory requests do tend to jump around a lot on a multitasking system.
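As a rough illustration of why access pattern matters more than peak bandwidth, here is a hypothetical micro-benchmark (a sketch only; Python interpreter overhead and OS noise will blur the absolute numbers, but the sequential walk is the prefetcher-friendly case):

```python
import random
import time

N = 1 << 22                 # ~4M elements, big enough to spill out of CPU caches
data = list(range(N))

seq_order = list(range(N))  # sequential: easy for the prefetcher to predict
rand_order = seq_order[:]
random.shuffle(rand_order)  # random: every access risks a full-latency miss

def walk(order):
    """Sum data[] in the given visiting order, returning (sum, elapsed_s)."""
    t0 = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - t0

seq_sum, seq_t = walk(seq_order)
rand_sum, rand_t = walk(rand_order)

# Identical work either way; only the access pattern differs.
print(f"sequential: {seq_t:.3f}s  random: {rand_t:.3f}s")
```

On most machines the random walk comes out measurably slower even though it reads exactly the same bytes, which is the memory analogue of sequential vs random IO on a disk.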

One of the greatest advances in contemporary computer memory would be getting down to 1ns or lower latency (without drawing a few kW). It's unlikely to happen with DDR due to the nature of how its internals work, but other technologies exist, such as the blisteringly fast cache inside the CPU.

The reason that kind of memory is not used for main system RAM comes down to cost and power consumption, but that can always change as future manufacturing processes mature. Crossbar might eventually get there, if it's ever released.

An additional thing to bear in mind is that the speed of light is approximately one foot per nanosecond (signals travel at roughly 30% of that on a system board due to RF transmission-line effects, so 10cm per nanosecond is a rough rule of thumb), which means time-of-flight becomes a significant contributory component to overall latencies as things get faster.
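To put numbers on that (a sketch using the 10cm/ns rule of thumb above; the ~10cm CPU-to-DIMM trace length is an assumed illustrative figure, not a measured one):

```python
# Signal propagation on a PCB: roughly 10 cm per nanosecond
# (about a third of the ~30 cm/ns speed of light in vacuum).
PCB_SPEED_CM_PER_NS = 10.0

def time_of_flight_ns(trace_cm, round_trip=True):
    """Time for a signal to traverse a PCB trace, in nanoseconds.

    A memory request needs a round trip: the address/command goes
    out, then the data has to come back.
    """
    factor = 2 if round_trip else 1
    return factor * trace_cm / PCB_SPEED_CM_PER_NS

# An assumed ~10 cm CPU-to-DIMM trace costs about 2 ns round trip,
# which is already roughly a sixth of a ~12 ns CAS latency.
print(time_of_flight_ns(10.0))  # -> 2.0
```

That flight time is fixed by physics and board layout, so as cycle times shrink it eats a growing share of the total latency budget.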
 
http://www.howtogeek.com/238990/whats-the-difference-between-ddr3-and-ddr4-ram/ ABSOLUTELY RUBBISH DDR4. Proven slower in real-world applications than DDR3. More bandwidth is great, but when it comes at a cost to latency it is fruitless. Stick with DDR3; a nice CL9 or CL8 will always be faster than CL16 or CL30 despite claims by developers. It has also been shown that single vs dual channel is a lot of rubbish as well, with many reviews showing single channel faster than dual due to lower latencies. The holy grail is clearly lower latencies, and lowering voltage is not a good start to faster RAM... Bring on the CL6 DDR3!
 


If you bother to read the AnandTech article:
http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/7
"Comparing default DDR4 to a high performance DDR3 memory kit is almost an equal contest. Having the faster frequency helps for large frame video encoding (HandBrake HQ) as well as WinRAR which is normally memory intensive. The only real benchmark loss was FastStone, which regressed by one second (out of 48 seconds)."

and try not to necro post.