Adata Also Announces Its DDR4 Overclocking Memory

Status
Not open for further replies.

heero yuy

Distinguished
Jul 25, 2010
494
0
18,810
CL17? WHAT THE HELL?

Well, DDR4 is getting off to a great start... higher bandwidth (though nothing faster than what can already be found in DDR3, and I doubt the prices will be much better) and crappy timings. You could probably get DDR3-3000 with lower timings than this.
 

InvalidError

Titan
Moderator

But not at 1.2V... or even 1.5V.

The chip-making processes used to achieve high-density DRAM are notoriously bad at high-speed work: making smaller DRAM cells requires a substrate with a higher dielectric constant and lower leakage, which translates into FETs with higher gate capacitance and lower electron mobility, making them slower to switch. And with less voltage swing to help those transistors switch, latencies go up to give them more time to do so.

Higher bandwidth memory with somewhat worse latency is still win-win for IGPs.
 

koolkei

Honorable
Jul 24, 2013
17
0
10,510
I really feel this is too early... I mean, the technology hasn't matured enough. In a year we'll probably see DDR4-3200 as the normal high-end RAM, and the overclocked versions will have much higher frequencies. That's just what I think, though.
 

Shneiky

Distinguished
2800 MHz at CL17... still beaten by 1600 MHz at CL9 in productivity software. I hope we at least see 1866 or 2133 with CL9 timings when this technology matures. (For the record, productivity software is much more sensitive to latency; that's why I'm so fixated on it.)
 

boller

Reputable
Aug 7, 2014
5
0
4,510
So, if I've got this right, a 6-core Ivy Bridge-E with good DDR3 sticks attached might beat the crap out of a 6-core Haswell-E... and for long enough that Broadwell-E will pop up before DDR4 is good enough. Interesting...
 

InvalidError

Titan
Moderator

Latency in terms of clock cycles gets worse with every generation because real latency, in actual nanoseconds, is improving much more slowly than data rates are: DDR1-400 CL3 is 15 ns of latency, while DDR3-2133 CL10 is 9.4 ns.

Net DRAM latency has improved by about 33% over the past decade while bandwidth has quintupled.
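The cycles-to-nanoseconds conversion behind those figures can be sketched in a few lines of Python (a hypothetical helper, not from the thread; DDR's command clock runs at half the quoted MT/s transfer rate):

```python
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    # DDR transfers data twice per clock, so the command clock
    # runs at half the quoted transfer rate (in MHz)
    clock_mhz = transfer_rate_mts / 2
    return cas_cycles / clock_mhz * 1000  # cycles / MHz -> ns

print(round(cas_latency_ns(400, 3), 1))    # DDR1-400 CL3   -> 15.0
print(round(cas_latency_ns(2133, 10), 1))  # DDR3-2133 CL10 -> 9.4
```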
 

razor512

Distinguished
Jun 16, 2007
2,134
71
19,890
CL17? WHAT THE HELL?

well DDR4 is getting off to a great start... higher bandwidth (but nothing faster than what can already be found in DDR3 and i doubt the prices will be much better) and crappy timings... (you could probably get ddr3 3000 with lower timings than this)


The timings are high so that if the laws of physics were to ever change, you will have time to manipulate the electrons by hand in between RAM operations. :)
 

Shneiky

Distinguished
@InvalidError

The article gives 2133 MHz with CL15 timings. The 2133 CL10 you used to get 9.4 ns works out to 14.1 ns for 2133 CL15, while my good old 1333 CL9 sits at 13.5 ns and 1600 CL9 at 11.2 ns. Even one of the pricey 1333 CL7 kits does better at 10.5 ns; the 2133-15 has about 34% more latency than a 1333-7.

As I said, I hope the technology matures. I was super hyped about DDR4 (I am waiting for DDR4 on the mainstream socket to upgrade my 2700K, well, maybe... if it is worth it), but apparently for now the only benefits are higher density and lower voltage, which in a typical desktop does not matter; the power reduction is in the low single-digit watts. I hope DDR4 matures by the time Skylake hits and we can get some decently priced 2133 CL9 kits of 2x8 GB or so.
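Those kit comparisons all come down to the same arithmetic: CAS cycles divided by the command clock (half the transfer rate). A small Python sketch, with a hypothetical helper name, reproduces them:

```python
def cas_ns(mts, cl):
    # command clock is half the transfer rate (double data rate)
    return cl / (mts / 2) * 1000  # cycles / MHz -> ns

# 2133-15, 1600-9, 1333-9 and 1333-7:
# roughly 14.1, 11.25, 13.5 and 10.5 ns respectively
for mts, cl in [(2133, 15), (1600, 9), (1333, 9), (1333, 7)]:
    print(f"{mts} CL{cl}: {cas_ns(mts, cl):.1f} ns")
```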
 

InvalidError

Titan
Moderator

You can get 2133-9 DDR3 right now for about the same price as 1600-9 and in 1.5V grade at that. Just get a Skylake board with DDR3 slots instead of DDR4 if timings bother you that much since Skylake is supposed to support both.

High-latency DDR4 would make more sense in laptops, tablets and phones, where shaving 500 mW on memory while bumping bandwidth by 30-50% would be a big deal. Latency does not matter much to GPUs and IGPs.
 

Steve Simons

Honorable
May 31, 2014
105
0
10,710
Seems like the basic engineering is designed with the tablet/phone market in mind. Lower voltage = less heat and less power consumption which = longer battery life. Of course, they need to come a long way before they are small enough to fit into said devices, but I like where it's going.
 

InvalidError

Titan
Moderator

A DDR4 die on a given process and capacity is practically the same size as a DDR3 die on the same process and capacity... maybe a few microns bigger due to the extra control wires and logic for the doubled internal bank count. Add the packaging around the die and the minuscule size difference vanishes completely. DDR4 does not need to get any smaller to be worth considering in mobile devices, because the size difference vs DDR3 is effectively nonexistent.

The main problems for mobile are the lack of mobile chips that actually support DDR4 and the currently much higher cost of DDR4 DRAM chips: if you check electronic parts distributors like Arrow, a 4 Gbit DDR3 chip goes for ~$5 while DDR4 is ~$20, so DDR4 currently carries a 300% price premium on the open parts market.
 

Shin-san

Distinguished
Nov 11, 2006
618
0
18,980
I thought this would happen with DDR4. I got a DDR2 system as DDR3 was coming out, and the DDR3 speeds at the time weren't that much higher.

Also, a DDR4 advantage is that those speeds are standard speeds. DDR3-2133 is a JEDEC standard, but I think anything higher is an Intel standard. So starting out at DDR4-2133 isn't too bad.
 

adamsunderwood

Honorable
Jul 18, 2012
25
0
10,540
DDR4 is already obsolete with HMC technology offering more than 18x the bandwidth. With HMC expected to begin mass production this year and GPUs to start featuring it possibly as early as 2015, I get the feeling DDR4, as a standard, is DOA and not worth the investment. (Not to mention you also have numerous innovations in memristors, ReRAM and PCM, that may very well make that whole computational model obsolete before DDR4 ever has a chance to reach maturity.)
 

eldragon0

Honorable
Oct 8, 2013
142
0
10,690
DDR4 is already obsolete with HMC technology offering more than 18x the bandwidth. With HMC expected to begin mass production this year and GPUs to start featuring it possibly as early as 2015, I get the feeling DDR4, as a standard, is DOA and not worth the investment. (Not to mention you also have numerous innovations in memristors, ReRAM and PCM, that may very well make that whole computational model obsolete before DDR4 ever has a chance to reach maturity.)

http://en.wikipedia.org/wiki/Hybrid_Memory_Cube


Where the heck are you getting your info?
 

Steveymoo

Distinguished
Jan 17, 2011
227
0
18,680
OK, sure, these chips have higher bandwidth and lower voltage, but also an increase in latency. Higher bandwidth: positive. Lower voltage: great, but not really necessary; you will save less than a watt of power per stick. Higher latency: not acceptable for lots of small, scattered accesses. I suppose benchmarks will prove me wrong, but I suspect DDR4 will not improve performance in all applications.
 

InvalidError

Titan
Moderator

Some tasks like image/video editing and IGPs are better off with more bandwidth while others like parsing/compiling code are more sensitive to latency due to high concentrations of conditional branches and non-linear memory access patterns. Most software is somewhere between those extremes.

DDR4 will be able to beat current memory in all applications after latencies come down to the same 9-10ns current DDR3 can do.
 

Damon Palovaara

Honorable
Oct 14, 2013
126
0
10,710
As high-bandwidth, higher-latency memory becomes more mainstream, low-latency memory won't matter as much, because games and software will be better coded to take advantage of the higher bandwidth. Actually, games are already getting better at exploiting bandwidth (Tom's should do a new benchmark on it). Just remember we used to have 4-4-4-12 timings on our DDR2 modules.
 

InvalidError

Titan
Moderator

The number of cycles is unimportant; what matters is what they translate to in actual time. DDR3-2133 at 10-10-10 is faster than DDR2-800 at 4-4-4: 9.4 ns for now-common DDR3 vs 10 ns for your super-high-end DDR2.

Regardless of how low or high memory latency might be, any performance-critical bit of code (critical enough that it needs to be hand-tuned) that can take advantage of more linear memory access patterns will do so, because that coding style works better with caches and the CPU's prefetching mechanisms anyway. There used to be a time when software developers optimized code around their target CPU's cache sizes; now most developers simply trust the compiler and libraries to do a good-enough job.
 