Hybrid Memory Cube Consortium Establishes a Global Standard for 320 GB/s Memory

Guest
Imagine combining this with the technology Volta is using to stack DRAM. I wonder how much bandwidth you could get then.

Still, this is impressive, very much so. But I wonder what's going to hit the regular market first, this or DDR4?
 
As cool as this is (and it is pretty cool), memory speed is not a major issue for future computing. I am wondering if this tech will ever get cheap enough, or gain enough support, to really get used in anything. It seems like the regular updates to DDR standards, and the slow move to mainstream GDDR (good move, Sony!), will be able to keep up with CPUs' and GPUs' processing capability moving forward.

Still, I am really stoked to see stacked architecture starting to get somewhere. To think that a year ago everyone was talking about how it would be impossible, and now in this week alone there have been articles about two companies starting stacked implementations. Once you get power consumption and leakage low enough, heat becomes less of an issue, so you can stack at least a few layers and still get adequate heat dissipation. I can't wait to see what this kind of stacked electronics brings about! It is the holy grail for SoC-style computing because you can fit more stuff in essentially the same footprint. It also acts as a way to get around the latency and timing issues involved with many-core CPU designs, because you can put the IO for a lot of cores in a physically closer area, which should open the way for 20+ core designs.

Perhaps have a traditional dual- or quad-core design for day-to-day work, and then something like Knights Corner for programs that are optimized for many-threaded CPUs, where all of the cores are tiny, simple, low-power cores but the sheer volume of them makes for impressive compute capacity. Maybe that is where this new memory tech helps? Something where you are feeding information to tens or hundreds of cores rather than your usual 4-16 of them.
 
[citation][nom]athulajp[/nom]Imagine combining this with the technology Volta is using to stack DRAM. I wonder how much bandwidth you could get then. Still, this is impressive, very much so. But I wonder what's going to hit the regular market first, this or DDR4?[/citation]
DDR4 is due out this year and is already in production. We should see consumer chips start supporting it with the release of Broadwell next year.
Personally, I think DDR4 is going to have a short lifespan. We have finally hit a point where the average consumer can cram way more RAM into their system than they will ever practically need for the life of the system. I am not saying that we will never need 16GB-32GB of RAM in a home or gaming computer... just that we will not need it within the useful life of today's equipment. With that in mind, I think it would make a lot more sense to go Sony's route, with a central pool of super-high-speed memory (be it XDR or GDDR) that can be used by the system, iGPU, or GPU, and then have either no RAM or just a little bit of insanely fast RAM as a cache on the actual units. 8GB of GDDR would cost a pretty penny to put in a computer system, but for enthusiasts it would be well worth the money, and the cost would go down if it became more commonly used.
I know there is practically 0 chance of that ever happening... but it is probably more likely than this new tech getting off the ground.
 

hannibal

Distinguished
It depends a lot on how many contact pins this new memory type demands. If it can achieve higher speeds with fewer contacts, it will become cheaper to produce and also quite useful in mobile environments, where space is always an issue.
 

dark_knight33

Distinguished
[citation][nom]vaughn2k[/nom]If desktop PCs would benefit more from this, then desktop PCs definitely won't die; there is still a lot of room for improvement![/citation]

The myth that the desktop PC will 'die' is nothing more than FUD. Long live the PC!
 

InvalidError

Titan
Moderator
[citation][nom]CaedenV[/nom]Personally, I think DDR4 is going to have a short lifespan. We have finally hit a point where the average consumer can cram way more RAM into their system than they will ever practically need for the life of the system.[/citation]
The main advantage of DDR4 is clock speed, not size. DDR4 "supports" twice the memory density of DDR3 mainly because the smallest DRAM size has doubled, so the size descriptions have been bumped up one notch.

With Broadwell's IGP promising 4-5X the HD4000's performance, DDR4's ~3.2GT/s will be very much welcome.

[citation][nom]vaughn2k[/nom]If desktop PCs would benefit more from this, then desktop PCs definitely won't die; there is still a lot of room for improvement![/citation]
Stacked dies with (ultra-)short-range interconnects are better suited for eDRAM-like applications where the memory chip gets mounted on the same substrate as whatever it talks to... you could see future APUs/GPUs with a few of those chips mounted directly on the CPU/GPU substrate for the frame buffer, or possibly stacked with the CPU/GPU die itself.
 

Vorador2

Distinguished
I guess the main problem for this technology is yield. Such a complicated structure with transistors stacked in layers will be hard to reliably build with current technology.

Still, it would solve some of the problems that high speed memory development is facing now.

By the way, don't expect to use GDDR as main memory, because it suffers from high latency. There's a reason why it's not used for CPUs anywhere.
 

blubbey

Distinguished
[citation][nom]wanderer11[/nom]Wouldn't 320 GB/s be 200 times DDR3 not 20 times? 320/20=16GB/s. DDR3 1600MHz is 1.6 GT/s.[/citation]
Isn't DDR3 bandwidth ~10/11GB/s?
 
[citation][nom]wanderer11[/nom]Wouldn't 320 GB/s be 200 times DDR3 not 20 times? 320/20=16GB/s. DDR3 1600MHz is 1.6 GT/s.[/citation]

GT/s is not the same as GB/s, and in the case of DDR3, the GT/s figure is per bit of the interface, which is 64 bits wide per channel. DDR3-1600, for example, works out to 12.8GB/s per channel; DDR3-2400 is 19.2GB/s. With dual-channel memory and DDR3-1333 to DDR3-1600 being common, most modern CPUs have about 22GB/s to 26GB/s of maximum theoretical bandwidth, with realistic bandwidth then depending on the memory controller.
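
To make the conversion concrete, here is a minimal sketch of the arithmetic (assuming the usual 64-bit channel; the function name and figures are just for illustration, and these are theoretical peaks):

[code]
# Peak theoretical DDR bandwidth: transfers per second x bytes per transfer x channels.
# Assumes a 64-bit (8-byte) channel; purely illustrative, real-world throughput is lower.
def peak_bandwidth_gbs(mega_transfers_per_s, bus_width_bits=64, channels=1):
    bytes_per_transfer = bus_width_bits / 8
    return mega_transfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gbs(1600))              # DDR3-1600, single channel -> 12.8 GB/s
print(peak_bandwidth_gbs(2400))              # DDR3-2400, single channel -> 19.2 GB/s
print(peak_bandwidth_gbs(1600, channels=2))  # DDR3-1600, dual channel   -> 25.6 GB/s
[/code]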
 
[citation][nom]InvalidError[/nom]Depends on how wide the interface is... 12.8Gbps per channel for 1600MT/s DIMMs; that's 51.2Gbps for a quad-channel CPU.[/citation]

12.8GB/s, not 12.8Gb/s, for DDR3-1600 per channel. A lot of memory companies and others down the supply chain say Gb/s, but that's a big mistake for memory in most contexts where it's used.
 

InvalidError

Titan
Moderator
[citation][nom]blazorthon[/nom]12.8GB/s, not 12.8Gb/s, for DDR3-1600 per channel. A lot of memory companies and others down the supply chain say Gb/s, but that's a big mistake for memory in most contexts where it's used.[/citation]
I obviously meant GB/s.

I don't know about you, but for me, typing multiple capitals in a row requires extra concentration to override the single-capital reflex from normal writing. It slips by easily when tired.
 

deksman

Distinguished
PC is an acronym for 'Personal Computer'.

It's more likely that the desktop is on its way out.
Technology is being reduced in size, and what you have on the market now is not a real reflection of our technological capabilities and latest scientific knowledge (far from it). But as prices of technology come down, newer technology replaces the old faster... and therefore the 'revisions' you see appear faster... although I would prefer they made quantum leaps on a regular basis (we certainly have the means and the know-how to do it, but the monetary system prevents it).
 

rantoc

Distinguished
[citation][nom]Vorador2[/nom]By the way, don't expect to use GDDR as main memory, because it suffers from high latency. There's a reason why it's not used for CPUs anywhere.[/citation]

Lies! The PS4 has it, and it's "perfect" according to Epic. /end sarcasm
 

alextheblue

Distinguished
[citation][nom]CaedenV[/nom]It seems like the regular updates to DDR standards, and the slow move to mainstream GDDR (good move, Sony!), will be able to keep up with CPUs' and GPUs' processing capability moving forward.[/citation]Sony wasn't the first to do this (shared GDDR for both CPU and GPU). Look at the Xbox 360. Gee, I wonder where they got the idea from... Anyway, it turns out DDR3 provides enough bandwidth to keep conventional x86 CPU cores pretty well fed (especially at 1866+ speeds with reasonable buses), so I don't know if it is necessary to go all GDDR with Jaguar cores. But we'll see.
[citation][nom]CaedenV[/nom]Perhaps have a traditional dual- or quad-core design for day-to-day work, and then something like Knights Corner for programs that are optimized for many-threaded CPUs, where all of the cores are tiny, simple, low-power cores but the sheer volume of them makes for impressive compute capacity. Maybe that is where this new memory tech helps? Something where you are feeding information to tens or hundreds of cores rather than your usual 4-16 of them.[/citation]What you've described is not really all that dissimilar to a modern GPU. For all but the rarest of computing cases, you don't need Knights Corner. We don't have separate x87 companion FPUs anymore. PhysX failed to really catch on as a dedicated chip. A CPU and a modern GPU are pretty much good enough for anything you would need to do, and they're both improving at a pace where we are usually waiting for software to catch up.

That's the REAL problem: software support. It's not the hardware. For example, getting software developers to support OpenCL or DirectCompute is like pulling teeth. A lot of them have really been dragging their feet. We need to tap into the hardware we've got before we decide to add any more dedicated chips to the mix.
 

InvalidError

Titan
Moderator
[citation][nom]alextheblue[/nom]Sony wasn't the first to do this (shared GDDR for both CPU and GPU). Look at the Xbox 360. Gee, I wonder where they got the idea from...[/citation]
Another more recent example of x86 using GDDR5: Xeon Phi.

One of the major issues with using GDDR5 on PCs is that GDDR5 signaling is intended for fixed memory configurations soldered directly to the PCB. A good chunk of what enables GDDR5 to run 2-3X faster than DDR3 is the lack of CPU/GPU socket, DIMM slot interface and associated PCB between the GDDR5 dies and the CPU/GPU.

I doubt many enthusiasts would be willing to make that sacrifice (soldered CPU+RAM) to get GDDR5 in their gaming PC... but with Haswell/GT3, I think we are already sort of going there. It may only be a matter of a few more years before Intel decides to put enough eDRAM or equivalent on their CPUs to forgo conventional system RAM altogether for most applications.
 