As cool as this is (and it is pretty cool), memory speed is not a major bottleneck for future computing. I wonder if this tech will ever get cheap enough, or gain enough support, to really get used in anything. It seems like the regular updates to DDR standards, and the slow move of GDDR into the mainstream (good move, Sony!), will be able to keep up with CPUs' and GPUs' processing capabilities going forward.
Still, I am really stoked to see stacked architectures starting to get somewhere. To think that a year ago everyone was talking about how it would be impossible, and in this week alone there have been articles about two companies starting stacked implementations. Once you get power consumption and leakage low enough, heat becomes less of an issue, so you can stack at least a few layers and still get adequate heat dissipation. I can't wait to see what this kind of stacked electronics brings about!

It is the holy grail for SoC-style computing because you can fit more stuff into essentially the same footprint. It also acts as a way around the latency and timing issues involved in many-core CPU designs, because you can put the I/O for a lot of cores physically closer together, which should open the way for 20+ core designs. Imagine a traditional dual- or quad-core design for day-to-day work, paired with something like Knights Corner for programs optimized for many-threaded CPUs, where all of the cores are tiny, simple, low-power cores, but the sheer volume of them makes for impressive compute capacity. Maybe that is where this new memory tech helps? Something where you are feeding data to tens or hundreds of cores rather than the usual 4-16.