Now this is exciting news! Back in the '90s, when die shrinks were becoming the popular way to deliver performance improvements, everyone was talking about how there is a limit to how small you can really go (at the time the thinking was that the limit was around 20nm). The thought was that once die shrinks ran out, we would either have to move to 3D or layered structures, or abandon the way we view tech entirely and jump to 'something else'. I was a kid at the time, so now as an adult I have a strange fascination with this era we are entering, where die shrinks matter less and less.
I think this is going to be huge for storage, especially portable storage, so that we can hopefully start getting 240GB+ on phones and 1TB+ on tablets.

CPUs and GPUs, though, are going to have some serious challenges moving to this kind of tech. Not only are the structures a whole lot more complicated, they also produce a lot of heat. I think Intel was really hoping that things like Knights Corner and the 'many core' designs would take off, because each individual core would be much simpler and give off much less heat, and workloads could be distributed so that once a core got warm, its load could move to a different core, keeping the entire unit relatively cool. If they could accomplish that, each core could sit perpendicular to a back-plane, making for an insanely dense processing solution. But the fact of the matter is that normal workloads only use one or two cores, and rarely up to four, so the many-core design does not work for most end users. A truly 3D CPU or GPU on current technology would simply put out too much heat to be practical yet, but I am sure they will think of something before too long. I mean, they really have to.
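Just to illustrate the kind of thermal-aware load migration I mean, here is a minimal sketch of the idea: a busy core heats up, idle cores cool down, and the work hops to the coolest core once a threshold is crossed. All the names and numbers (heat rates, the 70°C threshold, the 16-core count) are made up for illustration, not anything Intel actually shipped.

```python
# Toy simulation of thermal-aware load migration across many simple cores.
# All parameters below are hypothetical, purely to show the scheduling idea.

from dataclasses import dataclass

@dataclass
class Core:
    temp_c: float = 40.0   # current temperature of this core
    busy: bool = False     # whether this core is running the hot task

HEAT_PER_TICK = 3.0        # assumed heating rate while busy
COOL_PER_TICK = 1.0        # assumed passive cooling rate while idle
MIGRATE_AT_C = 70.0        # migrate the task once a core gets this warm

def step(cores: list[Core]) -> None:
    """One time step: heat the busy core, cool the idle ones,
    and move the task to the coolest core if the busy one is too warm."""
    for core in cores:
        if core.busy:
            core.temp_c += HEAT_PER_TICK
        else:
            core.temp_c = max(25.0, core.temp_c - COOL_PER_TICK)

    busy = next(c for c in cores if c.busy)
    if busy.temp_c >= MIGRATE_AT_C:
        coolest = min(cores, key=lambda c: c.temp_c)
        busy.busy, coolest.busy = False, True   # hand the work off

if __name__ == "__main__":
    cores = [Core() for _ in range(16)]
    cores[0].busy = True
    for _ in range(100):
        step(cores)
    print("max core temp after 100 ticks:",
          round(max(c.temp_c for c in cores), 1), "C")
```

The point is that no single core ever stays hot for long, so in principle you could pack them much more densely, which is exactly why the low utilization of real desktop workloads undercuts the whole scheme.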