Scientists Are Stacking Processor Cores on Top of Each Other


americanbrian

Distinguished
My only concern is that the heating effects will be increasingly difficult to manage. Clocking down the chips may be the only way to keep them from overheating, which would still be OK, I guess, but you lose single-threaded speed for presumably higher core counts.

We have already seen that this approach is a mixed success on our desktops. It is kind of exciting, though. It is like the Terminator 2 brain chip.
 

Dangi

Honorable
Mar 30, 2012
192
0
10,690
This is the natural upgrade path for improving processors. Increasing a chip's area isn't effective due to the long signal distances that come with a bigger die.


Heat may not be a real problem. IBM some time ago designed a new system to cool chips by creating nanotubes inside the chip and running water through them, creating a "true" water-cooling system.

More info about these nanotubes at the following link:
http://www-03.ibm.com/press/us/en/pressrelease/32049.wss
 

joytech22

Distinguished
Jun 4, 2008
1,687
0
19,810
Well, cooling would be a HUGE problem, wouldn't it? Unless they use the outer edges of the cores for thermal dissipation, spreading heat through a large plate covering all four edges, which further extends to a larger heatsink of some sort?

Or did I just solve the problem for them?
I don't know... eh
 
How about making a sealed chamber around the stacked chips, with a non-conductive fluid or mineral oil passing over all of the stacked chips and, as you said, small heat sinks on the outer four edges, and pumping it around?
 

tomsreader

Honorable
May 1, 2012
25
0
10,540
What about the cooling? If they're stacked, the lower cores will reach higher temperatures than the ones closer to the IHS.
 

Dragoon21b

Honorable
Mar 11, 2012
17
0
10,510
If we start talking about 3D processors, we have to consider the best model already available... the brain. All it is is a collection of transistors and pathways that run at a very low voltage and are suspended in a non-conductive goo... Just one more small step toward Star Trek.
 

frombehind

Honorable
Feb 18, 2012
351
0
10,810
So if Ivy Bridge has major heat woes with side-by-side cores... how is stacking them like hotcakes (pun intended xD ) going to make them generate LESS heat?
 
The 3D design needs more of a visual architectural approach. Something like a processor-sized vent with ribs. Copper heat pipes could be added to stiffen the design and move heat to the primary heatsink customers place on top. Air could also be blown through this design for cooling. Imagine if the processor were like an old-style heatsink, with one flat layer and several vertical ones. This would require each layer to connect much like the dies of the old Q6600 did back in 2007.

________
| | | |
| | | |
-----------

A six-layer stacked CPU could not possibly vent enough heat to compete in the consumer space.
 

serendipiti

Distinguished
Aug 9, 2010
152
0
18,680
[citation][nom]elcentral[/nom]so twice the power twice the heat.[/citation]
But in the same size... so twice the heat "density". The only real benefit is shorter connection paths = higher speeds.
It is a useful development, but the technology will probably be adopted in a hybrid approach: only parts of the core will be layered, while some structures will remain the same...
 
Guest

Guest
Good, bring on the 100 GHz CPUs. This, with internal CPU Peltier cooling and a special heatsink.
 

jonyah

Distinguished
Sep 18, 2009
43
0
18,530
Glad I'm not the only one here to see the main issue of cooling with these. Surface area is king when it comes to heat dissipation. These chips may be able to be made, but can they actually run?
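To put a rough number on the surface-area point: a minimal convection sketch, Q = h × A × ΔT, with the coefficient and temperature delta assumed, counting only the exposed top faces (a heatsink multiplies the area, but not the side-by-side vs. stacked ratio):

[code]
# Toy convection model: Q = h * A * dT. The h and dT values are
# assumptions; only the area ratio between layouts matters here.

h = 50.0   # W/(m^2*K), assumed effective transfer coefficient
dT = 40.0  # K above ambient, assumed

def max_dissipation(area_cm2):
    """Heat (in watts) a given exposed top area can shed."""
    return h * (area_cm2 / 1e4) * dT  # cm^2 -> m^2

print(max_dissipation(4 * 4.0))  # four 4 cm^2 dies side by side: 3.2 W
print(max_dissipation(1 * 4.0))  # same four dies stacked: 0.8 W
[/code]

Same silicon, a quarter of the top-face dissipation, before the lower dies' insulation even enters into it.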
 

TeraMedia

Distinguished
Jan 26, 2006
904
1
18,990
Why not stack a die with CPU cores on top of cache and IO dies? On the latest/greatest XEON, the L3 cache is 20 MB, which, at more than 6 transistors per bit, translates to almost 1 billion transistors - before you've even gotten around to putting any logic on the chip (rough arithmetic in the sketch after this list). If you put the L3 cache below the CPU cores rather than next to them, you can do three things:
1) Reduce the size per die. Chip-killing flaw rates go up with die size, so smaller dies translate to fewer rejects and cheaper chips.
2) Reduce the distance from CPU core to L3 cache. If the maximum signal distance on one monolithic die is x, the width (or length) of the die, then the maximum signal distance for a two-die sandwich would be perhaps x/2 + z, where z is the vertical length of the vias between dies. Shorter distances translate to lower latency, which can mean the difference between a 3770K and a BD.
3) Modularize and/or increase the L3 cache size.
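A quick back-of-the-envelope check of those numbers in Python (the die width and via length are made-up figures, just to show the shape of the argument):

[code]
# Sanity-check the transistor count and the signal-distance claim.
# 6T/bit is the standard 6T SRAM cell; x and z below are invented.

MB = 1024 * 1024  # bytes

l3_transistors = 20 * MB * 8 * 6  # 20 MB, 8 bits/byte, 6T per bit
print(f"L3 transistors: {l3_transistors / 1e9:.2f} billion")  # ~1.01

x = 20.0  # mm, assumed width of the monolithic die
z = 0.05  # mm, assumed length of a die-to-die via
print(f"monolithic worst-case route: {x:.1f} mm")
print(f"two-die stack worst case:    {x / 2 + z:.2f} mm")  # ~10.05 mm
[/code]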

Next, you could put the IO module at the bottom (assuming it isn't a heat generator... which admittedly it could be). All said and done, you could employ much smaller dies to produce much more powerful CPUs and APUs. Put 512 MB of VRAM on some dies near the bottom of the chip, and you turn a crippled iGPU into a viable one.

Heat generation is one problem. But heat stress might be an even larger one. You have to make it so that the entire chip warms up and cools down evenly and at the same rate, so that you don't get huge shear stresses on those copper vias. Otherwise, they'll just crack and disconnect after a few cycles.
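For a sense of scale on the stress point, here is a minimal sketch using ballpark textbook expansion coefficients for copper and silicon and an assumed 60 K warm-up:

[code]
# Thermal-expansion mismatch between a copper via and the silicon
# around it. CTE values are ballpark figures; dT is an assumption.

cte_copper = 17e-6    # 1/K, approximate
cte_silicon = 2.6e-6  # 1/K, approximate
dT = 60.0             # K, assumed idle-to-load temperature swing

strain = (cte_copper - cte_silicon) * dT
print(f"mismatch strain per cycle: {strain * 100:.3f} %")  # ~0.086 %
[/code]

Nearly 0.1% of shear strain on every warm-up/cool-down cycle is exactly the kind of repeated flexing that fatigues and cracks a thin via.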

I think stacked CPU cores are not a good idea, given the current core densities achieved by Intel and AMD. But I think stacked dies with differentiated purposes could make a lot of sense, if the signal architecture of those vias is done right.
 
About cooling - just because they are stacked doesn't mean they have to be vertical. Imagine a stack of watch batteries. Of course you wouldn't put a "cooler" on top of the stack; you'd flip the stack on its side and cool it lengthwise.
 
[citation][nom]TheZoolooMaster[/nom]This sounds a lot like the 3D microchips that Kurzweil said would be the next step in sticking true to Moore's law.[/citation]
I am 100% with you! It's been a long time coming, but Kurzweil was one of the first to say that it was not only possible, but necessary. Can't wait to see it happen!
[citation][nom]elcentral[/nom]so twice the power twice the heat.[/citation]
Actually, no. The heat problem would not be cumulative; it compounds. The top core would stay plenty cool, but the bottom or middle cores would be insulated AND have active heat sources around them. That means heat issues in the 3- to 10-fold range, not a mere doubling.
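A toy one-dimensional thermal ladder makes the point; every number in it is invented, and the model assumes each die dissipates the same power and that all heat escapes through the top:

[code]
# 1D thermal ladder for a die stack: each die makes P watts, and all
# heat exits through the top into the heatsink. Values are invented.

P = 25.0      # W per die, assumed
R_die = 0.20  # K/W between adjacent dies, assumed
R_sink = 0.25 # K/W from top die to ambient, assumed
T_amb = 25.0  # C

def stack_temps(n_dies):
    # The top interface carries the whole stack's power; each lower
    # interface carries the power of everything beneath it.
    temps = [T_amb + n_dies * P * R_sink]  # top die
    for i in range(1, n_dies):
        temps.append(temps[-1] + (n_dies - i) * P * R_die)
    return temps  # index 0 = top die, last = bottom die

for n in (1, 2, 4):
    print(n, ["%.1f C" % t for t in stack_temps(n)])
# 1 die:  31.2 C
# 2 dies: 37.5, 42.5 C              (bottom rise ~2.8x a single die's)
# 4 dies: 50.0, 65.0, 75.0, 80.0 C  (bottom rise ~8.8x)
[/code]

Even this crude model lands the bottom die's temperature rise squarely in that 3- to 10-fold range.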
[citation][nom]Dangi[/nom]Heat may not be a real problem. IBM some time ago designed a new system to cool chips by creating nanotubes inside the chip and running water through them[/citation]
That is exactly what the article proposes.
[citation][nom]frombehind[/nom]so if ivy bridge has major heat woes with side-by-side cores... how is stacking them like hotcakes (pun inteded xD ) going to make them generate LESS heat?[/citation]
IB does not have a heat issue. At stock voltages it runs as cool as or cooler than SB, while having a ~5% performance increase. The heat issues arise when OCing the chips, which is quite frankly not a concern for 99% of the world. IB does not have a heat issue, it has an OC issue, due to the new die process and 3D gates; as these new processes mature and we can lower the voltage on chips of this size, it will become less of an issue.
[citation][nom]spp85[/nom]Stacking Processor Cores is a good idea, but how do they cool the lowest cores? Heat dissipation would be the biggest problem...[/citation]
That is the several-billion-dollar question, and there are several ways to approach it:
1) Turbo boost the top core for the best single-core performance, and then have TONS of smaller cores throughout the rest of the chip. So instead of the architecture we are used to, with 2-8 traditional cores that can each do anything, we would see something more like the Cell processor: one 'command' core, followed by a bunch of either specialized cores or more homogeneous cores closer to an ARM architecture.
2) You can have a fair amount of physical distance between layers and still have an advantage over a flat chip for signal processing. Thermal pipes or vents could still be crammed between layers for heat dissipation. Joytech22 mentioned the idea of changing the heatsink contact from the top only to the top and edges. I would suggest putting the pin-outs on the edges and having heat spreaders on what have traditionally been the top and bottom.
3) Engineers ultimately need lithography and design tools that let them get away from 2D chip design. The real advantage of 3D chips is not in layering cores on top of each other, but in making each core a 3D structure, so that the inputs and outputs of individual processing 'routes' are closer together, and the physical distance of those 'routes' is shorter. Less distance means less resistance, means lower voltage, means less amperage, means less heat (see the sketch after this list). It also means less latency and faster processing time for instructions (and less material used). However, this is still in the science-fiction realm, as the tools to do it are not really available yet, but it is the direction we will have to take in time.
4) The material we use for chips today (silicon) is not the best at heat dispersal; it was merely readily available at the time, and we have a whole ecosystem built around it which helps keep costs down. Moving to other materials (graphene, molybdenite, even diamond) would go a long way toward better heat dispersal, since silicon is a comparatively poor heat conductor. Moving to something that conducts heat well without conducting much electricity would allow for much better cooling. Even more interesting is the idea of making the chip out of a conductive material and then insulating the routes. IBM did this with a copper processor roughly 20 years ago; it was just a research project to see if it could be done, and was not seriously pursued because it was not practical. Still, it is one of many ideas for making 'cooler' chips that could be stacked.
5) Stacking everything in flat layers is not necessarily the best design. In a 3D space where heat is a concern, a better approach may be a radial hub-and-spoke layout, or constructing the chip with a backplane and fins the way we build our computers, with a motherboard and daughter cards (a backplane controller and connector interface with specialty processors at right angles to it). Imagine the control unit (with integrated northbridge and cache... because let's face it, the northbridge is going to disappear soon) on the backplane, with processing cores perpendicular to it or radiating outward from it.
6) Moving away from electrical processors and toward light-based processing. Very cool, less heat, less power, and super awesome (aka, I have no idea how it works :p )
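To put toy numbers behind point 3's distance chain (all constants are arbitrary; only the scaling relationships matter - wire resistance and capacitance both grow with length, so RC delay grows with length squared, and switched capacitance feeds straight into CMOS dynamic power):

[code]
# Scaling sketch for point 3. Constants are arbitrary; what matters
# is that R and C both grow with route length, so delay ~ length^2,
# and dynamic power ~ C * V^2 * f.

def wire_delay(length_mm, r_per_mm=1.0, c_per_mm=1.0):
    # Elmore-style estimate for a distributed RC wire: ~0.5 * R * C
    return 0.5 * (r_per_mm * length_mm) * (c_per_mm * length_mm)

def dynamic_power(C, V, f):
    return C * V * V * f  # classic CMOS switching power

for scale in (1.0, 0.5):  # flat route vs. a 3D route half as long
    d = wire_delay(10.0 * scale)
    p = dynamic_power(C=scale, V=1.0, f=1.0)
    print(f"relative length {scale}: delay {d:5.1f}, power {p:.1f}")
[/code]

Halving the route cuts the RC delay to a quarter and the switched capacitance (and therefore heat) in half, before you even touch the voltage or the clock.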


Anywho, sorry for rambling. This is something I have dreamed about since I was a kid, and it is super exciting to see it start to become reality! It will be very interesting to see what the market comes up with to solve these space and heat problems. So exciting!
 

lamorpa

Distinguished
Apr 30, 2008
1,195
0
19,280
[citation][nom]mayankleoboy1[/nom]waiting for the time when these "still academic, not ready for production" technologies do get ready for mass-production....[/citation]
The part you are wondering about is the development and commercialization time, which is an inherent part of turning academically based research into a viable product. Yes, you hit the nail on the head: this is the time you are waiting for, since (even just by definition) development takes time. Does this explain it sufficiently?
 