IBM Teams Up With ARM for 14-nm Processing

[citation][nom]dalauder[/nom]Then Carbon will allow us to go near 40 to 75GHz on a 5nm process.[/citation]

It's not so easy to increase the clock frequency. The power consumption per gate has to be reduced in some other way. This is where adiabatic logic looks promising.
 
[citation][nom]jimmysmitty[/nom]The end says Intel but I am sure its IBM. And I don't see 14nm coming out soon. Intel will be pushing 22nm at the end of 2011, beginning of 2012. Then they will move to 18nm I believe is what they said was next.[/citation]
It's a lot easier to shrink an ARM chip: first, it runs at a lower voltage and frequency; second, it has a lot fewer transistors.
 
[citation][nom]Thor[/nom]I really don't understand wht IBM or Intel have not buy ARM again.Oh I know: Bigger mean slower to think. Not able to imagine the future little bit.Smile[/citation] I don't think Intel would be allowed to buy ARM even if it wanted to; it would create both antitrust and monopoly issues.

IBM would have an easier time accomplishing that, and it would be a better fit anyway, which I could see happening. I could also see Nvidia and ARM merging, or all three.

In any case, IBM just added more firepower to ARM's weaponry against x86, further ensuring that ARM is the wave of the future.

Intel has got to be getting worried now; between Microsoft, Nvidia, IBM, and ARM's design approach, their stronghold might be overthrown sooner rather than later.
 
[citation][nom]hardcore_gamer[/nom]Its not so easy to increase the clock frequency.The power consumption / gate should be reduced in some other way.This is where adiabatic logic looks promising.[/citation]

I personally don't think we will ever be capable of creating a process that results in zero energy lost as heat. Close to zero, yes, but until we find an element that offers zero ohms of resistance to an electrical current at room temperature (or at a temperature that can be reliably maintained in a PC), it simply isn't possible.

As for those clock speeds of 50+ GHz: imagine the amount of power that would have to be supplied to 12 cores running at 75 GHz! That's a TDP of 5855 W by my math. (This is based on comparisons with current high-end CPUs, namely Intel's Core i series on a 32 nm process, so it isn't an accurate representation of what we will actually see if this level of processing power ever arrives.)
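For what it's worth, a figure in that ballpark falls out of a naive linear extrapolation from a 32 nm six-core part; the baseline chip (roughly a Core i7-980X class CPU at 3.33 GHz and 130 W) is my assumption, not something stated above, and real dynamic power would also scale with voltage squared:

```python
# Naive TDP extrapolation: assume power scales linearly with
# core count and clock frequency. This is a rough upper bound,
# not a real power model.

BASE_TDP_W = 130.0      # assumed baseline: 32 nm six-core Core i7 class
BASE_CORES = 6
BASE_CLOCK_GHZ = 3.33

TARGET_CORES = 12
TARGET_CLOCK_GHZ = 75.0

scaled_tdp = BASE_TDP_W * (TARGET_CORES / BASE_CORES) \
                        * (TARGET_CLOCK_GHZ / BASE_CLOCK_GHZ)
print(f"Estimated TDP: {scaled_tdp:.0f} W")  # ~5856 W
```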
 


Can you tell me where exactly it says that? Just curious.
 
9 nm is the point at which you can just barely squeeze in enough atoms to make a transistor. But that's with the materials we use now. I'm sure some big, awesome breakthrough is right around the corner...
 
[citation][nom]thefog101[/nom] As for those clock speeds of 50+ Ghz, Imagine the amount of power that would have to be supplied to 12 cores running at 75Ghz! Thats a TDP of 5855W according to my logic (This is based on comparisons with current high end CPU's, namely the i series of Intel processors which are currently using a 32nm construction. therefore it isn't a accurate representation of what we will be looking at in the circumstance that this level of processing power does eventually come into play).[/citation]

If you look at Pentium 1 processors, they used about 12 watts. Now we regularly use 130 W. That's roughly a 300x+ performance increase ([4000 MHz / 150 MHz] x [8 threads / 1 thread] x [200% per-cycle efficiency] = 426, and assuming only a 200% per-clock-cycle efficiency increase is conservative). And this came with only a 10x increase in energy usage, due to a lot of things, but shrinking the manufacturing process greatly reduced energy requirements. So if we increase our performance by 60x ([50 GHz / 5 GHz] x [12 cores / 4 cores] x [200%] = 60), it is reasonable to assume that our power consumption increase will be more on the order of 2x ([60x increase / 300x increase] x [10x power increase] = 2). So we're talking more about 280 W processors for super-powerful hubs.
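That scaling argument can be sketched numerically; all the inputs are the rough estimates from the post, not measured data:

```python
# Back-of-envelope scaling: compare historical performance-vs-power
# growth (Pentium 1 era -> modern) against a projected jump to a
# hypothetical 12-core, 50 GHz part.

# Historical step: Pentium 1 -> modern high-end CPU
hist_perf = (4000 / 150) * (8 / 1) * 2.0   # clock x threads x 2x per-cycle efficiency
hist_power = 130 / 12                       # ~12 W -> ~130 W, roughly 10x

# Projected step: modern quad-core -> 12 cores at ~50 GHz
proj_perf = (50 / 5) * (12 / 4) * 2.0       # = 60x

# Assume power grows at the same fraction of the performance gain
# that it did historically.
proj_power = (proj_perf / 300) * 10         # ~2x

print(f"historical perf gain: {hist_perf:.0f}x")    # ~427x
print(f"projected power gain: {proj_power:.1f}x")   # 2.0x
```

The punchline is that a 2x power increase on a ~130-140 W part lands in the 260-280 W range the post arrives at.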

To be honest, I don't see every home having a CPU of this power, just like not everyone has a Core i7 at home, and only half the people who have them need them (I don't need mine). If everyone has a CPU of half that power (like an i5), we'll be where we are today with 140 W TDP processors.
 
I envision that it may become more cost effective to move to larger circuits using Gallium Arsenide (GaAs) or similar.

The gain in clock speed makes up for the increase in lithography feature size. GaAs can switch about 100 times faster (250 GHz or so).

The ARM design has a slight advantage if they go in this direction.
 
Oh, and a better surface area contact ratio and/or a better contact area to volume ratio will assist with cooling.

It is not unreasonable to imagine heat-sink designs in home computers capable of cooling a 140 W TDP; IBM is already doing 300 W TDP for some processors in its mainframes!
 
I'm so tired of hearing about graphene replacing silicon. Graphene transistors don't have distinct on and off states. Graphene will replace some parts, but not all; it's better suited to RF applications.

The long and short of it is that we're hitting a wall this decade, and more effort will go toward consolidation and other processing advances rather than just shrinking transistors and tweaking materials (as with previous die shrinks, the switch from aluminum to copper interconnects, high-k gate dielectrics, low-k interconnect dielectrics, etc.).
 