IBM Beats Intel To 7nm Process Thanks To Silicon-Germanium Transistors, EUV Lithography

Status
Not open for further replies.

Onus

Titan
Moderator
Hmmm, I wonder how well they will tolerate heat; pure germanium is subject to thermal runaway.
7nm makes me wonder if someone sneezing in the next room might knock the equipment out of alignment...
 

shafe88

Distinguished
Would love to see an AMD APU built on a 7nm process. At 7nm they'd be able to reduce power consumption while increasing graphics performance and core counts, giving Intel a run for their money. How cool would it be to have 6- and 8-core APUs for desktops, and mobile APUs that use half the power of current ones, resulting in longer battery life and cooler running?
 

jimmysmitty

Polypheme
Moderator
While this is interesting, the viability is what we have to wonder about. Since it is a first step, it is probably 7nm SRAM, much like the last thing we saw of Intel's 10nm. That is a great step, but it doesn't tell us anything more than that IBM has 7nm SRAM cells, which behave very differently from a massively more complex CPU.

I want to know how they are handling, or how their new materials will handle, the core degradation issue Intel was talking about, since the smaller nodes will suffer more wear and tear due to the smaller traces. Maybe SiGe will handle it better; I know even Intel was looking into new materials beyond 10nm, and SiGe was one of them.

I guess it all depends on when it launches, how well it performs, and what Intel's answer will be. I imagine Intel is already working on 7nm and looking at other materials as well.
 

InvalidError

Titan
Moderator
Demonstrating working transistors on a new process is one thing, making billions of transistors coexist on that process to produce a chip is another.

Intel demonstrated ALUs working at 8-10GHz 10-12 years ago, yet we still do not have practical CPUs operating at more than 5GHz today.

Not all demonstrations turn into practical and economically viable processes.
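The gigahertz wall InvalidError describes is mostly a power story: first-order CMOS dynamic power goes roughly as P ≈ C·V²·f, and pushing frequency higher usually means pushing voltage up too. A minimal sketch, with purely illustrative (made-up) capacitance and voltage figures, not real chip data:

```python
# Rough sketch of the dynamic-power wall: P ~ C * V^2 * f.
# All numbers here are invented for illustration, not measured chip data.

def dynamic_power(c_farads, v_volts, f_hz):
    """First-order CMOS dynamic power estimate: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hz

# Hypothetical chip: 1 nF of switched capacitance at 1.0 V and 4 GHz.
p_4ghz = dynamic_power(1e-9, 1.0, 4e9)  # 4.0 W
# Doubling the clock typically needs a voltage bump as well, say 1.2 V.
p_8ghz = dynamic_power(1e-9, 1.2, 8e9)  # ~11.5 W

print(f"4 GHz: {p_4ghz:.1f} W, 8 GHz: {p_8ghz:.1f} W")
```

Doubling the clock here nearly triples the power, which is roughly why demo-grade 8-10GHz ALUs never turned into shippable 8GHz CPUs.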
 

extide

Honorable
I wouldn't say they've "beaten" Intel quite yet... I mean, who knows, Intel may have plenty of working 7nm parts already. Anyway, for me the real 'winner' is the first one to have mass-produced parts available for sale.
 

jimmysmitty

Polypheme
Moderator


Pretty much.
 

gangrel

Honorable
I think the economic viability is a big issue. Sounds like yields per hour, per etching station, will be low...and the cost of each station, high. Everything is harder...the wafer consistency presumably has to be that much better, for example...because the tolerances for irregularities are that much tighter.

And the other question is, is this even the right path to follow? Are there other solution paths that will be better for the medium term? There are parallels with hard drives; rather than increase platter density, they added more platters, and then they developed RAID.
 

usertests

Honorable
"Would love to see an AMD apu built 7nm process. With 7nm process they'd be able reduce power consumption while increasing graphics performance and increasing the number of cores giving Intel a run for their money."

AMD needs to get to 16/14nm first. You think Intel will just sit back? Everyone is waiting for EUV.
 

gangrel

Honorable
AMD's chips have had higher, to notably higher, power requirements compared to Intel's. The smaller you go, the more of a problem this becomes. I think they're in a significantly inferior position to move down to this size (that is, beyond the fact that they've been a generation or more behind Intel in fabrication techniques).
 

InvalidError

Titan
Moderator

IBM is sharing their process research with GloFo and Samsung, which means GloFo should be in a decent position to catch up with Intel a bit over the next few years. We can only hope Zen will deliver on the hype when AMD does gain access to 14nm next year.
 

husker

Distinguished
I love the graphic showing us that a bar 7 units long is approximately 3 times shorter than a bar 22 units long. What possible purpose can this serve? If this were a story about price drops, would they show us a picture of 7 bills next to a picture of 22 bills to help us understand?
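To be fair to husker's point, a linear bar chart actually understates the change: feature size shrinks linearly, but transistor density scales (to first order) with the square of it. A quick back-of-envelope, with the caveat that marketing node names don't track drawn feature sizes exactly:

```python
# Back-of-envelope: the article's linear bars (7 vs. 22) hide the fact
# that transistor density scales roughly with the SQUARE of feature size.
node_old = 22.0  # nm
node_new = 7.0   # nm

linear_ratio = node_old / node_new           # what the bars show: ~3.1x
density_ratio = (node_old / node_new) ** 2   # ideal density gain: ~9.9x

print(f"linear: {linear_ratio:.1f}x, density: {density_ratio:.1f}x")
```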
 

usertests

Honorable


Folks would love to look at stacks of cash.

But yes, it is a picture for plebs.
 

gangrel

Honorable


While I'll buy that a 22 nm part might cost more, I don't see it being *that* much more in an environment where 22 nm fabrication was mature. You should see notable power consumption reductions and better yields per wafer (eventually) from the smaller size.

That said, I do think the early 7 nm stuff may carry a price premium on this order: partly because of the significant technical issues, and partly because there isn't enough competition unless AMD can show the technical prowess to get down to at least 14 nm. I do think we will see 7 nm... eventually... but the timelines may be optimistic.
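The yield side of that early price premium can be made concrete with the classic Poisson defect-limited yield model, Y = e^(−D·A). The defect densities below are invented for illustration, not actual fab data:

```python
import math

def poisson_yield(defect_density, die_area):
    """Classic Poisson defect-limited yield model: Y = exp(-D * A).

    defect_density in defects/cm^2, die_area in cm^2.
    """
    return math.exp(-defect_density * die_area)

# Hypothetical numbers: a mature line vs. an early bleeding-edge line,
# both making the same 1 cm^2 die.
mature = poisson_yield(0.1, 1.0)  # ~0.90: nine of ten dies are good
early = poisson_yield(2.0, 1.0)   # ~0.14: most dies are scrap

print(f"mature: {mature:.2f}, early: {early:.2f}")
```

With most of an early wafer going in the bin, the good dies have to carry the whole wafer's cost, which is where the premium comes from.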
 

jimmysmitty

Polypheme
Moderator




The 22nm process was developed by Intel for CPUs, not GPUs. Processes are typically specialized for certain applications by each company, which is why NAND flash is at a different node size than CPUs, and why GPUs would have gone to 20nm, not 22nm.

He was also talking about a 7nm GPU, and right now the yield of a 7nm CPU/GPU is probably zero, since this is just SRAM, not an actual CPU.
 

InvalidError

Titan
Moderator

Process geometry size and process specialization (RAM, GPU, CPU, NAND, etc.) are two separate issues. Intel has at least two different 22nm setups: one for desktop chips and one for laptop/SoC chips. The desktop one is tuned for high clock speeds while the mobile one is tuned for the lowest possible power. IIRC, TSMC also has different fab setups for GPUs, tuned for high density and low leakage to accommodate the massive transistor counts at the expense of clock rates. DRAM processes are tweaked for extremely low leakage and high dielectric constant to minimize capacitor cell size. Etc.

The existence of a 20nm process used mostly in conjunction with RAM/NAND is purely coincidental.
 

jimmysmitty

Polypheme
Moderator


That is pretty much what I was getting at: the 22nm process Intel has is not geared towards GPUs, and the current 7nm IBM has is not geared towards anything yet, nor do we know its viability.
 

gangrel

Honorable
Fine. AMD, show me your 14 nm CPU. I don't buy the CPU vs. GPU distinction. If you can do a 14 nm CPU, you can do a 14 nm GPU, it would seem. OK, GPUs have more transistors, but so many of them are duplicates, and that regularity helps fabrication.
 

extide

Honorable


Well, AMD doesn't manufacture chips anymore, so your comment doesn't really make sense. They hire third-party fabs, just like all of the other fabless companies. When a 14nm process is available in high quantities with high yields, we will see 16/14nm CPUs/GPUs on the market.

 

palladin9479

Distinguished
Moderator


If there were low volume, it would certainly cost that much. Actual production cost is often only a big factor in first-generation production; afterwards it's all about R&D and scaling costs. The 28nm node is very mature, meaning the software design libraries are well developed, the production stations have since amortized their capital costs, and the limits of the process are so well known that workarounds can be done with little risk. There is a huge preexisting industrial capacity to produce products on that node, so supply shortages aren't a factor. Ultimately, it's much cheaper to make products on the 28nm node than to try jumping to a new node before it's fully matured.

Now couple that with the fact that most profit isn't realized in super-high-end / high-cost / low-volume products but in mass sales of low-cost / high-volume products. When people don't want to spend more than $150~200 for a CPU, or more than $200~300 for a GPU, it's really hard to justify a design jump that would require people to spend $600~900 USD per GPU just to break even on the costs of developing the technology to make those GPUs.
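That break-even argument fits in one line: units needed ≈ NRE / (price − unit cost), where NRE is the non-recurring engineering cost of bringing a design up on a new node. All dollar figures below are invented purely for illustration:

```python
# Toy break-even calculation: with huge NRE (design + mask costs) on a
# new node, low-volume parts must carry a big price premium.
# Every dollar figure here is made up for illustration.

def breakeven_units(nre_dollars, sell_price, unit_cost):
    """Units needed to recover non-recurring engineering costs."""
    return nre_dollars / (sell_price - unit_cost)

nre = 500e6  # hypothetical $500M to bring a design up on a new node

# A mainstream $250 part over a $100 unit cost needs millions of sales;
# a $900 halo part recovers the same NRE on far fewer units.
mainstream = breakeven_units(nre, 250, 100)  # ~3.3 million units
halo = breakeven_units(nre, 900, 150)        # ~0.67 million units

print(f"mainstream: {mainstream:,.0f} units, halo: {halo:,.0f} units")
```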

So now we have the entire industry waiting for a stable process that's cheap enough that they can all jump on it and spread the R&D costs around. Intel doesn't count, because any developments they make, they keep close to their chest. You're really left with Intel on one side and everyone else, including IBM, on the other.
 
