GlobalFoundries Accelerates Roadmap: 14nm Chips in 2014

Status
Not open for further replies.
Guest
Exactly, hector, and what blazorthorn surprisingly doesn't know is that they DO have two teams working on die shrinks. They don't start 14nm after getting 22nm out; they started 14nm two years ago to be ready in 2014. Intel has built many fabs and has the ability to do simultaneous development. It just isn't as easy as simply shrinking the die. There are all sorts of design issues that have to be addressed with every die shrink, and materials science problems often arise, among others. Intel, in fact, according to them, is already researching nodes beyond 14nm in its labs for 2016 and beyond. My guess is that they are pretty much finished with the 14nm process and are producing experimental DRAM or even NAND modules, among other things, to ensure the process is working correctly before they put Broadwell into full production in 2013.

I'm not so sure GF is anywhere near that close, since just a year ago they decided to go gate-last and only recently (as in two years ago) decided to go FinFET with the 14nm shrink (even if it's planar). They have serious obstacles to overcome, in my opinion. I still think 2014 is very optimistic, but I will wait and see.
 
[citation][nom]TheTruthIsss[/nom]Exactly, hector, and what blazorthorn surprisingly doesn't know is that they DO have two teams working on die shrinks. They don't start 14nm after getting 22nm out; they started 14nm two years ago to be ready in 2014. Intel has built many fabs and has the ability to do simultaneous development. It just isn't as easy as simply shrinking the die. There are all sorts of design issues that have to be addressed with every die shrink, and materials science problems often arise, among others. Intel, in fact, according to them, is already researching nodes beyond 14nm in its labs for 2016 and beyond. My guess is that they are pretty much finished with the 14nm process and are producing experimental DRAM or even NAND modules, among other things, to ensure the process is working correctly before they put Broadwell into full production in 2013. I'm not so sure GF is anywhere near that close, since just a year ago they decided to go gate-last and only recently (as in two years ago) decided to go FinFET with the 14nm shrink (even if it's planar). They have serious obstacles to overcome, in my opinion. I still think 2014 is very optimistic, but I will wait and see.[/citation]

It doesn't matter that they already have two teams; they could have three or four. It doesn't matter that they start one node's work before the previous node is out; they could start it even earlier and run more teams in parallel.

[citation][nom]TeraMedia[/nom]@blazorthon: The reason Intel "waits" is because at each step, they identify a bunch of new problems that they need to address to make that step a success. Those same problems usually also apply to the subsequent steps too (plus the additional problems that those yet-smaller steps face). So if you have team 22 working on 22nm R&D, much of the team might get stuck in a holding pattern while a small research group including one or two members solve some tricky problem. But if you also have team 14 working in parallel, then you have all of teams 22 and 14 waiting for that same small research group. Repeat this throughout the R&D cycle for 22, and you end up wasting a lot of team 14's time... and, they still have to face the issues that they weren't experiencing at 22 nm but crop up at the smaller scale. You've probably heard the saying, "Nine women can't deliver a baby in one month."[/citation]

That doesn't matter. Some time may be wasted, but that's to be expected; it would still improve time frames. If Intel really wanted to, they could put out a die shrink every year. Nine women can't deliver a baby in one month, but if they got pregnant one month after another, then about nine months after the first one got pregnant, babies would start arriving at roughly one-month intervals (there might be a two-month stretch without a child, or two babies in the same month, but it would probably average out to about one per month). Your example doesn't disprove what I'm saying.
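The staggered-start argument above is really just pipeline arithmetic. A minimal sketch, with purely illustrative numbers (the per-node R&D time and start interval are assumptions, not Intel's actual figures): if each node takes four years of R&D but a new team starts one year after the previous one, finished nodes ship yearly after the initial latency.

```python
# Illustrative pipeline-throughput sketch for the staggered-teams argument.
# Assumed numbers only; real node R&D timelines vary.
YEARS_PER_NODE = 4   # assumed R&D duration for one process node
START_INTERVAL = 1   # assumed gap between successive team start dates
NUM_NODES = 6

# Year each node ships, relative to the first team's start date.
ship_dates = [n * START_INTERVAL + YEARS_PER_NODE for n in range(NUM_NODES)]
print(ship_dates)  # first node ships after 4 years

# Gap between consecutive shipments: throughput equals the start interval,
# not the full R&D duration.
gaps = [b - a for a, b in zip(ship_dates, ship_dates[1:])]
print(gaps)  # one node per year once the pipeline is full
```

This only shows steady-state throughput; it deliberately ignores the counter-argument in the thread that the teams share bottleneck problems and so can't actually run independently.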
 
Guest
Nothing you said makes much sense, Blazor, because babies are a nine-to-ten-month process (just about every time and almost never longer), but R&D for making microprocessors has no set timetable (because research is the time-limiting factor and development hits unforeseen snags). Thus, your example is irrelevant and a non sequitur. It is highly unlikely that Intel could make a new node every year for all of their products. There is simply too much that goes into it. Unforeseen issues can arise (as they have so many times for previous die shrinks). Many new technologies or possible materials will likely go into each die shrink. Getting them out every two years will be difficult, as has been seen with the 22nm process. Intel is finally at a full product portfolio for Ivy Bridge... nearly three years out from the previous die shrink. 14nm could very well be just as difficult, and we know Intel has been working on theirs for much longer than GF. GF announcing 14nm for 2014 just sounds really fishy to me. I wonder if that is more marketing than truth (I'm sure investors liked the news, as will suppliers, but all may dump them like a rock in a couple of years with no delivery).
 
[citation][nom]TheTruthIz[/nom]Nothing you said makes much sense, Blazor, because babies are a nine-to-ten-month process (just about every time and almost never longer), but R&D for making microprocessors has no set timetable (because research is the time-limiting factor and development hits unforeseen snags). Thus, your example is irrelevant and a non sequitur. It is highly unlikely that Intel could make a new node every year for all of their products. There is simply too much that goes into it. Unforeseen issues can arise (as they have so many times for previous die shrinks). Many new technologies or possible materials will likely go into each die shrink. Getting them out every two years will be difficult, as has been seen with the 22nm process. Intel is finally at a full product portfolio for Ivy Bridge... nearly three years out from the previous die shrink. 14nm could very well be just as difficult, and we know Intel has been working on theirs for much longer than GF. GF announcing 14nm for 2014 just sounds really fishy to me. I wonder if that is more marketing than truth (I'm sure investors liked the news, as will suppliers, but all may dump them like a rock in a couple of years with no delivery).[/citation]

There's a difference between me not making sense and you not understanding. If Intel started each process's work another year or two earlier than they do now and threw in a third or fourth team (maybe more) working on a third or fourth process simultaneously, they'd have more time to work on each process (more time to iron out any issues) and more processes being worked on at the same time.

I'm not arguing that GF will succeed in their goals, only that it is possible. They may have been working on these nodes with Samsung for a few years already, and if that's the case, they could very well succeed.
 
Guest
If they had 100 teams, could they die shrink 100 times per year? Intel IS working on R&D much further out than 14nm. They don't just magically will this stuff into existence. Some research problems apply to all future nodes, which means they'll halt everything until they get figured out, like overcoming quantum tunneling, for instance. Other solutions will probably only enable a few nodes, like tri-gate on silicon. They're targeting multiple design nodes well in advance (they're already talking about 5nm, which is eight years out according to them), and likely have very specific sets of problems they're looking at for each node. I imagine it's as parallelized as humanly possible. The stuff coming out today is the culmination of years of directed research. Even the fab for the next process has to be built while the current one is ramping; they can't just fabricate new tech on old equipment. Just like in processors themselves, you can only look ahead so many instructions before you're just wasting time.

I trust Intel has a near-optimal strategy for the research logistics of improving their technology. If there were an obvious improvement to their research process, I'm sure they'd have thought of it by now.
 
[citation][nom]89mm[/nom]If they had 100 teams, could they die shrink 100 times per year? Intel IS working on R&D much further out than 14nm. They don't just magically will this stuff into existence. Some research problems apply to all future nodes, which means they'll halt everything until they get figured out, like overcoming quantum tunneling, for instance. Other solutions will probably only enable a few nodes, like tri-gate on silicon. They're targeting multiple design nodes well in advance (they're already talking about 5nm, which is eight years out according to them), and likely have very specific sets of problems they're looking at for each node. I imagine it's as parallelized as humanly possible. The stuff coming out today is the culmination of years of directed research. Even the fab for the next process has to be built while the current one is ramping; they can't just fabricate new tech on old equipment. Just like in processors themselves, you can only look ahead so many instructions before you're just wasting time. I trust Intel has a near-optimal strategy for the research logistics of improving their technology. If there were an obvious improvement to their research process, I'm sure they'd have thought of it by now.[/citation]

You're making assumptions and passing them off as facts at that point. Besides, since Intel makes money this way, the only way they'd spend more on R&D is if it meant more money coming in. Having die shrinks more often probably wouldn't do that, because they wouldn't get to capitalize on each node, so it wouldn't make sense. We can both make assumptions all we want, but that doesn't prove either of us right or wrong. Your reasoning, unfortunately, doesn't make as much sense, because you seem to be confusing what can be done for the consumer with what will be done by the business. They make more money this way; they almost certainly don't do it out of an inability to do better.

Regardless, I didn't say that they wouldn't need new fabs, or that they aren't already working on multiple nodes at once, or anything like that. I'm saying that they could do better if they wanted to, and nothing anyone has said has shown otherwise (quite the opposite, in fact). You're acting as if I've said a wide variety of things that I didn't, and then trying to refute them as ignorant crap; but those are your words that you're trying to make into mine, not really mine.
 

hector2
[citation][nom]blazorthon[/nom]Global Foundries seems to have been improving lately.[/citation]
Suddenly they're "improving" after showing some PowerPoint foils? I'm about to put out my own roadmap. I have some extra space in my garage where I'm planning to produce 10nm wafers at the end of 2013. I'm signing up investors now and will be taking orders next month. Better hurry and sign up before my planned capacity is all sold out!
 
[citation][nom]hector2[/nom]Suddenly they're "improving" after showing some PowerPoint foils? I'm about to put out my own roadmap. I have some extra space in my garage where I'm planning to produce 10nm wafers at the end of 2013. I'm signing up investors now and will be taking orders next month. Better hurry and sign up before my planned capacity is all sold out![/citation]

I didn't make that statement based on this article. They've been working with Samsung and others lately, and the results seem to be obvious: they appear to have greatly improved yields on existing processes and have improved in other ways too.
 

mamailo
[citation][nom]jupiter optimus maximus[/nom]Amazing that we were at 65nm 4 years ago and now we are going into the 14nm territory in two years. So how close to quantum computing are we now?[/citation]

Physicists have observed quantum behaviour in macroscopic objects big enough to be seen with the naked eye. Quantum mechanics is not just about small particles; it applies to things of all sizes.
The largest object whose quantum behaviour has been properly quantified is a 40nm glass sphere interacting with a laser.

During 14nm transistor node research it was found that silicon cannot form stable structures below 10 nanometres in size: an engineering barrier.
Electron carrier dynamics effects can be observed down to 5nm: probably a materials barrier.
And at 1nm, quantum tunneling makes the transistor effect unreliable: a technology limit.

[citation][nom]ojas[/nom]Yeah i guess that's why Intel waits at least two years before the next shrink...[/citation]

Intel will not do another die shrink because of a business decision. AMD is no longer in the race (according to its CEO), and ARM competition is about energy efficiency, not raw compute power. They will milk the node until the market puts some pressure on them again.
 