Intel Currently Testing 14 nm Tri-Gate Transistors


loomis86

Distinguished
Dec 5, 2009
402
0
18,780
[citation][nom]theuniquegamer[/nom]45nm-32nm-22nm-14nm-10nm....???? How small can they build a transistor? They can't go below 2nm in current semiconductor materials; below 2nm the electrons scatter thermally. So they should consider materials like molybdenite, in which transistors could be built down to a few angstroms. (1 angstrom = 0.1nm)[/citation]

10nm is expected to be the limit. Anything smaller will probably require nanotubes and/or quantum computers.
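As an aside, the node sequence quoted above follows the classic ~0.7x linear shrink per generation, which roughly halves the area per transistor; a quick illustrative sketch (the numbers are just the nodes named in the quote, nothing Intel-specific):

```python
# Each full process node has historically been a ~0.7x linear shrink,
# which roughly halves the area per transistor (0.7^2 ~ 0.5).
nodes = [45, 32, 22, 14, 10]  # nm, the sequence quoted above

def shrinks(nodes):
    # Return (linear factor, area factor) for each node transition.
    return [(b / a, (b / a) ** 2) for a, b in zip(nodes, nodes[1:])]

for (a, b), (lin, area) in zip(zip(nodes, nodes[1:]), shrinks(nodes)):
    print(f"{a} nm -> {b} nm: linear x{lin:.2f}, area x{area:.2f}")
```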
 

loomis86

Distinguished
Dec 5, 2009
402
0
18,780
Not without carbon nanotubes or nanowires. It will be the end of CMOS on silicon. Maybe they will get to 8nm with silicon. Maybe. The point is, we are getting close to the day when computers will have to change drastically if they are going to continue to shrink.
 

memadmax

Distinguished
Mar 25, 2011
2,492
0
19,960
Now that we are getting small enough to start hitting physical atomic barriers, Intel should start "layering" things inside the CPU, like a cake: one layer for all the supporting circuitry, another layer for 20 cores, another for the GPU, etc.

I had this kind of idea back when the 386s were around, but was afraid to patent it =D
 
G

Guest

Guest
memadmax,

CMOS has been fabricated layer by layer since its inception. In fact, MOS stands for metal oxide semiconductor, which are the gate, insulating layer, and channel of the transistor, respectively.
 

willard

Distinguished
Nov 12, 2010
2,346
0
19,960
[citation][nom]de5_roy[/nom]if you read bliemer's words... he never directly related '14 nm tri-gate transistors' to 'lab testing'. he emphasized a way, a device, not a 14 nm 'product'. he didn't specify anything else. even if intel was working on 14 nm cpus, they won't reveal much. my comment about ivb: intel has been consistent with their cpus' performance so far, but i am not believing anything until i see some thorough reviews.[/citation]
And if you read the article, you'll see nobody ever claims these transistors are in running processors, or mentions performance gains. The article is simply about Intel already working on a 14nm process, which is a fact, not speculation.
 

madooo12

Distinguished
Dec 6, 2011
367
0
18,780
[citation][nom]Zanny[/nom]What? Monopolies are when there is a market leader that uses their market dominance to drive competition out of the market. Right now, Intel is anything BUT using its position to force adoption. ARM is making huge inroads into netbooks in the next few years; expect them on laptops as a commonplace occurrence when Windows 8 comes out. The fact Intel is dominating the desktop mainstream is because we sold our souls to them in the 90s. We said, hey x86, we will design EVERYTHING for you. And we did. So today, to build a processor that can run that instruction set, you must license it from Intel. That is the real burden of modern computing: it is not that Intel dominates the desktop / laptop market, it is that we have written all of our software for x86 when it isn't an open standard, so you can't expect much competition in the hardware space when Intel controls who can use it. There is the flip side that Intel now has to license AMD64 instructions for 64 bit, but that is what is keeping AMD in the game. Even if they stop making processors, Intel is stuck licensing the AMD64 set from AMD because they used it at the 64 bit transition period and now it is too late to go back on that. But Intel is not a monopoly, and it is not a trust. You need to start worrying if Intel starts using its mass market control of mid to high range processors to force certain programs to stop running on a case by case basis.[/citation]

Great. I think we should fix that problem by using an open-standard architecture, but the problem is that there are no open architectures, because making one is costly, really costly.

If there actually were one, I think Linux would support it and I would personally use it, but there isn't, so we're waiting for a couple of genius engineers to make a new open architecture.
 

gplnpsb

Distinguished
Dec 7, 2011
3
0
18,510
memadmax & guydudebroman

Indeed, modern CPUs have about ten metal layers in addition to the base silicon layer. See the second page of this Intel document: ftp://download.intel.com/pressroom/kits/32nm/westmere/News_Fact_Sheet.pdf. Memadmax, I believe the closest concept to what you're thinking of is die stacking, where multiple dies are stacked into a single package. This is done for NAND dies in SSDs, but I believe heat dissipation problems have prevented it from being used in higher-power devices so far.
 

smartidiot89

Distinguished
Dec 7, 2011
1
0
18,510
[citation][nom]de5_roy[/nom]if you read bliemer's words... he never directly related '14 nm tri-gate transistors' to 'lab testing'. he emphasized a way, a device, not a 14 nm 'product'. he didn't specify anything else. even if intel was working on 14 nm cpus, they won't reveal much. my comment about ivb: intel has been consistent with their cpus' performance so far, but i am not believing anything until i see some thorough reviews.[/citation]
Sounds like there needs to be some clearing up here.

Jacob Hugosson from NordicHardware here; I held the interview with Patrick Bliemer during DreamHack Winter. He did say explicitly that Tri-Gate opened the way for 14nm, so the process will be using Intel's new Tri-Gate technology. If you don't believe the Managing Director of Intel Northern Europe, I am not sure who to trust ;) Considering how the semiconductor industry works and how long these things take to research and develop, it shouldn't come as a surprise that Intel is already playing with 14nm.

Cheers
 

kronos_cornelius

Distinguished
Nov 4, 2009
365
1
18,780
Intel is going to have to speed up that nanometer pipeline to keep up with the better designs of the Fusion and Tegra chips. I read some interesting news about supercomputer labs planning to use Fusion chips because of their low-latency GPU access to system memory. Meanwhile, you have the Tegra 3 showing off its great graphics on the Asus Transformer 2. Intel is like Chrysler talking about SUVs, while AMD and Nvidia are like Toyota, investing in the hybrid (GPU) and battery (ARM) cars of the future. Don't forget that other companies have access to advanced fabrication techniques (IBM), so Intel needs to be careful not to end up like Apple (just a big marketing machine).
 

loomis86

Distinguished
Dec 5, 2009
402
0
18,780
[citation][nom]guydudebroman[/nom]memadmax, CMOS has been fabricated layer by layer since its inception. In fact, MOS stands for metal oxide semiconductor, which are the gate, insulating layer, and channel of the transistor, respectively.[/citation]

Obviously memadmax was talking about multiple layers of circuits, ya dolt. That hasn't been done until recently, when IBM announced the memory cube.
 

loomis86

Distinguished
Dec 5, 2009
402
0
18,780
[citation][nom]gplnpsb[/nom]memadmax & guydudebroman Indeed, modern CPUs have about ten metal layers in addition to the base silicon layer. See the second page of this Intel document: ftp://download.intel.com/pressroo [...] Sheet.pdf. Memadmax, I believe the closest concept to what you're thinking of is die stacking, where multiple dies are stacked into a single package. This is done for NAND dies in SSDs, but I believe heat dissipation problems have prevented it from being used in higher-power devices so far.[/citation]

I've been saying for years that they need to start utilizing the Peltier effect in CPUs. If they put a Peltier layer between each CPU "stack", they might be able to solve the cooling problem.
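For the curious, the standard thermoelectric heat-balance equations show the trade-off this idea runs into: the Peltier layer's own input power becomes extra heat the hot side must reject. A back-of-the-envelope sketch with made-up module parameters (the Seebeck coefficient, resistance, and thermal conductance below are illustrative assumptions, not from any real device):

```python
# Standard thermoelectric module heat balance:
#   Q_cold = S*I*T_c - 0.5*I^2*R - K*dT   (heat pumped from the cold side)
#   P_in   = S*I*dT + I^2*R               (electrical power consumed)
# Everything pumped PLUS P_in must be rejected on the hot side.
def peltier(current, t_cold=323.0, d_t=20.0, seebeck=0.05, res=2.0, cond=0.5):
    """Return (heat pumped in W, electrical input in W) for one module."""
    q_cold = seebeck * current * t_cold - 0.5 * current**2 * res - cond * d_t
    p_in = seebeck * current * d_t + current**2 * res
    return q_cold, p_in

q, p = peltier(2.0)
print(f"heat pumped: {q:.1f} W, extra heat to reject: {p:.1f} W, COP = {q/p:.2f}")
```

Even with these optimistic numbers, every watt moved off the buried die adds more than half a watt of new heat to the package, which is why Peltier layers inside a stack are a hard sell.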
 
Man, and I thought it was amazing to go from my 65nm CPU to my new 32nm one, and how quickly that happened (5 years). I cannot wait for them to hit the practical limit of 8-10nm; then they will finally be forced to rethink the x86 architecture and innovate on that end.

As for those who are looking forward to layered/stacked processors: don't hold your breath. It is very difficult to line up two boards precisely enough to do that without error, and then it makes the thing a royal pain to test. Not to mention the heat dissipation issues that would result! I'm not saying we won't see it someday (because eventually they will be forced to), but it really makes things much more difficult and expensive than they need to be.
 

sykozis

Distinguished
Dec 17, 2008
1,759
5
19,865
[citation][nom]kronos_cornelius[/nom]Intel is going to have to speed up that nanometer pipeline to keep up with the better designs of the Fusion and Tegra chips. I read some interesting news about supercomputer labs planning to use Fusion chips because of their low-latency GPU access to system memory. Meanwhile, you have the Tegra 3 showing off its great graphics on the Asus Transformer 2. Intel is like Chrysler talking about SUVs, while AMD and Nvidia are like Toyota, investing in the hybrid (GPU) and battery (ARM) cars of the future. Don't forget that other companies have access to advanced fabrication techniques (IBM), so Intel needs to be careful not to end up like Apple (just a big marketing machine).[/citation]

Intel is more like General Motors... Chrysler hasn't actually done anything to advance the automotive industry since the '60s. Intel is constantly trying to advance the tech industry. Also, Intel could never end up like Apple: Intel actually does research and development of new technologies, while Apple just files for "proof of concept" patents and sues everyone while selling devices made in sweatshops.
 

maestintaolius

Distinguished
Jul 16, 2009
719
0
18,980
[citation][nom]loomis86[/nom]10nm is expected to be the limit. smaller than that will probably require nanotubes and/or quantum computers.[/citation]
It's hard to say what the actual limit is, because it keeps changing as materials science evolves. I've read over the years of 'quantum' limits to semiconductors: once you get small enough, electron tunneling becomes significant, and at a certain size the quantum mechanical properties that make a semiconductor a semiconductor no longer apply. I remember white papers a decade or so ago saying 30-40 nm was the limit; then it became 15-20; now it's somewhere around 8-10, as breakthroughs keep changing the theorized limit. It is amazing to think, though, that the wavelength of visible light is huge compared to the size of current transistors.
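The tunneling point above can be made concrete with the crudest rectangular-barrier estimate. This is an order-of-magnitude sketch only: the 3.1 eV barrier (roughly the Si/SiO2 conduction-band offset) and the bare electron mass are illustrative assumptions, not a real device model:

```python
import math

# Transmission through a rectangular barrier of height phi and width d:
#   T ~ exp(-2 * kappa * d),  kappa = sqrt(2 * m * phi) / hbar
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # bare electron mass, kg (illustrative)
EV = 1.602176634e-19     # joules per electron-volt

def tunneling(d_nm, barrier_ev=3.1):
    """Rough transmission probability through a d_nm-thick barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * d_nm * 1e-9)

for d in (3.0, 2.0, 1.0):
    print(f"{d:.0f} nm barrier: T ~ {tunneling(d):.1e}")
```

The point is the exponential: thinning the barrier from 2 nm to 1 nm raises the leakage probability by many orders of magnitude, which is why tunneling goes from negligible to dominant so abruptly as dimensions shrink.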
 

loomis86

Distinguished
Dec 5, 2009
402
0
18,780
True. They are guessing at what the limit will be. The limit really does exist for CMOS on silicon, and the closer we get to it, the more accurate the guesses at where it lies become.
 

tecmo34

Administrator
Moderator
[citation][nom]smartidiot89[/nom]Sounds like there needs to be some clearing up here. Jacob Hugosson from NordicHardware here; I held the interview with Patrick Bliemer during DreamHack Winter. He did say explicitly that Tri-Gate opened the way for 14nm, so the process will be using Intel's new Tri-Gate technology. If you don't believe the Managing Director of Intel Northern Europe, I am not sure who to trust. Considering how the semiconductor industry works and how long these things take to research and develop, it shouldn't come as a surprise that Intel is already playing with 14nm. Cheers[/citation]Thanks for the additional feedback and conducting the interview with Patrick to start with...

Best Regards,

Doug
 
G

Guest

Guest
I love how, when an Intel employee opens his mouth, the entire media marches in lockstep to spin it into a great victory even when it is not.

That bloke did not say that they've taped out 14nm SRAM test wafers. He only said that they think they might sort of have a way of doing it in their heads. There are no 14nm chips. There are no 14nm processors being tested. There are only engineers who think they've got an idea of how to accomplish their next goal. Had they taped out an SRAM wafer on 14nm, they would've made an official announcement, just like they did for the last 20 nodes.

Contrast this with the reaction AMD gets when they say or do anything: it's always doom and gloom.
 

cschuele

Distinguished
Nov 30, 2009
67
0
18,640
[citation][nom]GreaseMonkey_62[/nom]I've always been an AMD fan, but with Intel's tri-gate transistor, ever shrinking architecture and Bulldozer fail, I'm starting to lean towards Intel for my next build.[/citation]
Due to physics, you can't get past 1-2nm before the gates are so small that electrons leak out and the transistors don't function correctly. What started 40 years ago is now almost at the limits of physics. Soon there will be no reason to upgrade other than added cores and new architectures. Does anyone know the gate sizes on the first processors?
 

alchemy69

Distinguished
Jul 4, 2008
211
9
18,685
I'd be wary when talking about lower size limits for transistors. I remember well reading, in some prestigious technical magazines some 12 years ago, that 0.1μm was as low as photolithography could go.
 

Geef

Distinguished
MORE INFO:
Me: Hey dude, check it out, it's your latest 22nm i7 I'm using.
Intel Engineer: Hmm... my grandfather was on that design team. He died before I was born, but he passed his knowledge on to my dad... My dad designed this 14nm chip as his final project at Intel College; maybe you'll see that in a few years. Meanwhile, check out this cool mobile 8nm chip, runs circles around your i7. Your children will be fortunate enough to use this, but they will call all processors DOTS, since that's all anyone can see if they happen to be looking at one of the chips made from 8nm...
 