Update: Intel testing 32nm, 22nm 2015?

Not really. An atom is measured in picometres. Silicon has an atomic radius of around 110 pm, which means that a single silicon atom is about 0.2 nm in diameter. Subatomic particles like protons, neutrons, and electrons have diameters measured in fermis (fm), which are about a million times smaller than a nm. So what you would see with a 22 nm process is transistors around 100 atoms wide, give or take, which is still outside the range where Mr. Heisenberg starts showing his face.
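The atom-counting above is easy to sanity-check. A minimal order-of-magnitude sketch, assuming a silicon atomic diameter of roughly 0.2 nm (about twice the ~0.11 nm covalent radius); the exact count depends on lattice spacing, so treat it as ballpark:

```python
# Back-of-envelope: how many silicon atoms fit across a 22 nm feature?
# The atomic diameter here is an assumption (~2x silicon's covalent radius);
# real spacing in the crystal lattice differs slightly.

SI_ATOM_DIAMETER_NM = 0.2   # assumed effective diameter of a silicon atom
FEATURE_NM = 22.0           # the 22 nm process node feature size

atoms_across = FEATURE_NM / SI_ATOM_DIAMETER_NM
print(f"A {FEATURE_NM:.0f} nm feature is roughly {atoms_across:.0f} silicon atoms wide")
```

So "about 100 atoms, give or take" is the right order of magnitude.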

100 atoms? Damn! Given the fact that I'm still not fully convinced that anyone on Earth fully understands what an atom is, that is a pretty thin slice of stuff to make your transistors out of. Hell, I still don't get the "electron is a particle and a wave" thing, so I think electron tunnelling is well beyond my grasp, and the only thing I understand about Schrödinger's equations is that his cat is half-dead. Still, if I have to entrust my life's work on my hard drive to something that's not really at any particular point at any particular time, I'm gonna resort to papyrus!

Making processors out of tofu has got to be the answer.

Yeah, but then you get into the myriad problems of nigari lithography and then you'll have the Silken Fanboys spreading FUD about the Firm... :twisted:
 

The biggest problem they're going to have with going to 22nm is with the lithography. They're getting really close to using X-ray lasers to do the lithography (10 nm to 0.1 nm wavelength). So, while we're not at the limit just yet, we're starting to get close. This is probably the reason they're starting to look at tri-gate transistors, where the gate wraps the channel on three sides, instead of the standard planar gate, to keep scaling computing power.
 
Trigate. Now that makes a whole lot more sense. My only concern is that it's a concept that's been around for decades, yet nobody seems to have made it work in a production environment. But at least a tri-gate doesn't rely on shrinking dimensions forever!
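For what it's worth, the appeal of the tri-gate can be put in rough numbers: because the gate wraps three sides of a fin, the effective gate width (and, to first order, the drive current) comes from the fin's top plus both sidewalls rather than just its planar footprint. A minimal sketch, with illustrative fin dimensions that are assumptions, not any real process's spec:

```python
# Why a tri-gate/fin buys drive current without widening the device footprint:
# effective gate width ~ W_fin + 2*H_fin (top + two sidewalls),
# versus just W for a planar transistor of the same footprint.
# All dimensions below are illustrative assumptions.

def effective_width_planar(w_nm: float) -> float:
    """Planar device: gate only covers the top surface."""
    return w_nm

def effective_width_trigate(w_fin_nm: float, h_fin_nm: float) -> float:
    """Tri-gate: gate covers the fin's top plus both sidewalls."""
    return w_fin_nm + 2 * h_fin_nm

footprint = 10.0   # nm of lateral silicon the device occupies (assumed)
fin_height = 25.0  # nm fin height (assumed)

planar = effective_width_planar(footprint)
trigate = effective_width_trigate(footprint, fin_height)
print(f"planar: {planar} nm, tri-gate: {trigate} nm "
      f"-> {trigate / planar:.0f}x the effective gate width")
```

In other words, the fin trades height for width, which is exactly why it doesn't depend on ever-shrinking lateral dimensions.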
 
They're getting really close to using X-ray lasers to do the lithography (10 nm to 0.1 nm wavelength). So, while we're not at the limit just yet, we're starting to get close.
Getting really close? Just to put this in perspective, the wavelength of yellow light is around 570 nm. I don't know what they're using for lithography now, but at a feature size of 65 nm it's clear the industry moved past visible light long ago.

We're already manufacturing objects which can never be "seen" with the human eye. Not merely because they're small, but because they are so small that visible light is too crude and clumsy an instrument to probe them with.

Take a moment to exhale ....

-john, the redundant legacy dinosaur mulling nostalgically about stone ages past
 

They've been using UV lasers for a while, but as they push toward X-ray wavelengths, they require ever more exotic hardware to make the lasers work. And unless there is some major breakthrough in particle physics, X-ray lasers will probably be the last ones used for lithography, since the bottom end of the X-ray wavelength scale is roughly the size of an atomic diameter (about 100 pm).
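The standard back-of-envelope for lithography resolution is the Rayleigh criterion, CD = k1 · λ / NA, which is also why sub-wavelength printing works at all: aggressive k1 (via resolution-enhancement tricks) and high numerical aperture let 193 nm light print features well below 193 nm. A rough sketch; the k1 and NA values below are typical ballpark figures, not any specific tool's spec:

```python
# Rayleigh criterion for lithography resolution: CD = k1 * wavelength / NA.
# k1 and NA values are illustrative assumptions (ballpark published figures).

def min_feature_nm(wavelength_nm: float, numerical_aperture: float,
                   k1: float) -> float:
    """Smallest printable feature under the Rayleigh scaling law."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm ArF "dry" scanner with an aggressive k1:
arf_dry = min_feature_nm(wavelength_nm=193, numerical_aperture=0.93, k1=0.31)

# 13.5 nm EUV with a relaxed k1:
euv = min_feature_nm(wavelength_nm=13.5, numerical_aperture=0.33, k1=0.5)

print(f"ArF dry: ~{arf_dry:.0f} nm, EUV: ~{euv:.0f} nm")
```

So 193 nm ArF gets you into 65 nm territory without X-rays, which is why EUV at 13.5 nm is the next jump rather than visible or near-UV scaling.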
 
http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=194400307
EUV making slow progress, says Intel

So how does that tie in, considering that Intel roadmaps have previously said EUV with 32nm?

Historically, Intel has gotten more out of each generation than it had originally planned, so it stands to reason that they could hit 32nm without EUV.
 
IBM and AMD are at the same spot technologically, and even together they don't invest as much in research as Intel does. I don't expect them to catch up.

I don't expect them to fall further behind either. Being one generation behind Intel has its advantages, since Intel does the gruntwork and pushes the silicon industry to develop new machines and processes. Everyone else can just buy the machines and start producing with much less expense. It's not quite THAT easy, but it's not as difficult as being the front runner.

For Intel to be the front runner for so long is a great testament to their engineers and scientists. Those guys know silicon. If Intel is predicting more than 2-3 years for a process shrink generation, then they've spotted some problems that no-one else even knows about.
 
Just read this: 32nm is on track, but 22nm might not be here until 2015. I just wonder if that is where AMD will catch up on the process-size front. The reason I ask is the cooperation they have with IBM, and IBM is pretty innovative. I can't find anything on how IBM is doing 22nm fabrication, other than that they are working on it. Any thoughts on this, people?

wes

http://www.digitimes.com/bits_chips/a20061206PD207.html

AMD can't compete with Intel on the fabrication-technology front due to capital constraints. Although IBM is innovative, AMD just licenses IBM's technology rather than co-developing it.

What on earth gave you that idea, dude? 2015 is a long time; god, I could catch up to 22nm by that time. I bet you were a sceptic when the AMD Athlon kicked the P4's ass!
 
Well, if you could catch up to 22nm, then I'll be waiting on January 1st, 2015 to see what you've got to offer, since it doesn't even look like we'll still be using silicon at that stage.

Your random knock on qcmadness could be the worst comment in this thread. More and more problems arise as sizes get smaller and smaller, so at the very edge it takes longer to develop, and it will reach a point where the cost of a die shrink outweighs the performance gained. At that point, I'm sure Intel and AMD will have found other materials to use.

One thing to consider is diamond: there has been previous talk of diamond being used, theoretically allowing speeds of over 10 GHz and higher. Not sure how practical this would be, but I know Intel has at least looked into it.
 


I have serious doubts as to your ability to make toast by 2015. Much less an operational 22nm node logic device.
 

Diamond is a very cool material to build electronics out of. One of the best things is that it is the best thermal conductor known to man, about 5 times better than copper. It is radiation-hardened by the very nature of the material, and it can work at 1000 °C without too much trouble. I don't know what its electron or hole mobility is, so I can't say whether it could be clocked to reach 10 GHz.

Diamond is just a cool material. 😀
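A quick sanity check on the "about 5x copper" figure, using room-temperature ballpark thermal conductivities (real values vary with purity and isotopic composition):

```python
# Sanity check: diamond vs. copper thermal conductivity.
# Values are room-temperature ballpark figures in W/(m*K), not precise data.

THERMAL_CONDUCTIVITY_W_PER_M_K = {
    "copper": 400,    # ~385-400 for pure copper
    "silicon": 150,   # ~150, for comparison with today's substrate
    "diamond": 2000,  # ~1800-2200 for high-purity diamond
}

ratio = (THERMAL_CONDUCTIVITY_W_PER_M_K["diamond"]
         / THERMAL_CONDUCTIVITY_W_PER_M_K["copper"])
print(f"Diamond conducts heat ~{ratio:.0f}x better than copper")
```

Which also means it would sink heat more than 10x better than the silicon it would replace.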
 
A couple of questions.

UV lasers? The shortest UV band should be UV-C, and that's 100 to 280 nm. According to my logic, that should be unable to even print a 90nm process, a bit like soldering a motherboard with an oxyacetylene torch. What am I missing here?

I do know a bit about diamonds. I've certainly bought enough of them and then watched them walk out the door with the rest of my stuff. And what I do know is that for a crap G colour VVS2 rock not even reaching a carat you're gonna cough up a grand wholesale. Industrial diamonds go up exponentially from this. If you're gonna start computing with these damn things, is Bill Gates gonna be the only guy who can afford it?
 
What about synthetic diamonds?

I'd think that HPHT synthetics would be out, but a CVD diamond, maybe boron- and deuterium-doped, could be functional. However, you're still looking at several hundred or even thousands of dollars a carat.

I keep telling you all but you don't listen: Dilithium Crystals! 8)
 
At this point in time, I believe that IBM has run transistors at the smallest node: some time late in '05, they were able to get a functioning transistor at 29 nm.
Does this mean that AMD has an advantage on smaller nodes? No.
The main reason why AMD has been slower to the gate (bad pun) on size reduction, has more to do with production than R&D.
AMD has limited production capability, but also limited demand.
As a result, they take their time with testing. If they move to a node too fast, they will suffer more from lower yields. Then, once the process is mature, yields will increase to around 70%. What would they do with the extra chips then?
For AMD, node shrinks are a timing thing.
AMD is now looking at having three fully functional fabs up by around 2010. At that point, they may or may not be interested in being first to a node. I rather believe that they still won't be in any rush. It may seem like a big deal to be behind, but it's probably better to take the time and save the wafers.
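The yield-vs-timing trade-off above can be sketched with the classic Poisson defect model, yield = e^(−D0·A): the same die area yields far better once defect density drops as a process matures. The defect densities below are illustrative assumptions, not anyone's real numbers:

```python
# Poisson defect yield model: yield = exp(-D0 * A),
# where D0 is defect density (defects/cm^2) and A is die area (cm^2).
# The D0 values are illustrative assumptions for an immature vs. mature process.

import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to be defect-free."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

die_area = 1.0  # cm^2, a largish CPU die (assumed)

immature = poisson_yield(1.20, die_area)  # early in the node's life
mature = poisson_yield(0.35, die_area)    # after the process matures

print(f"immature process: {immature:.0%} yield, mature process: {mature:.0%} yield")
```

Under those assumed numbers, waiting for maturity more than doubles the good dies per wafer, which is exactly why a capacity-constrained fab might not race to a new node.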
 
AMD and IBM have learnt much from trying to rush out 65nm parts.

It is unlikely they will make these mistakes on any future die shrinks. (Different mistakes more likely).

Best of luck to both parties, and anyone else who tries.

http://moneycentral.msn.com/stock_quote?Symbol=AMD
http://moneycentral.msn.com/stock_quote?Symbol=INTC
http://moneycentral.msn.com/stock_quote?Symbol=SUNW
http://moneycentral.msn.com/stock_quote?Symbol=NVDA

Sometimes these guys have tech news, or news that most techs can read from a tech point of view, a good week or two before IT websites.

Other times they can be 6 months behind.

It 'pays' to watch the news associated with share prices in this subject IMHO.

8)