A First Look at Intel's 14nm Fab 42 Manufacturing Facility

[citation][nom]neoverdugo[/nom]The only 2 things Intel needs to do are: 1) drop its price tag; 2) retire x86 and create a new architecture. Anything else?[/citation]

1) If you think that Intel is overpriced then you must be a moron. My first Pentium III 1GHz Coppermine processor was ~$300 in 2000, and it did practically nothing, and did it very slowly (but it was still a great chip for its day). Fast forward to 2011 and I picked up an i7 2600 for $250; it does a ton of work with very little heat and can run 8 threads at a time, each of which is worlds faster than the original Pentium III I had. Even AMD cannot compete on performance per dollar except at the lowest end of the market. Sure, I would love prices to come down as much as anyone, but you used to have to upgrade your system every year or two to keep up with production work and gaming. Now you can game fairly well on a 4-5 year old C2Q with a modern GPU.

2) x86 is not efficient for small loads (it was not designed to be), but it is far more efficient than ARM on large loads, which is why it will not go anywhere any time soon. Once we hit the size barrier of ~8-10nm we will start seeing chip-design changes: first vastly more efficient instruction sets, then 3D or stacked chip designs, then a move to ternary or some other form of computing where more information can be handled per digit. So long as computing stays binary, x86 will live a long and healthy life. It is known, it is fairly secure, it scales fairly well from medium to heavy loads, and it is not altogether terrible at light loads. Moving to a new architecture means moving away from a secure and well-known base and starting all over, which nobody really wants to do (but they will eventually, when they have to). It also means that all of your software is gone, which may not be a huge deal for home users, where it is cheap and easy to upgrade, but it is a major pain for the corporations who spend the real money in this market. As things move to cloud- and web-based applications this will no longer be an issue, or at least it will become a small enough issue that companies will not mind switching over.
 

nottheking

Distinguished
Jan 5, 2006
Wait, 14nm? That's a half-node; I wasn't aware Intel intended to skip 16nm for Broadwell and Skylake. Interesting news that Intel is done using only full-node steps for its CPUs.
[citation][nom]eddieroolz[/nom]Does Intel build a new fab for every process node? It certainly seems that way from my impressions.[/citation]
No, they don't, but they DO have to build a new fab if they want to make a bigger wafer. The same wafers can be used for all fabrication processes, though.

[citation][nom]willard[/nom]Methinks the people calling out x86 don't really understand what it really is, just a set of opcodes. Different is not better by default.[/citation]
That, and a lot of people fail to understand some of the basic concepts of engineering. Most commenters who favor the replacement of x86 by ARM are under the illusion that you can get the "best of all worlds" in any design category.

Hell, most of them probably aren't aware of what ARM is beyond "it's used in all those low-powered gadgets" and "it's RISC." They operate under the impression that ARM is pure RISC (or that x86 post-P6 is pure CISC) and that RISC is somehow more efficient.

[citation][nom]alidan[/nom]a 300mm wafer has 282600 mm² of working area by my math (max possible, not what's actually used); a 450mm wafer has 635850 mm² of working area by my math (max possible again)[/citation]
You made a bit of a mistake there: you used the diameter rather than the radius for the wafers. Hence your numbers are about four times as high as they should be. The ratio of increase remains correct, though.
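To make the arithmetic concrete, here's a quick C sketch of the wafer-area calculation (just pi*r^2; nothing here comes from the article except the two standard wafer diameters):

[code]
#include <stdio.h>

/* Usable (gross) area of a round wafer is pi * r^2, where r is half the
   diameter. Plugging in the diameter by mistake inflates the area 4x,
   but the 4x cancels out of the 450mm/300mm ratio. */
int main(void) {
    const double PI = 3.14159265358979;
    double a300 = PI * (300.0 / 2) * (300.0 / 2);  /* ~70,686 mm^2  */
    double a450 = PI * (450.0 / 2) * (450.0 / 2);  /* ~159,043 mm^2 */
    printf("300mm wafer: %.0f mm^2\n", a300);
    printf("450mm wafer: %.0f mm^2\n", a450);
    printf("area ratio:  %.2f\n", a450 / a300);    /* 2.25 either way */
    return 0;
}
[/code]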
 
Guest
amdfreak said:
"AMD is just doing well. Better buying a system based on AMD platform then Intel. Why ?
1.) Cheaper cost of the processor and motherboard. At these kind of Ghz speeds and parallelism clock no longer plays a huge role because the bottleneck is the disk.
2.) Invest the difference gained from processor and motherboard into the SSD instead of HDD.
3.) AMD with SSD outperforms any Intel with HDD. For the same money you get more from AMD.
4.) AMD outsmokes any Intel graphics.
Go AMD."

That is a ridiculous post. First of all, you don't really get any savings, because a 2500K system beats AMD's best processors, so value-priced Intel systems are better performers and save you far more on energy than AMD systems. Intel processors are simply more efficient, so you save on the electric bill and you save time on the work you get done.

Again, since value-priced Intel CPUs go for around the same price as AMD CPUs, you save almost nothing going with AMD. You certainly wouldn't save enough to go with an SSD over an HDD. You would also lose a ton of space going to a 128GB SSD over a 2TB hard drive solution, so its usefulness is very limited, and it is still pricier.

Lastly, the AMD GPU is only better in high-end 3D gaming. For almost all applications (like 99% of them), you would see almost no difference between the two platforms GPU-wise. If you are playing high-end games, then yes, the AMD platform is better, but then again, you would have a dedicated card if you did that, negating any benefit of the AMD GPU. The Intel CPU/GPU performs better in encoding and decoding video as well, and does just fine with 1080p Blu-ray. The only real-world advantage of AMD is DX11 (still minimally used in the majority of apps), which Ivy Bridge addresses.

Buying AMD just isn't a good investment, I hate to say, unless you can get one of those super bargain-basement deals for a system you buy for your kids (because they don't need anything that does anything significant).
 

alidan

Splendid
Aug 5, 2009
[citation][nom]amdfreak[/nom]AMD is just doing well. Better to buy a system based on the AMD platform than Intel. ... AMD outsmokes any Intel graphics. Go AMD.[/citation]

intel vs amd: for all purposes, for the casual user it doesn't matter, so go with the cheaper Phenom II

for the gamer, it depends on the game and how anal they are about being the fastest. for most people, go Phenom II; for the speed freaks, go Intel i7; and if there's a Black Friday kind of deal, or you find a large rebate, go i5 or i7.

now for the people who make videos and encode, I'd suggest investing the money in an nvidia card and a CUDA-accelerated encoder. if that's too expensive, look at an i7

now on the topic of SSDs: for just browsing files, the CPU has outstripped the HDD probably since the P2 era, maybe earlier, and once a program is loaded most of the work is RAM- and CPU-based. but if you push your computer, an SSD is almost a must-have, at least as a boot drive. I wish I could tell you how good it is, but I went from XP, 3GB of RAM, and a heavily fragmented 1.5TB HDD (a system I pushed as hard as this one; I use 7.5GB of RAM a lot) to Win 7, a 120GB SSD, and 8GB of RAM, so I can't tell you which change made the impact... what I can say is I'm not waiting on folders to open any more, on the HDD or the SSD.

[citation][nom]nottheking[/nom]You made a bit of a mistake there: you used the diameter rather than the radius for the wafers. Hence your numbers are about four times as high as they should be. The ratio of increase remains correct, though.[/citation]

nice catch; I thought those numbers were high.
the numbers are 70650 and 158962 mm² respectively, and the ratio is the same... been awake a bit too long to really do math at 100%

with ARM, people are seeing that what they can do on a laptop is being done, or close to it, on a tablet with an ARM chip. I can see ARM being the solution for the majority of people, and ARM could probably brute-force the server space cheaper and cooler too; I really want to see how that battle turns out.

most people who know of x86 also know it's really, really old, and they don't think twice about the compatibility problems if everything were to switch to something... new. they also want 64-bit and associate x86 with 32-bit... I don't know this well enough, but is the x86 base 32-bit, and is what we use as 64-bit today a modified x86? I'm a bit confused on that; then again, I never looked it up much.
 

rpmrush

Distinguished
May 22, 2009
Anyone who says AMD is better, or that there's no reason to buy an Intel CPU over an AMD one... play Skyrim. Fall in love with it... and then I'll watch you ditch your Bulldozer 8-core or Phenom X6 for an i5 2500K. Don't get me wrong... I want AMD to put up a fight and see the top again; they just aren't on track to do so.
 

peevee

Distinguished
Dec 5, 2011
[citation][nom]alidan[/nom]is the x86 base 32-bit, and is what we use as 64-bit today a modified x86? I'm a bit confused on that[/citation]

x86 was modified with every new processor architecture.
It was initially 16-bit in the 8086 and 8088, with integer arithmetic only.
Then the 8087 added 80-bit floating point on a separate chip, later (in the 486) integrated into one.
Then the 80186 added a few new instructions.
Then the 80286 added a whole new protected mode.
Then the 80386 added 32-bit instructions and a new "flat" protected mode.
Then the 80486 added several new instructions and the integrated FP coprocessor.
Then the Pentium added several new instructions.
Then the Pentium MMX added a bunch of MMX (integer vector) instructions.
Then the Pentium Pro added several new instructions, but without MMX.
Then the Pentium II combined MMX and the Pentium Pro's new instructions.
Then the Pentium III added SSE (vector FP) registers and instructions (and AMD added 3DNow!).
The Pentium 4 added a whole bunch of new instructions with SSE2, and AMD added 64-bit variants of all the instructions. Later Pentium 4s added the 64-bit instructions (and SSE3), and AMD implemented SSEx.
Then Core 2 added SSSE3.
Then Penryn (later Core 2) added SSE4.1.
Then Nehalem (first-gen i*) added SSE4.2.
Then Sandy Bridge (2nd-gen i*) added AVX...

Now it is a whole lot of mess, and the decoder module that deals with it is big and power-hungry. No wonder Intel had to add an L0 cache holding already-decoded micro-ops to bypass that module, at least in tight loops.

ARM is not much better. It was simple and 32-bit to begin with, but it had no FP, and the code was long (RISC: only dedicated load/store instructions could access memory, as opposed to nearly every instruction on x86, and every instruction was 4 bytes long, so more and longer instructions were needed to do the same work, taking more memory and more cache). So they added Thumb, shorter versions of the instructions. Then FP instructions. Then came the whole Java craze, so they added Jazelle, a large subset of Java bytecode in hardware (later dropping the actual hardware support, so each Java bytecode now just generates an interrupt). Then they added Thumb-2. Then, optionally (!), they added NEON, their vector instructions to compete with SSE. And they are still not even 64-bit...
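You can watch that accretion from software: CPUID leaf 1 reports a feature bit for each of those x86 extension generations. A minimal sketch using GCC/Clang's <cpuid.h> (x86 only; the bit positions are the documented leaf-1 feature flags):

[code]
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang helper for the x86 CPUID instruction */

int main(void) {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }
    /* Each extension generation claimed another feature bit in leaf 1. */
    printf("MMX    %s\n", (edx >> 23) & 1 ? "yes" : "no");
    printf("SSE    %s\n", (edx >> 25) & 1 ? "yes" : "no");
    printf("SSE2   %s\n", (edx >> 26) & 1 ? "yes" : "no");
    printf("SSE3   %s\n", (ecx >>  0) & 1 ? "yes" : "no");
    printf("SSSE3  %s\n", (ecx >>  9) & 1 ? "yes" : "no");
    printf("SSE4.1 %s\n", (ecx >> 19) & 1 ? "yes" : "no");
    printf("SSE4.2 %s\n", (ecx >> 20) & 1 ? "yes" : "no");
    printf("AVX    %s\n", (ecx >> 28) & 1 ? "yes" : "no");
    return 0;
}
[/code]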
 

alidan

Splendid
Aug 5, 2009
[citation][nom]rpmrush[/nom]...play Skyrim. Fall in love with it... and then I'll watch you ditch your Bulldozer 8-core or Phenom X6 for an i5 2500K...[/citation]

I play it at 1920x1200 with... I don't know the graphics settings, but I set them lower so I'm guaranteed smooth gameplay no matter what; I do that in all games. and on my Phenom II 955 BE I see no reason to upgrade... and the 6-core Phenom tends to do worse in benchmarks than the 4-core.

now I'm interested: if you disable 4 of the 6 cores, will it perform better...
 

alidan

Splendid
Aug 5, 2009
[citation][nom]peevee[/nom]Now it is a whole lot of mess, and the decoder module that deals with it is big and power-hungry. ...[/citation]

from reading that I have a bit more understanding. it seems like x86 could stand to be thinned out a bit, and you could probably do it too, but you would kill legacy compatibility. you would need an emulator before that could happen, then.
 

iLLz

Distinguished
Nov 14, 2003
[citation][nom]eklerus[/nom]omg Intel look at the cpu die size in the 3rd picture it's too big for me ^^[/citation]

You sir, made me laugh. Thank you!
 

zanny

Distinguished
Jul 18, 2008
[citation][nom]nottheking[/nom]Wait, 14nm? That's a half-node; I wasn't aware Intel intended to skip 16nm for Broadwell and Skylake. ...[/citation]

They don't run on the same node roadmap as everyone else: they are going to 22nm while everyone else is at 28nm, the others will go from 28nm to 16nm while Intel goes to 14nm, etc. They only do it because they got in-house samples working at smaller gate lengths and went with the smaller chips rather than sticking to the industry roadmap.

I do have to congratulate Intel for building all their fab plants in the US. They are one of the only major manufacturing industries still here, and all I fear is how hell-bent our shitty Congress seems on driving them to leave.

Globalfoundries is centered in Germany but has some fab plants in New York, and it was the spinoff of AMD's manufacturing arm. However, Bulldozer and Llano are being fabbed by TSMC (just like everything else and their mother) because Globalfoundries has been horribly managed since it opened and hasn't kept pace in the nanometer race. Which is sad, but Intel is the only real major x86 fabricator in the US manufacturing-wise, so it makes me feel better about supporting their near-monopoly on consumer desktop CPUs.
 

madooo12

Distinguished
Dec 6, 2011
[citation][nom]neoverdugo[/nom]The only 2 things Intel needs to do are: 1) drop its price tag; 2) retire x86 and create a new architecture. Anything else?[/citation]
I think 90% of the instructions in x86 aren't ever used.
I'd like an open architecture like SPARC, but made by someone big who can make fast compilers for its platform, get GCC to support it, and so on.

how can we use closed architectures that only three companies are allowed to manufacture? (on the bright side, there won't be any Chinese clones) that way improvement will be slow and there will be no competition
 

madooo12

Distinguished
Dec 6, 2011
[citation][nom]Zanny[/nom]...However, Bulldozer and Llano are being fabbed by TSMC (just like everything else and their mother) because Globalfoundries has been horribly managed since it opened... Intel is the only real major x86 fabricator in the US...[/citation]
two things:

Bulldozer was manufactured at GloFo.
GloFo is the third-largest independent manufacturer of silicon chips (you called it not major).

for the guy you quoted: x86 shouldn't be replaced by ARM, but it should be replaced by something more powerful and efficient. as I said before, 90% of the instructions in x86 aren't used and another 7-8% are rarely used, so those can easily be emulated (a sketch of how that could work is below); then we'll have smaller dies, more cores, and better power efficiency.

worst of all, x86 is closed (read my previous comment)
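On the "rarely used instructions can be emulated" point, here is a minimal trap-and-emulate sketch for Linux/x86-64 with glibc (purely illustrative: ud2 stands in for a hypothetical dropped opcode, and the "emulation" is just a message):

[code]
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>

/* When the CPU hits an opcode it no longer implements, it raises #UD,
   which the OS delivers as SIGILL; a handler can emulate the instruction
   in software and resume execution after it. */
static void on_sigill(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)si;
    ucontext_t *uc = (ucontext_t *)ctx;
    puts("trapped an unimplemented opcode; emulating it in software");
    uc->uc_mcontext.gregs[REG_RIP] += 2;  /* skip the 2-byte ud2 */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, NULL);

    __asm__ volatile("ud2");              /* the "removed" instruction */
    puts("resumed as if hardware had executed it");
    return 0;
}
[/code]

The catch, of course, is that a trap is thousands of times slower than native execution, which is only acceptable if the instructions really are rare.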
 

madooo12

Distinguished
Dec 6, 2011
[citation][nom]willard[/nom]Why are people calling for an end to x86? What exactly is it that you think is so bad about the instruction set? Current development tools and operating system kernels are very mature. Changing the instruction set would require abandoning all that, and for what? Methinks the people calling out x86 don't really understand what it really is, just a set of opcodes. Different is not better by default.[/citation]
i think you don't know what x86 is: it's a closed architecture that only three companies can manufacture (until the AMD/VIA licenses end, after which only Intel will manufacture x86 chips).
most operating systems and such only require a little modification to run on a newer architecture; the Linux community can easily port Linux (it runs on a gazillion architectures and has companies like Google, Yahoo and many more backing development), and MS has billions to fund development.
development tools can easily be ported too, by the company that made the newer architecture.
x86 has been around for decades and is not as efficient as newer architectures.
did you know that most of the instructions in x86 aren't even used and should be removed?

another thing to show that a new architecture can be easily adopted:
architectures like PPC, x86-64, Itanium, ..... were made relatively recently and are now widely supported
 

rantoc

Distinguished
Dec 17, 2009
[citation][nom]DRosencraft[/nom]It is good that a company is willing to put in the time and cash for R&D, and better for the US that Intel is trying to add more work in the states. But I'll wait until they actually produce results from this, because money is not the only factor that goes into good design and producing a good product. They could just as easily be blowing a ton of money on a project that will ultimately fail.[/citation]

How many times has Intel built new factories for a new process, and how many times have they failed? I think no company in the tech world can match Intel's execution on process shrinks, and even with this new bigger wafer size I bet they will be on schedule.
 

billybobser

Distinguished
Aug 25, 2011
it doesn't really matter how AMD jiggles their architecture around. They get their chips made by a third party, and that third party probably has to read Intel's notes on wafer manufacturing.

I imagine AMD will just be hitting their stride on 32nm when Intel is pumping out 14nm.

I used to favour AMD (one step before fanboyism), but recent events have swayed me back to neutral. Their new CEO is an ass, the marketing is shoddy, and their price strategy would be on Intel's scale were it not for their weaker product. I would even say Intel is keeping AMD honest on price, and not vice versa.

Though I'll be stuck with AMD graphics cards due to Nvidia being 'marginally' overpriced.
 

danwat1234

Distinguished
Jun 13, 2008
When did Intel change the next lithography node from 16nm to 14nm? That is more than a 50% decrease in area from 22nm, since (14/22)² ≈ 0.40, and it would make 11nm less than a 50% area decrease from 14nm, since (11/14)² ≈ 0.62.
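A quick sketch of that area arithmetic (assuming the idealized (new/old)² scaling; real nodes don't shrink every feature uniformly):

[code]
#include <stdio.h>

/* A feature shrunk from node a to node b occupies (b/a)^2 of its old area;
   the fraction of area removed is therefore 1 - (b/a)^2. */
static double area_decrease(double a, double b) {
    return 1.0 - (b / a) * (b / a);
}

int main(void) {
    printf("22nm -> 14nm: %.0f%% area decrease\n",
           100 * area_decrease(22, 14));  /* ~60%, more than a full node */
    printf("14nm -> 11nm: %.0f%% area decrease\n",
           100 * area_decrease(14, 11));  /* ~38%, less than a full node */
    return 0;
}
[/code]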
 

__-_-_-__

Distinguished
Feb 19, 2009
[citation][nom]eddieroolz[/nom]Does Intel build a new fab for every process node? It certainly seems that way from my impressions.[/citation]
Yes. It's much cheaper to build a new fab from scratch than to convert an existing one to a new process.

[citation][nom]alidan[/nom]a 300mm wafer has 282600 mm² of working area by my math (max possible, not what's actually used)[/citation]
that's why your calculations are wrong: a wafer is not a square, but a chip is. also, a wafer always has some defects, so only a certain number of chips are good for use - some with disabled features and/or reduced capacities, and some won't work at all. that's why there is always an optimum wafer size (which depends on many factors: materials used, process used, chip specs, fab specs, and many others), and that's why wafers aren't just made huge.
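For the yield point, here is a rough sketch using a common gross-die approximation plus a toy Poisson yield model (the die size and defect density are made-up illustrative numbers, not data from any fab):

[code]
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = 3.14159265358979;
    double wafer_d = 300.0;  /* wafer diameter, mm */
    double die     = 160.0;  /* die area, mm^2 (assumed) */
    double d0      = 0.002;  /* killer defects per mm^2 (assumed) */

    /* Gross dies: wafer area / die area, minus a term for the partial
       dies lost around the circular edge. */
    double gross = PI * (wafer_d / 2) * (wafer_d / 2) / die
                 - PI * wafer_d / sqrt(2.0 * die);

    /* Poisson yield model: probability a die catches zero killer defects. */
    double yield = exp(-die * d0);

    printf("gross dies: %.0f, good dies: %.0f (yield %.0f%%)\n",
           gross, gross * yield, 100.0 * yield);
    return 0;
}
[/code]

Bigger dies lose more area at the edge and catch more defects, which is exactly why there's an optimum and why wafers and dies aren't simply made huge.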
 

robochump

Distinguished
Sep 16, 2010
So what happens to the old fab plants? It's a little surprising that older fab plants can't be retrofitted for new chip manufacturing. Either way, I can't wait for 14nm CPUs to come out!
 

__-_-_-__

Distinguished
Feb 19, 2009
[citation][nom]robochump[/nom]So what happens to the old fab plants? ...[/citation]
It's not surprising. Read a bit about how CPUs are made, and about wafer production and so on, and you'll understand immediately. You can change the chips themselves - producing 14nm GPUs or CPUs, for example, is almost the same; they use the same process. But a new process requires a new fab.

What happens? People get fired (sometimes) and fabs are phased out. The facilities are usually sold, with or without equipment. The equipment is still good for many things; most chips don't need to be high-end.
Have you looked closely at, say, a motherboard? Have you seen how many little chips it has? All of those have to be produced too, and they don't require expensive technology - they can be made on 130nm in 10-year-old fabs.
Just look around and you'll see electronics almost everywhere: from a simple TV remote to a freezer or a microwave, cell phones, digital wristwatches, calculators, RFID chips, doorbells, cars, etc.
Can you imagine needing a high-end 14nm hexacore CPU for a simple monochrome calculator? Even a 1MHz CPU would do. But all of those have to be produced too.
 

zloginet

Distinguished
Feb 16, 2008
[citation][nom]rpmrush[/nom]...play Skyrim... and then I'll watch you ditch your Bulldozer 8-core or Phenom X6 for an i5 2500K...[/citation]

Sorry, won't be ditching my system... No need to...
 

the_brute

Distinguished
Feb 2, 2009
[citation][nom]willard[/nom]Methinks the people calling out x86 don't really understand what it really is, just a set of opcodes. Different is not better by default.[/citation]
Overhead. It's VERY OLD and very bloated: all of the old is still there, with the new added on top.
There will be a day when x86 gets replaced, but as someone else said, that won't happen for a while. Yes, for now it's good enough, and Intel is going to put the extra $$ into fighting ARM rather than into a new material, even though silicon is approaching its smallest possible feature size very fast. So I say that soon after Intel wins those battles, they will finally update/replace x86. That will be a process taking many years, and yes, many people will be pissed off by the move, but it will happen.
 