A First Look at Intel's 14nm Fab 42 Manufacturing Facility



My thoughts when I was reading this.
 

alidan

Splendid
Aug 5, 2009
5,303
0
25,780
[citation][nom]__-_-_-__[/nom]Yes. It's much cheaper to build a new fab from scratch than to convert one to a new process. That's why your calculations are wrong. A wafer is not a square; a chip is square. Also, a wafer always has some defects, so only a certain number of chips are good for use: some end up with disabled features and/or reduced capacity, and some won't work at all. That's why there's always an optimum wafer size (which depends on many factors, like the materials used, the process, the chip spec, the fab specs, and many others), and why wafers aren't just huge.[/citation]

I'm doing basic math; it focuses on the potential yield, not the actual yield.

The fail rate for a manufacturing process should be less than 25%, and for wafer space lost to the circular shape, add another 10-15%. The only reason I don't include those is that I have no idea what the real numbers are.

If you want a number closer to the real one, let's say $120 at best for the CPU. If I knew the area of a current SSD chip, I would go into the harder math, but since I'm basing the current math on the cost of a 34nm SSD, the wafer being a circle and the failure rate are already part of the calculation.
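If you want to play with the numbers yourself, here's a rough back-of-the-envelope script. The wafer cost, die size, and yield are placeholders made up just to show the calculation, not real figures; the edge term is the standard dies-per-wafer approximation.

```python
import math

# Back-of-the-envelope dies-per-wafer and cost-per-die estimate.
# All inputs below are made-up placeholders -- swap in real numbers if you have them.
wafer_cost = 5000.0     # $ per processed 300mm wafer (guess)
wafer_diameter = 300.0  # mm
die_area = 160.0        # mm^2 per chip (guess)
defect_yield = 0.75     # assume ~25% of dies fail, per the post above

wafer_area = math.pi * (wafer_diameter / 2.0) ** 2

# Standard approximation: gross dies = wafer area / die area, minus the
# partial dies lost around the circular edge of the wafer.
gross_dies = wafer_area / die_area - math.pi * wafer_diameter / math.sqrt(2.0 * die_area)
good_dies = gross_dies * defect_yield

print(f"gross dies per wafer:      {gross_dies:.0f}")
print(f"good dies per wafer:       {good_dies:.0f}")
print(f"silicon cost per good die: ${wafer_cost / good_dies:.2f}")
```

With those made-up inputs, the edge-loss term eats roughly 12% of the gross dies, which lines up with the 10-15% guess above.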
 

masterofevil22

Distinguished
May 13, 2010
229
0
18,690
AMD will die..

Intel will be making faster and better chips for cheaper and selling them for more. AMD, I'm sorry, but you are screwed.

BTW, I currently run a six-core Phenom II at 4GHz+, so I'm not an "Intel fanboy", just stating the facts.
 
2 pages of comments and nobody said...

Intel has to build it to make room for the AMD employees! (Clonazepam dives under his desk to avoid the fodder)

Ok ok... obviously I don't really believe that, and only some readers might enjoy my humor.

Anyway, alidan, I appreciate your recent comments that include some math and figures. Where do you get the information you do have for the calculations, like the price of a wafer? Also, here's a little trick I like to use sometimes when posting anywhere (comments, forums, etc.): I type it in Notepad first and proofread it before posting. It lets you see the entire body of the text at once and helps me a lot. :D

At some point, is Intel going to feel pressure to throw more cash at AMD to avoid having to answer questions in front of a government committee?

(I use Intel and AMD products and try to buy the product that best meets the criteria of the build. My daughter's Llano-based system is still in the paint and braiding phase.)
 

Kewlx25

Distinguished
Since there are a lot of people talking about the x86 instruction set, I figured I'd chime in.

I think they should do something similar to what they did with x64, but stricter. Put the new instruction features that everyone wants into a new "mode". If an app wants to run in that mode, it can't use the old instructions. Each application can run in its own mode, so there won't be an issue, just like x64.

x64 apps and x86 apps can run side-by-side.

Eventually, the older instructions would phase out, since anyone who wants to use the newer features of a CPU will have to switch to the new mode.
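In practice, opting into newer instructions already works roughly this way: the application (or its runtime) checks what the CPU reports and picks a code path. A minimal sketch, assuming Linux, where the kernel exposes the feature flags in /proc/cpuinfo; the printed "paths" are hypothetical stand-ins for real code:

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the Linux kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()

# Pick a code path the way an app might "opt into a mode":
if "avx2" in flags:
    print("using the AVX2 build of the hot loop")
elif "sse4_2" in flags:
    print("falling back to the SSE4.2 path")
else:
    print("sticking to the plain x86 path")
```

The difference in what's proposed above is that a whole application would commit to one mode up front instead of branching per feature.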
 
[citation][nom]Kewlx25[/nom]Since there's a lot of people talking about the x86 instruction set, I figured I'd pipe in....[/citation]

Ahhh... wth... I will too.

A little over a decade ago, Apple was developing OS X. From the beginning, they designed it to eventually transition over to Intel's x86 architecture, though they didn't make that public knowledge initially. In that time period, before the launch of OS X, there existed some very special prototype motherboards that contained, on a single board, both the old architecture (PowerPC, a RISC design) and the new one (Intel x86). So over ten years ago, a system existed that could utilize either one, though I cannot confirm whether it was both at the same time or one or the other. I would heavily lean toward having to choose, prior to boot, via a DIP switch, which CPU, memory controller, etc. to use.

Having said that, we could see 3D processes that stack layers on a single chip to support x86, RISC-type, newer ARM, etc. all at once, since they had them co-existing on a single motherboard so long ago.

Wouldn't it be cool if AMD "fixed" single-threaded performance with a minimum of two layers: one optimized for single-threaded performance and one multi-core layer for today's and tomorrow's multi-threaded apps? Throw in a third layer that puts the other two to sleep to enable low-power, simple media-consumption performance similar to a smartphone's or tablet's capabilities?
 

Kewlx25

Distinguished
[citation][nom]Clonazepam[/nom]Having said that, we could see 3D processes that incorporate stacked layers on a single chip that could support both x86 / RISC-type / new ARM / etc all at once since they had them co-existing on a single motherboard so long ago.Wouldn't it be cool if AMD "fixed" single thread performance by having a minimum of 2 layers, one optimized for single threaded performance, and one multi-core layer for today's and the future's multi-threaded apps? Throw in a 3rd layer that puts the other 2 to sleep to enable low power simple media consumption type performance similar to a smartphone's / tablet's capabilities?[/citation]

We're moving towards that heterogeneous design already. AMD Fusion is part of the first step.

At some point in the future, we will have a regular CPU with relatively few threads but stronger single-threaded performance, another CPU with many lower-performance cores where the sheer numbers make up for it (think Larrabee), and GPU-style units for heavy parallel processing.

The problem is designing a simple way to take advantage of these systems, as throughput and latency will swing all over the place depending on which computational unit you use. We need a new framework to help programmers. AMD, Intel, Microsoft, and others are all working on this problem.

Which instruction set is used doesn't really matter, as the instruction set has little to do with how the CPU processes data internally; it's just an abstraction layer.
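To make that scheduling problem concrete, here's a toy sketch of the call such a framework has to make: route each job to the unit whose latency/throughput trade-off fits it. The unit names and cost numbers are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ComputeUnit:
    name: str
    launch_latency_ms: float  # fixed cost to hand work to this unit (made up)
    items_per_ms: float       # sustained throughput once running (made up)

    def estimated_time_ms(self, n_items: int) -> float:
        return self.launch_latency_ms + n_items / self.items_per_ms

# Hypothetical units: a few strong cores, many weak cores, and a GPU-like array.
UNITS = [
    ComputeUnit("big cores",   launch_latency_ms=0.01, items_per_ms=50),
    ComputeUnit("small cores", launch_latency_ms=0.5,  items_per_ms=200),
    ComputeUnit("gpu array",   launch_latency_ms=2.0,  items_per_ms=5000),
]

def pick_unit(n_items: int) -> ComputeUnit:
    """Route the job to whichever unit is expected to finish it soonest."""
    return min(UNITS, key=lambda u: u.estimated_time_ms(n_items))

for n in (10, 200, 100_000):
    unit = pick_unit(n)
    print(f"{n:>8} items -> {unit.name} (~{unit.estimated_time_ms(n):.2f} ms)")
```

With these made-up costs, tiny jobs stay on the fast cores, mid-sized ones go to the many small cores, and big batches go to the GPU-style array; a real framework has to make that call automatically across wildly different workloads, which is what makes it hard.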
 

Well, they only have one more year, which is Piledriver, and it's only comparable to Intel's first-gen Core series CPUs. Looks like if you are going to use their CPUs for life, you will be stuck on Piledriver while we move on to Haswell and who knows what. Stop trolling and realize the truth.
 

GreaseMonkey_62

Distinguished
Jul 3, 2009
521
0
18,980
 

Guest

Guest
Actually, a lot of semiconductor manufacturers besides just Intel manufacture in the US: IBM, TI, foreign manufacturers like Samsung and ST, and even Taiwanese foundries like TSMC. Semiconductor manufacturing isn't exactly the same as low-level assembly of consumer electronics. It's not exactly easy, or secure, to send it to areas where the workforce is unskilled and there is little in the way of IP protection.
 