Is Intel Doomed? (Plus random stuff): Part 2

http://www.globalfoundries.com/technology/tech_elements.aspx

GLOBALFOUNDRIES is taking a leadership position in the foundry industry with our introduction of HKMG at the 32nm node. We are seeing performance improvements of up to 50 percent in the 32nm generation at the same leakage levels of the 45nm generation.

I'm sure there must be some truth in that. I mean, if they've already gained two new customers in STM and Qualcomm, those companies presumably asked to see some evidence for these claims rather than blindly accepting GF's word. :)

 

Yeah, 50%... the last time anything was claimed to be near a 50% performance improvement, it was really 40% and never went that high...
 
Intel's fabs are about 3-4 years ahead of anyone else in the industry. AMD has to make up for that with arch design. Intel is popping out 22nm next year. Should be a fun year.

Intel's "entry" level CPUs in 2011 are going to be quadcore hyper-threaded with 128K L1, 1MB L2, 16MB L3, can do 2 Double-precision FP per cycle, 8 per cycle with SSE and AVX which is suppose to be double SSE so 16 DP FP/cycle.

These will be low end. 6 and 8 core CPUs are going to be mid/high end.
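
Back-of-the-envelope, taking the 16 DP FLOPS/cycle AVX figure above at face value and assuming a purely hypothetical 3 GHz clock (my placeholder, not an announced spec):

cores = 4
clock_ghz = 3.0               # placeholder assumption, not an announced spec
dp_per_cycle_avx = 16         # the "double SSE" figure quoted above
peak_gflops = cores * clock_ghz * dp_per_cycle_avx
print(f"Hypothetical peak: {peak_gflops:.0f} DP GFLOPS")   # 4 * 3.0 * 16 = 192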
 

ATi has traditionally been good when it came to pricing. I'm not sure this is an AMD strategy as the folks who came up with the strategy for the 4850/4870 were ex-ATi.

AMD traditionally charged less than Intel, but AMD did bump up their pricing in the K8 days, and with the Athlon 64 X2 they pushed prices way, way up (I remember paying over $600 CAD for an AMD Athlon 64 X2 4200+). It was the Core 2 Duo series that brought pricing back down to a more reasonable level.

Intel has traditionally charged more, I'll give you that, but since Otellini took charge they've had some very solid pricing segments (they still have a $1000 CPU, but it's marketed towards those who would pay the premium for the extra perks).

Have a look below (E6600 pricing vs. A64 X2, both being the middle-of-the-road part from each manufacturer):
[image: A64X2.jpg (Athlon 64 X2 pricing)]

[image: E6600.jpg (E6600 pricing)]
 
cuz they had better marketing than AMD

seriously, no joke

i always thought AMD was a second-rate, crappy brand

until i came here and heard Jenny ramble about AMD being so good, i never looked at AMD seriously
 

And amazingly Jenny still insists that Intel isn't perceived as the premier CPU maker by the masses.

I think she got caught up in her own obfuscation and figured the best way out was to just lie.
 


I would say ignorant of the fact that her opinion is just hers, and that there is a world outside of gaming.

But she has gotten worse since the ban thingy went on.

 
http://www.globalfoundries.com/technology/tech_elements.aspx

GLOBALFOUNDRIES is taking a leadership position in the foundry industry with our introduction of HKMG at the 32nm node. We are seeing performance improvements of up to 50 percent in the 32nm generation at the same leakage levels of the 45nm generation.

I'm sure there must be some truth in that. I mean, if they've already gained two new customers in STM and Qualcomm, those companies presumably asked to see some evidence for these claims rather than blindly accepting GF's word. :)

Interesting; however, the IEDM article I linked to in another thread shows Intel's 32nm gate-last approach as significantly better. Hmm, who to believe: the company's own PR sheet written by the marketing dept. (or IBM), or an independent engineering news site?

Also, your link shows this:

The "Gate Last" approach to HKMG is costly and requires a number of additional processing steps. GLOBALFOUNDRIES has chosen to implement a "Gate First" approach because it is simpler and more scalable to future generations. The process flow is very similar to what was used for previous technology generations. The "Gate First" maximizes power efficiency and transistor scaling while minimizing die size and design complexity when compared to the alternative "Gate Last" approach.

Now why is GF pressuring IBM to abandon the gate-first approach, esp. on 22nm and later nodes, if the above is true? My guess is because the above was written by IBM or GF's marketing dept.
 
The problem, it appears, is stabilizing it for consistency. That isn't mentioned on the page, but it was in your links.
If they've figured that part out, everything else they're saying is true, and this may possibly be the better approach.
I tend to think of it this way: gate-first is the right way if the problems get worked out, since it's easier for clients. And if it turns out to be superior, Intel's second gen may be more a hindrance than a success; by that I mean Intel has already chosen its path and won't abandon it easily. If it's inferior, only time will tell.
So, if this works out, yes, it's basically IBM's word against Intel's.
If not, the switch to gate-last will happen, and clients will just have to adapt their designs accordingly.
 


The problem with Jenny's GF link is that it is quite short on useful facts, such as exactly what "performance improvements" are "up to 50%". Interestingly enough, "50%" is about the die-area reduction (the same transistors in roughly half the area) you'd expect from a full node shrink. If that's the "performance improvement" being referred to, that's pretty lame. So that's why I think it was either written by the marketing dept. or IBM, who developed the process.
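
For reference, the ideal-scaling arithmetic (a sketch that assumes the node names track the actual linear shrink, which in practice they only approximate):

linear = 32 / 45              # ~0.71x per dimension on an ideal shrink
area = linear ** 2            # ~0.51x, i.e. roughly a 50% area reduction
density = 1 / area            # ~1.98x transistors per unit area
print(f"area: {area:.2f}x, density: {density:.2f}x")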

As stated in the IEDM article, the gate-last approach results in a large amount of strain for the p-channel FETs, which is why Intel's 32nm p-channel transistors are nearly as good as the n-channel devices in drive current per unit area. Of course, the disadvantage is the extra steps required. And it also avoids having to find just the right hafnium blend to avoid problems in the high-temp anneal.

Gate-first is the traditional way, not necessarily the 'right way' - the advantage being that it results in self-aligned gates. It would be quite unfortunate if the gate area had a gap so that there was no gate over a portion of the channel adjacent to the source or drain, mainly because the transistor would no longer work. That's why Intel's gate-last approach actually uses a sacrificial or temporary gate for self-alignment, instead of relying on mask positioning to align (probably impossible to reliably accomplish much below the micron level, let alone nanometers). The temp gate is etched away prior to the final gate placement.

As for STM & Qualcomm, my bet is that they went with the foundry that offers the lowest price or biggest kickback - sorta like what the AMD fanbois accuse Intel of doing 😀.
 


What makes me laugh is that she had some good points.

Some great points, in fact.

But she lost it. Totally lost control over the whole plot.



Reminds me of Thunderman's rants; in fact, she/he types in exactly the same way.



I'm not dissing her, but she did veer into the incomprehensible in the end. A shame, as she really started off well.
 

Well, from what I'm hearing, people won't miss it, or rather can't miss it, and I think they've solved their heat issues by giving it a little more space for heat dispersion:
[image: nv30.jpg]
 
Fermi had better be better than the 5870, and much better at that, since it's 50% larger, though it won't be 50% better.
I'm more interested in the GPGPU side of things: its use of its cache, and what apps it can run, reducing work on CPUs.
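
Rough numbers on that, using the 50%-larger figure and a purely hypothetical 1.3x performance gain (my illustration, not a benchmark result):

area_ratio = 1.5              # "50% larger", per the post above
perf_ratio = 1.3              # hypothetical gain, picked for illustration
print(f"perf per unit area vs 5870: {perf_ratio / area_ratio:.2f}x")  # ~0.87x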
 