Intel's Future Chips: News, Rumours & Reviews


This is not part of the 'why intel couples top igpu with top cpu' discussion (useless hint - binning, ofc; top bins get the best of everything).
I have been considering this lately.
If one were to build an entry-level number cruncher, the FX-8350's competition will win solely because of the FX's lack of an iGPU. It's worse if you're considering deploying office PCs (or you're an OEM building them) and have to choose between an FX and an i5. The second issue would be the higher power cost, which adds to mobo design/choice and PSU cost, resulting in higher assembly cost and complexity for mass production; cooler cost/complexity would also add up (a rough power-cost sketch follows below).
Btw, APUs won't have this issue, especially the 65W ones.
Maybe there are ways to make the FX the champion; I haven't figured those out yet.
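A back-of-envelope sketch of that power-cost point, using TDP as a stand-in for average draw and an assumed electricity rate and office duty cycle (all of these are illustrative assumptions, not measured figures):

```python
# Back-of-envelope office-fleet energy cost: FX-8350 (125 W TDP) vs i5-3570 (77 W TDP).
# Assumptions for illustration only: chips average their TDP while active,
# $0.12/kWh electricity, 8 h/day for 250 days/year, a hypothetical 100-PC deployment.
FX_WATTS = 125
I5_WATTS = 77
RATE_PER_KWH = 0.12        # assumed price, USD
HOURS_PER_YEAR = 8 * 250   # assumed office duty cycle
FLEET_SIZE = 100           # hypothetical deployment

def annual_cost(watts: float) -> float:
    """Yearly electricity cost in USD for one PC at the given average CPU draw."""
    return watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH

delta = annual_cost(FX_WATTS) - annual_cost(I5_WATTS)
print(f"extra cost per PC per year:   ${delta:.2f}")                      # ~$11.52
print(f"extra cost for {FLEET_SIZE} PCs per year: ${delta * FLEET_SIZE:.2f}")  # ~$1152
```

Under those assumptions the iGPU-less FX costs roughly $10-12 more per seat per year in electricity alone, before the beefier PSU and cooler are counted.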
 


OEMs mostly purchase cheaper parts, while custom builds more often target higher price points.
 


FX was designed as an enthusiast part; general office use was never envisaged. Graphics design, content creation and number crunching are the more accepted usages, where either a lowly $40 GPU for display output or a professional GPU was needed. The A-Series is more than adequate for office and home usage.



We basically do custom PCs, so OEMs are not used, but that is true: OEMs also use rebrands. The FX8130 was initially an OEM part, and the HD8950 is basically a rebadged HD7950 for OEMs, still based on the GCN 1.0 Southern Islands architecture and OEM-only.


 
Off topic:

How are the sales and user reviews of the Nexus 10? When its specs were released, I thought it would set the world on fire, but since then the interwebz has been pretty silent about it.
 



From SemiAccurate:
All of the 32nm, 28nm, and 20nm processes have been announced previously, although a few suffixes have been added. The 20nm process is about what you expected, the next step in planar transistors shrunk to 20nm. There are no FinFETs like Intel is using at 22nm, but GloFo and the CP partners are moving to gate last this time around. This entire process is pretty evolutionary.

That changes quite a bit on the 14XM process; it is most definitely not evolutionary on the transistor front, but the rest is. Because of the way the last one was announced, we didn't bother to write it up at the time, but it is worth looking at. Unlike what the name implies, 14XM is not really a full shrink of the 20LPM node to 14nm; it is only halfway there. 14XM uses the middle and back end of line (MEOL and BEOL) from the 20nm node coupled to the new 14nm FinFETs.

GlobalFoundries says this effective reuse of the entirety of the non-transistor bits of 20LPM will pull in 14XM's schedule, lower cost presumably through the reuse of existing equipment, and lower customer design costs because most of the software for 20nm just carries over.

Here's a quote from Idontcare over at AnandTech:

Extremely far apart. Just as AMD and IBM were completely caught off-guard by Intel's aggressive development and adoption of HKMG into production at 45nm, they were even more caught off-guard by the development and adoption of FinFet into production at 22nm.

So what you saw, and continue to see, is IBM and GloFo operating in crisis mode, rushing under-developed process technologies through the R&D pipeline and making ill-advised tradeoffs in the process (bad 32nm dielectric decisions, gate first decision, 28nm disaster, etc).

And they are continuing that tradition with FinFET and 14nm... rushing an underperforming FinFET product (it can only manage enough Idrive to power mobile devices without burning itself up; if they try to power it with enough current and voltage to hit the GHz speeds needed for CPUs and GPUs, then it dies very quickly) to market for 20nm, but re-labeling it a 14nm-XM product because they can't figure out how to rush the 14nm BEOL (metal wiring) to market at the same time.

The gap between Intel and GloFo continues to grow, we see it in their limited release of Finfet for 14nm (mobile only, not high performance) and the lack of scaling in the BEOL. GloFo's 14nm-XM customers will be ill-equipped to field cost or performance competitive parts if those customers are competing with Intel or high-performance customers of TSMC.

Even though TSMC is doing the same shenanigans with the BEOL not shrinking to 16nm, at least they do intend to field FinFET transistors that are robust enough to function (and survive) in the higher-voltage, higher-current environment that comes with the MPU version of their 16nm node.

It is difficult to see a silver lining in GloFo's looming dark clouds, TBH. Their technology roadmap is not competitive even if they manage to pull it off without delays like those 32nm and 28nm experienced.
 
Well, there's no arguing that Intel has top-notch fab capabilities.

There's no doubt about it. On a given node, saying GF or TSMC have three quarters of Intel's ramp is being generous, IMO.

But one thing got my attention: why would 28nm be "disastrous"? To me, that node has been good so far; a bit over-subscribed, but with good ramp numbers... AMD, Qualcomm and another big one had good 28nm production from the start, IIRC. I could be wrong on that last part though.

Cheers!
 


Disastrous for GloFo because their 28nm process was delayed for so long. They failed to ramp quickly enough to earn a significant number of design wins and lost all of the wafer mark-up that comes from having a leading-edge process.
 


He's referring to GloFo (and IBM) only, because they lost out to TSMC.
 

Their recently published roadmap says so. I didn't quite understand the acronyms, but it looked like their 28nm process is the one for low-power SoCs, not the one for high-performance CPUs. I could be wrong about this.
TSMC's process is not optimized for high-performance CPUs either, AFAIK. Their 28nm process has options for low-power SoCs, ULP SoCs and GPUs... I think.
 


And those GPUs run at around 1GHz.
That is a far cry from the ~4GHz desktop CPUs.
 


But GPUs have hundreds of cores; that's a far cry from the four most CPUs have.

Totally different architectures: Big cores versus many small cores. They excel in different types of tasks.
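As a very rough illustration of that "many slow cores vs a few fast cores" point, here is some peak-rate arithmetic only; the per-clock FLOP figures are assumptions for illustration, so treat it as a sketch rather than a benchmark:

```python
# Crude peak-throughput comparison: a few fast CPU cores vs many slow GPU shaders.
# Per-clock FLOP figures are assumptions (e.g. 8 FLOPs/clock per CPU core with AVX,
# 2 FLOPs/clock per shader via fused multiply-add), chosen only to show the scale.
def peak_gflops(cores: int, clock_ghz: float, flops_per_clock: int) -> float:
    """Theoretical peak GFLOPS = cores * clock (GHz) * FLOPs per clock."""
    return cores * clock_ghz * flops_per_clock

cpu = peak_gflops(cores=4,    clock_ghz=4.0, flops_per_clock=8)   # ~4 GHz quad-core
gpu = peak_gflops(cores=2000, clock_ghz=1.0, flops_per_clock=2)   # ~1 GHz, 2000 shaders

print(f"CPU peak: {cpu:.0f} GFLOPS")   # 128 GFLOPS
print(f"GPU peak: {gpu:.0f} GFLOPS")   # 4000 GFLOPS
```

The GPU wins on raw parallel throughput by a wide margin, but only on workloads that split across thousands of threads; the few fast CPU cores win wherever the work is serial or branchy.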
 


Yes, but the 3-4 GHz range requires high-performance VLSI design tools and a fabrication process to match, while ~1 GHz requires a completely different process and potentially a bulk design methodology. A highly complex chip like a 2000-shader GPU running at a mere 1 GHz may not require a high-performance process.

Granted I don't work for TSMC, NVidia or AMD, so I can't say with any amount of certainty.
 


Intel's CPU lineup is very strange to me.

The only CPU below $200 I would ever consider would be something like a Pentium G2020, which I can get for $59 to build a useful cheap 2nd computer. Rather than buy a $130 i3, I would just get the i5-3570K for $230 and have an absolute CPU powerhouse at my disposal.

Nothing else in their lineup between those two processors, or even the Celerons for $45, makes any sense at all to me (rough value math sketched below).
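For what it's worth, a crude dollars-per-(cores × GHz) comparison makes the same point. The prices are the ones quoted above; the i3-3220 specs are my assumption for "the $130 i3", so this is a back-of-envelope sketch, not a benchmark:

```python
# Crude value metric: price divided by (cores x base clock). Not a real benchmark,
# just an illustration of why the middle of the lineup looks odd.
# Prices are the ones quoted in the post; the i3-3220 is an assumed example of a $130 i3.
chips = {
    "Pentium G2020": {"price": 59,  "cores": 2, "ghz": 2.9},
    "i3-3220":       {"price": 130, "cores": 2, "ghz": 3.3},
    "i5-3570K":      {"price": 230, "cores": 4, "ghz": 3.4},
}

for name, c in chips.items():
    core_ghz = c["cores"] * c["ghz"]
    print(f"{name}: ${c['price'] / core_ghz:.2f} per core-GHz")
# Pentium G2020: ~$10.2, i3-3220: ~$19.7, i5-3570K: ~$16.9
```

By that (admittedly crude) metric, the cheap Pentium and the i5 both look like better value than the i3 sitting between them.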
 

I'm sure those in-between SKUs are most often gobbled up by OEMs rather than bought directly by end customers. They are likely getting bulk-purchase discounts on certain chips, and are probably trying to hit more specific thermal/power/feature requirements than the average Joe cares about.

The majority of end customers who buy silicon directly are going to be focused on 3-4 SKUs at most.
 