Intel Has Banner Quarter, Announces 10nm Systems Available In 2019, Stock Tumbles

Intel's faster ramp of 10nm data center products is going to be a critical component in fending off AMD's 7nm EPYC Rome processors that arrive early next year. Even with a shorter gap between Intel's desktop and data center processors, AMD has a relatively large window it can exploit with 7nm processors.
This is all assuming, too, that Intel's first-generation 10nm is actually tangibly better than the now extremely mature 14nm process. We saw a few years ago that Intel's first commercially available 14nm CPUs were inferior in raw clock speeds and power efficiency to products on the mature 22nm node. Even a year later the 14nm Broadwell-E chips couldn't clock as high as the 22nm Haswell-E CPUs with matching core counts. That latter example likely has more to do with heat dissipation and density than raw silicon performance/efficiency. Nevertheless, the point remains that it wasn't until Skylake that we actually saw meaningful node-related efficiency and clock speed improvements in consumer products.

There seems to be this assumption (not necessarily from Toms - more referring to the comment-sphere here) that when Intel finally releases 10nm we'll see meaningful performance improvements and they'll be ready to "compete" again. That's far from a given though! The 14nm process is extremely mature now and significantly better than it was at release. If 10nm follows a similar trajectory, we shouldn't be at all surprised if we have to wait until second-generation 10nm before we see parts with performance characteristics equivalent to (let alone better than!) the now-mature 14nm products.

Of course, the 7nm process AMD is relying on is also unknown at this point, but the fabs are spruiking sizeable performance gains with 7nm. Interesting times for sure.
 

AgentLozen

Distinguished
Good analysis, rhysiam.



Broadwell wasn't super duper, but it was followed by Skylake just a few months later. You may be right that we'll need to wait for a second generation of 10nm chips to see a real benefit, but that may not be far behind the first generation's launch.
 

InvalidError

Titan
Moderator

Broadwell wasn't even meant to be a desktop part. It wasn't until after outrage broke out in enthusiast circles about Broadwell being a portable- and embedded-only product (a middle finger to 90-series board owners who were expecting something more than Haswell Refresh to put in there) that Intel announced a very limited selection of socketed variants, with nearly nonexistent availability through most of their market life.

Cannon Lake appears to be in a very similar situation: delayed multiple times, shipping to portable and embedded device manufacturers over a year ahead of any probable consumer launch, and with a hypothetical launch date on a collision course with the next-gen products beyond it.

If AMD's foundry partners meet performance targets with 7nm and Ryzen 3000, things are going to get real awkward for Intel.
 
"Recently leaked roadmaps imply that Intel won't have 10nm server products on the market until mid-2020"

The biggest issue for Intel is that by then AMD is supposed to be on the Zen 3 core. AMD has stated that the Rome processor was designed to compete with Intel's Ice Lake (a new architecture with higher IPC), not Coffee Lake or Cannon Lake (the current architecture). That means the successor to Rome will be going up against Ice Lake, and AMD is shooting for 10-15% performance increases each generation. We have already seen that with Zen+ AMD was able to increase performance by about 10% via a 3% IPC boost plus clock and other enhancements. That 3% IPC boost means Zen+ trails Intel by only 2-7% in IPC, and Zen 2 is looking at a 10-15% IPC boost over Zen+. Needless to say, this is going to be an interesting couple of years for the consumer.
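To put rough numbers on that compounding, here's a quick Python sketch; the percentages are the ballpark figures above treated as assumptions, not official numbers:

```python
# Rough per-core performance model: performance ~ IPC x clock.
# All percentages are the ballpark figures discussed above, not official numbers.

def perf(ipc_gain, clock_gain, base=1.0):
    """Compound an IPC gain and a clock gain into a single performance factor."""
    return base * (1 + ipc_gain) * (1 + clock_gain)

zen = 1.0                               # normalize first-gen Zen to 1.0
zen_plus = perf(0.03, 0.07, zen)        # ~3% IPC + ~7% clocks ~= ~10% overall
zen2_low = perf(0.10, 0.00, zen_plus)   # assumed 10% IPC uplift, flat clocks
zen2_high = perf(0.15, 0.05, zen_plus)  # assumed 15% IPC uplift + 5% clocks

print(f"Zen+  vs Zen : {zen_plus / zen:.2f}x")
print(f"Zen 2 vs Zen : {zen2_low / zen:.2f}x to {zen2_high / zen:.2f}x")
```

The point is simply that modest per-generation IPC and clock gains multiply; two generations of 10-15% adds up fast.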
 

Patrick_1966

Reputable
As with the 14nm cores, it will take more than a year for this to turn into products you can buy off the shelf at Best Buy. OEMs are just getting samples now; they still need to design, test, and manufacture at scale. So expect 10nm to become commonly available around late August 2020, or perhaps the spring (April or May) of 2021.
 


That's the real question, though. Companies have promised performance gains many times, only for the result to be negligible or even worse. We will have to wait for both to see the true performance gains.

Normally die shrinks don't increase performance in and of themselves; normally it's the clock speeds that can go up.

Guess we will see how the next year unfolds. I am more interested to see whether Intel continues on the same path or pushes a new uArch out, since Core seems to be hitting a pretty thick performance wall. It's been a good run, but maybe it's time to move past it.
 

InvalidError

Titan
Moderator

And what makes those higher clock speeds possible? Shorter traces that reduce wire propagation delays, smaller transistors with smaller gate charge for faster switching, lower dynamic power draw from reduced parasitic capacitance and switching losses, etc. All affected quite significantly by process shrinks.

The main reason we aren't seeing substantial frequency bumps with die shrinks anymore is that CPU designers are choosing to use the faster transistors to cram more logic between synchronous latches instead of ratcheting clock frequencies up at any cost. Doing more work per clock cycle is more power-efficient; clock frequencies get whatever bump the leftover timing margins can afford. No generational transition shows this better than Prescott to Core: Core was made on a more mature 65nm process than Prescott, clocked ~800MHz lower, and still destroyed Prescott on performance while using significantly less power.

More work per clock is where everyone's focus is at because bumping clock frequencies at any cost has been a monumental failure for everyone who's tried it.
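As a toy illustration of that trade-off, here's a minimal Python sketch using the classic dynamic-power approximation P ~ C * V^2 * f; the capacitance and voltage deltas are made-up normalized assumptions, not measurements of any real chip:

```python
# Toy comparison of two ways to get ~20% more per-core throughput,
# using the classic dynamic-power approximation P ~ C * V^2 * f.
# All numbers are normalized, illustrative assumptions.

def dynamic_power(c, v, f):
    return c * v**2 * f

base = dynamic_power(c=1.00, v=1.00, f=1.00)

# Option A: chase clocks. +20% frequency usually needs extra voltage
# (assume +10% here) to keep timing margins.
clocks = dynamic_power(c=1.00, v=1.10, f=1.20)

# Option B: spend the transistors on wider/deeper logic instead.
# Assume +15% switched capacitance buys +20% IPC at unchanged V and f.
ipc = dynamic_power(c=1.15, v=1.00, f=1.00)

print(f"Baseline power        : {base:.2f}")
print(f"+20% via clocks+volts : {clocks:.2f}  ({clocks/base:.0%} of baseline)")
print(f"+20% via IPC          : {ipc:.2f}  ({ipc/base:.0%} of baseline)")
```

Under those assumptions the clock-chasing route costs roughly 45% more power for the same throughput gain, which is the whole argument for more work per clock.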
 

mlee 2500

Honorable
That's an excellent point I hadn't considered, and you may absolutely be right.

Interestingly, even today, cores on the very mature 14nm process you mention are only ~30% faster than those found on 22nm silicon from 2012. Sure, there are a couple more of those cores squeezed onto the chip, but for most desktop users that doesn't translate into noticeable value.

I'd hoped Cannon Lake would finally represent the *truly* generational leap in per-core performance that would make upgrading even Ivy Bridge-era products worthwhile, but first-gen 10nm may still not be it.
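For what it's worth, ~30% spread over that span implies only a modest per-generation uplift; a quick sketch of the arithmetic (treating it as roughly four architecture steps, which is itself an approximation):

```python
# If per-core performance rose ~30% total across roughly four generations,
# the implied average per-generation gain is small. Illustrative arithmetic only.
total_gain = 1.30   # ~30% faster per core, 22nm (2012) -> mature 14nm
generations = 4     # Ivy Bridge -> Haswell -> Broadwell -> Skylake -> Kaby/Coffee (approx.)

per_gen = total_gain ** (1 / generations)
print(f"Implied average gain per generation: {per_gen - 1:.1%}")
# -> roughly 6-7% per generation
```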




 

bit_user

Polypheme
Ambassador

Isn't leakage now supposed to be getting worse with each new generation? Is it conceivable that smaller nodes could even lose ground on power efficiency?
 

InvalidError

Titan
Moderator

Conventional leakage has been a concern for a long time already and much of it gets offset by lower voltages.

What is new is quantum physics becoming a concern. For example, enough insulation to limit conventional electrical leakage (from imperfect insulation) isn't good enough anymore when the probability of electrons "teleporting" straight through the insulation (quantum tunneling) rises as distances shrink. Chip makers will either need to find materials that are less susceptible to tunneling, or a way to exploit tunneling and the other quantum effects that are undesirable in conventional circuit design.

While leakage and tunneling may be similar in that they both increase static power by letting some current pass without doing any useful work, I suspect quantum tunneling is going to be much more difficult to solve, if it's possible at all.
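To show just how sharply tunneling scales with thickness, here's a minimal Python sketch using the textbook WKB-style estimate T ~ exp(-2*kappa*d); the ~3eV, SiO2-like barrier height is an assumption, and real gate stacks are far more complicated than this:

```python
import math

# Rough WKB-style estimate of how electron tunneling probability scales
# with insulator thickness: T ~ exp(-2 * kappa * d),
# kappa = sqrt(2 * m * phi) / hbar.  Assumed SiO2-like barrier of ~3 eV;
# purely illustrative, not a device model.

M_E  = 9.109e-31      # electron mass, kg
HBAR = 1.055e-34      # reduced Planck constant, J*s
EV   = 1.602e-19      # joules per electron-volt
PHI  = 3.0 * EV       # assumed barrier height, J

kappa = math.sqrt(2 * M_E * PHI) / HBAR   # ~8.9e9 per metre

for d_nm in (3.0, 2.0, 1.0, 0.5):
    d = d_nm * 1e-9
    t = math.exp(-2 * kappa * d)
    print(f"{d_nm:>4.1f} nm barrier: relative transmission ~ {t:.1e}")
```

Shaving a nanometre off the barrier buys many orders of magnitude more transmission, which is why thinner insulation can't simply be compensated for with slightly lower voltages the way conventional leakage has been.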
 

bit_user

Polypheme
Ambassador

I read some knowledgeable-sounding speculation that chips would soon reach a point where it'd get so bad that much of the chip would have to sit idle (powered off?). While that might not be too bad for CPUs, which are often IPC-limited, it could be a bigger issue for chips like GPUs, which are architected to minimize blocking.
 

InvalidError

Titan
Moderator

If you reach a point where you have to power down most of the chip to keep leakage and tunneling power manageable, then you may as well either pare down the chip or back off on the die shrinking: there is no point in making bigger chips on a smaller process if you can't actually use most of it.

As for GPUs being "designed to minimize blocking", that is a silly statement: GPUs have EMBRACED the fact that their individual threads will inevitably be starved for data a significant portion of the time. Unlike desktop CPUs, which devote a large amount of resources to extracting all possible ILP out of one or two instruction streams per core because typical desktop code has limited threadability, GPUs leverage the fact that they are intended for embarrassingly parallel tasks: they cover individual thread stalls by increasing the thread count per core, so shader units almost always have other threads with eligible instructions to work on. If typical desktop software were heavily threaded by nature, it would be much simpler and more power-efficient for AMD and Intel to do 8-way SMT (ex.: 4C32T) than deep out-of-order speculative execution to keep most of each core's execution ports busy, as is done in some HPC and server CPUs (ex.: Knights Landing, UltraSPARC Tx and POWER7).

Software is the main thing CPUs are 'blocking' on. A huge chunk of everyday desktop stuff simply doesn't multi-thread well.
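Here's a toy Python model of that latency-hiding bet (the stall probability is an assumption, nothing vendor-specific): if each resident thread spends most of its time waiting on memory, the odds that the scheduler still finds something runnable climb quickly with the number of resident threads.

```python
# Toy latency-hiding model: each resident thread is independently stalled
# (waiting on memory) with probability p on a given cycle.  The execution
# unit does useful work whenever at least one thread is runnable.
# Assumed numbers, purely illustrative.

def utilization(stall_prob, resident_threads):
    """Probability that at least one of N independent threads is runnable."""
    return 1.0 - stall_prob ** resident_threads

p = 0.7  # assume each thread waits on memory 70% of the time
for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:>2} resident threads: ~{utilization(p, n):.0%} utilization")
```

An 8-way SMT CPU core would be making essentially the same bet, and it only pays off when the workload actually supplies that many independent threads.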
 
Based on things like leakage and tunneling, and unless a discovery is made to mitigate or minimize those effects through materials or design, I'm beginning to think we're approaching the limit of die shrinks. I'm guessing it's somewhere around 5nm. Since that's just a guess based on current tech, how low can they effectively go? While I know AMD has already moved on to its 7nm node and Intel will likely do the same at some point, where's the end? How much smaller can they get?
 

bit_user

Polypheme
Ambassador

You're just trying to find some way of disagreeing with me, again.

I stand by what I said, but I'll reword it for you: GPUs are designed to minimize stalling the hardware by queuing up loads of work to do.
 


5nm is what most industry experts say silicon will be limited to, and Intel and others have already been researching new ideas such as graphene, carbon nanotubes, and other materials for probably 10 years or more now, to eventually replace silicon and push beyond 5nm.
 


I figured as much. There's going to be a point where the law of diminishing returns sets in, be that with silicon, graphene, or whatever else as the medium. I don't personally think going below 5nm is going to keep producing the kind of results we've seen with each process shrink. The current limit may be 5nm based on the tech of today or the immediate future, but even with new materials I'm thinking 3nm is the relatively fixed end point. I suppose we'll see soon enough, likely within the next 10 years.
 


At one point I assumed what might actually happen was transistor stacking: that we would stop shrinking and move up. Intel also has the idea of mixing multiple nodes on a single die, such as the cores at say 7nm, the IMC at 14nm and the cache/IGP at 22nm/32nm, since it would save on cost and improve efficiency where some parts see diminishing returns from smaller nodes, but I have not seen much more on that; I assume it would take a lot of work.

I think we will hit a wall that won't be broken unless they find a way to implement graphene, which looks to be vastly more promising than even carbon nanotubes.
 

bit_user

Polypheme
Ambassador

Don't they already have on the order of 100 layers of material, per die? I know there's 3D wiring, to some extent, but are all the transistors still in the same plane?

Anyway, you can't stack very deep before heat dissipation becomes a problem (i.e. in terms of the W per mm^2 you're trying to dissipate). Another issue would seem to be the amount of fabrication time per wafer. Also, I wonder whether nonuniformities in lower layers would compound to perturb higher layers, increasing the defect rate.
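Rough numbers on the heat problem (a minimal sketch; the per-layer density and the coolable limit are assumed, illustrative figures, not datasheet values):

```python
# Back-of-the-envelope power density for stacked logic dies.
# Assumes every layer dissipates like today's top layer and all heat
# must leave through one face of the stack.  Illustrative only.

layer_density = 0.5    # W/mm^2 per active logic layer (assumed)
coolable_limit = 1.0   # W/mm^2 a conventional cooler can handle (assumed)

for layers in (1, 2, 4, 8):
    density = layers * layer_density
    verdict = "ok" if density <= coolable_limit else "exceeds cooling budget"
    print(f"{layers} layer(s): {density:.1f} W/mm^2 -> {verdict}")
```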
 


I am sure heat is the biggest issue, and I am sure plenty of companies are still "looking" into it, as there may be some level of viability to it; possibly two layers of cores, etc.

I think the only viability is really in cache or memory stacking.
 

mlee 2500

Honorable


Of course none of these dimensional approaches actually solve the limitations of the underlying substrate... they just delay the inevitable. I too wonder when we will finally see a different material used (whatever happened to Gallium Arsenide?), but I imagine it's fabulously expensive to retool a fab at that level, assuming you even had a viable alternative that doesn't involve exceedingly rare, expensive, or poisonous elements.
 

bit_user

Polypheme
Ambassador

I imagine chips built from precisely-arranged atomic structures. To build such a chip, the first thing to do is to crank out the set of nano-machinery that will assemble it. Each chip will need an army of such nanobots.
 


Gallium is rare, and arsenide, well, it's poisonous.
 

mlee 2500

Honorable


Yeah, a quick Google shows that it's also expensive, on the order of thousands of dollars more per wafer than sand. Looks like that makes it the purview of niche or military applications.
 

InvalidError

Titan
Moderator

GaAs has been used in many consumer goods for things like microwave RF amplifiers. We're talking tiny chips with die areas in the single digit sqmm here, not the 100+sqmm of a typical consumer CPU.

Another reason why GaAs isn't particularly popular in consumer electronics is that GaAs tends to have higher leakage current than silicon, which would be bad for power efficiency and cooling.

Also, where mass-manufacturing is concerned, silicon wafers are produced in volume at up to 300mm diameter (450mm has been demonstrated but never reached production), while GaAs only goes up to something like 200mm. You'd end up with much higher wafer-handling overhead and more wafer-boundary losses.

At any rate, whatever might come after silicon will still be butting heads against quantum mechanics at sub-10nm and I suspect that's what ultimately will decide whether silicon will ever get replaced by anything else as the default substrate of choice for bulk logic circuitry. If new materials can't mitigate undesirable quantum effects better and/or cheaper than what can be achieved on silicon, then silicon won't be going anywhere.
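To put the wafer-size point in numbers, here's a quick Python sketch using the common dies-per-wafer approximation; the 100 sqmm die size is an arbitrary, CPU-ish assumption:

```python
import math

# Common dies-per-wafer approximation:
#   DPW ~ pi * (d/2)^2 / S  -  pi * d / sqrt(2 * S)
# The second term estimates partial dies lost around the wafer edge.
# 100 mm^2 die size is an arbitrary, CPU-ish assumption.

def dies_per_wafer(diameter_mm, die_area_mm2):
    area_term = math.pi * (diameter_mm / 2) ** 2 / die_area_mm2
    edge_term = math.pi * diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(area_term - edge_term)

die = 100.0  # mm^2
for dia in (200, 300):
    print(f"{dia} mm wafer: ~{dies_per_wafer(dia, die)} candidate dies of {die:.0f} mm^2")
```

The 300mm wafer yields a bit more than the 2.25x you'd expect from area alone, because proportionally less of it is lost to partial dies around the edge.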
 