News Intel Announces Delay to 7nm Processors, Now One Year Behind Expectations

Decisions like these are years in the making. This process certainly started well before Intel knew 7nm was having problems, so this announcement has nothing to do with Apple leaving x86. Apple customers don't care what CPU is in their system, as demonstrated by how many times Apple has changed architectures over the years. Apple's move to ARM is all about them having total control over their platform and better margins.

I never said this was the only reason Apple left Intel.
Sure, what you said is also true: there's no better time to run away to ARM and take full control of the ecosystem, again. Which may also prevent, to some degree, the Hackintosh issue.
But anyone can see a company like Apple would not stay with someone like Intel that has been stuck on the same node. Not all Apple consumers care about the CPU inside, but I bet some of them do.
And Apple does not like that there's a new boy in town that's the main competition and better than what it has inside.
AMD with its 16-core / 32-thread desktop CPU (R9 3950X) has really made a mess of the whole Intel lineup. Some people fail to see that, but for me that was the biggest launch of all. A desktop CPU that has managed to kill many, if not all, of the HEDT CPUs Intel has to offer in lots of workloads. That alone will scare away some clients (like Apple); add the fact that Intel still has nothing to answer with, and for me the case is closed. Well, that and the new Threadrippers.
So yeah, not a bad time to run away.
 
You must be living in an alternative reality from the rest of us. How is 10nm pretty well sorted out when we were supposed to see 10nm desktop CPUs in 2016, and midway through 2020, Intel has just announced we may (or may not) see the first chips in another year? It is impossible to give Intel even the remotest benefit of the doubt until they are selling competitive non-mobile 10nm CPUs. Let's not ignore that even in the mobile space, Intel's 10nm CPUs are not their flagship, top-performing models. Those are still 14nm chips.
Who promised you that? Problems they had have been sorted out - Alder Lake has NOT had a release date other than "2021" - so it's on track, less than a year after Rocket Lake. See that record profit? Plenty competitive - process isn't a panacea.

Tiger Lake will launch shortly - and the Lenovo leaks on their new Renoir and almost identical Tiger Lake laptops show that the 4800U, with 8 cores and 16 threads, holds a 6 (six) percent advantage over Tiger Lake's 4 cores and 8 threads, while Tiger Lake holds a 35% advantage in single core - that's a significant IPC increase over Ice Lake. Rocket Lake S is Tiger Lake / Willow Cove with a modified cache structure.

AMD is dominant with its 2020 APU going against an almost year-old Ice Lake from 2019. 2020 vs 2020 will not be pretty for AMD - maybe AMD should have prioritized more of that ancient Vega instead of the gimmicky 8 cores - more iGPU (useful) and fewer cores (useless).
 
I never said this was the only reason Apple left Intel.
Sure, what you said is also true: there's no better time to run away to ARM and take full control of the ecosystem, again. Which may also prevent, to some degree, the Hackintosh issue.
But anyone can see a company like Apple would not stay with someone like Intel that has been stuck on the same node. Not all Apple consumers care about the CPU inside, but I bet some of them do.
And Apple does not like that there's a new boy in town that's the main competition and better than what it has inside.
AMD with its 16-core / 32-thread desktop CPU (R9 3950X) has really made a mess of the whole Intel lineup. Some people fail to see that, but for me that was the biggest launch of all. A desktop CPU that has managed to kill many, if not all, of the HEDT CPUs Intel has to offer in lots of workloads. That alone will scare away some clients (like Apple); add the fact that Intel still has nothing to answer with, and for me the case is closed. Well, that and the new Threadrippers.
So yeah, not a bad time to run away.
Apple has had plans to become more vertically integrated for years - funny how all the support info for AMD GPUs has been removed - Apple is going to run away from AMD as well.

This is all about Apple wanting to become vertically integrated - just like Tesla does with the Gigafactories, run by Panasonic 100% for Tesla. Controlling the supply chain is not about "running away" from one vendor or another, it's about reducing the number of vendors, period.

And if they are killing Intel HEDT, then they must have really been killing AMD.

So AMD is planning on selling more than 50 of those Threadrippers? They haven't really caught on yet - and more of the same that no one wants doesn't seem like a sound strategy.
 
I used to work for IBM. I was brought into Intel to solve a programming problem their staff should have solved. At the time, Intel employees would not work on anything that wasn't in their "performance plan". I commented, "How the hell can you project what problems you'll have a year in advance?"

Does the CEO know how their performance plans work?
 
10nm seems pretty well sorted out - not sure what you read that indicates otherwise.
Simple: an extremely limited number of products is shipping from Intel's 10nm fabs, mostly (extremely) high-margin parts and parts Intel absolutely needs to save face. Translation: 10nm yields still aren't good enough to ship cheaper parts in high volume.

Intel's major strength until ~10 years ago was its "copy exactly" fab strategy. If 10nm were really working well and 7nm faceplanted, Intel could likely convert a couple of 14nm fabs to 10nm within months to improve its market standing and make up some ground on volume.

Also, Intel is still struggling to catch up with demand for 14nm parts. If 10nm were at least on par with 14nm for yields, Intel would be upgrading fabs to 10nm and shifting clients to 10nm devices in hopes of improved yields in the near future to help with catching up, instead of contemplating yet another increase in 14nm fab capacity.

10nm may be doing better than it did before, but still not well enough to replace 14nm for high-volume, low-cost parts.
 
Decisions like these are years in the making. This process certainly started well before Intel knew 7nm was having problems, so this announcement has nothing to do with Apple leaving x86. Apple customers don't care what CPU is in their system, as demonstrated by how many times Apple has changed architectures over the years. Apple's move to ARM is all about them having total control over their platform and better margins.
True. It probably gives them quite a bit more control over customizing chips specifically for their devices, which IMO is a HUGE advantage - imagine how happy Dell would be if they could custom design each CPU for each one of their laptops. And of course, it locks people into the ecosystem more. Yay (/s)
 
My god, what is with this deicidium? Going by this (https://www.pcgamer.com/intel-cpu-roadmap-all-the-lakes-from-14nm-to-7nm/), Intel 10nm was supposed to launch in 2016, via Cannon Lake:

Ah yes, the ephemeral Cannon Lake, Intel's first 10nm processor design. There's so much to say about this one, so bear with me. Originally intended to launch in 2016, first demonstrated in 2017, and first shipped in very limited quantities in May 2018, Cannon Lake had more than a few issues. Intel's Cannon Lake page (which is linked from the Core i3-8121U, the only Cannon Lake CPU as far as we're aware) doesn't even exist. But the CPU did in fact ship, and don't you dare say otherwise! (That puts CNL one step ahead of Tejas, the last iteration of NetBurst, which taped out and then never saw the light of day.)

How bad was Intel's first stab at 10nm? The company has downplayed problems, but let's look at the facts. Intel released a 2-core/4-thread 'mobile' design, with the GPU portion of the chip disabled. Starting with a smaller chip is common for new process nodes, but disabling the integrated GPU in a mobile product speaks volumes. It was likely necessary to improve the number of functional chips Intel could get, which suggests incredibly poor yields. And even then, performance and power did not look good.

Cannon Lake does include AVX512 instruction support, which can help in a few specific instances, but everything else is basically bad. Power, memory latency, and other elements were worse than with existing 14nm mobile designs. In retrospect, the difficulties caused by all the enhancements originally stuffed into Intel's 10nm process far outweighed the potential benefits. Cannon Lake was also supposed to debut Gen10 Intel Graphics, but since the GPU was disabled Gen10 effectively turned into vaporware.

This (https://www.tweaktown.com/news/4511...ls-10nm-cannonlake-skylake-q3-2016/index.html) also seems to reiterate that.

The fact that Intel STILL can't ship 10nm CPUs in reasonable volumes, and has announced NO desktop 10nm till mid NEXT year, should tell you that 10nm has NOT been sorted out completely. Why does he still insist otherwise?
 
My god, what is with this deicidium?

The fact that Intel STILL can't ship 10nm CPUs in reasonable volumes, and has announced NO desktop 10nm till mid NEXT year, should tell you that 10nm has NOT been sorted out completely. Why does he still insist otherwise?
He is a troll. He was banned a couple of times, but one of the moderators thinks he is funny and keeps bringing him back.
 
Let’s not forget that Intel has yet to meaningfully launch anything on 10nm
Ice Lake-based laptops have been shipping for nearly a year.

There is at least a 50-55% IPC increase on average (1.18 x 1.10 x 1.15 = 1.49x, 1.18 x 1.12 x 1.17 = 1.55x) between Skylake and Golden Cove. And in some applications like tile-based rendering (Blender, Cinema 4D) and compression (7-Zip), we are really looking at at least a 70-75% improvement (1.25 x 1.15 x 1.20 = 1.725x).
That seems a little hard to believe. So, how many instructions does that have them retiring per cycle?
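For what it's worth, the arithmetic in the quoted estimate is just multiplying per-generation IPC multipliers; a quick sketch in Python (the per-generation percentages are the poster's guesses, not measured figures):

```python
from math import prod

def compound_ipc(gains):
    # Multiply per-generation IPC multipliers to get the cumulative gain.
    return prod(gains)

# Skylake -> Golden Cove, using the multipliers quoted above
low = compound_ipc([1.18, 1.10, 1.15])        # ~1.49x
high = compound_ipc([1.18, 1.12, 1.17])       # ~1.55x
rendering = compound_ipc([1.25, 1.15, 1.20])  # ~1.73x
```

The multiplication checks out, but the question above stands: compounded claims are only as good as each per-generation estimate.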
 
People need to realize that Intel was never more than a mediocre chip designer.
Source?

For all these years, the only reason Intel was competitive was because they led in the manufacturing process. This compensated for their shortcomings in CPU design.
Except, for the past couple years, they've lost the manufacturing lead and yet their cores still manage to remain competitive. You don't accomplish that by being "never more than a mediocre chip designer".

It certainly seems clear why Apple jumped ship.
Apple probably jumped ship for a few reasons. First, ARM is inherently more efficient, giving it a big advantage in phones, tablets, and laptops. In laptops, more efficiency means longer battery life and less thermal throttling.

Second might be cost, but I don't know if Apple's volumes are high enough to amortize all their engineering costs.

Third, Apple reached performance-parity with Skylake, so there's no real downside. The upside of moving to ARM is that they can eventually have one ISA for all of their devices.
 
Third, Apple reached performance-parity with Skylake, so there's no real downside. The upside of moving to ARM is that they can eventually have one ISA for all of their devices.

Getting their AI accelerator into the Mac was probably one of the key reasons. People expect Apple technologies to be revolutionary. Slipping extra CPU cores into the box doesn't generate enough hype.
 
Except, for the past couple years, they've lost the manufacturing lead and yet their cores still manage to remain competitive.
Mainly because it had such a huge performance lead over AMD and has been sitting on it since Skylake. Without Ryzen to kick Intel's lazy behind into gear, we may still be hearing that Ice Lake and 10nm aren't quite ready yet instead of Intel scrambling to get its 7nm sorted out.
 
I am not sure what aspect you mean by "on par". From a density standpoint, 10nm was supposed to be better than TSMC's 7nm. But from a yield standpoint, I feel it is still quite far behind. While yield has improved since the first-generation 10nm fab, it still does not look good, which is why I feel Intel is keen to quickly move on to 7nm.
The yield difference comes from a monolithic design versus a chiplet design. When there are 10 defects on a certain amount of wafer surface, a monolithic chip covering that surface may be trashed entirely. When you make 10 chiplets on that same surface with the same 10 defects, each defect can only kill the chiplet it lands on, so most of the chiplets survive, and partially defective ones may still be sold cheaper with parts disabled.
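The effect described here can be sketched with the classic Poisson yield model (the defect density and die areas below are made-up numbers for illustration, not Intel's or AMD's actual figures):

```python
import math

def die_yield(defect_density, die_area):
    # Poisson yield model: probability that a die of this area has zero defects
    return math.exp(-defect_density * die_area)

defects_per_cm2 = 0.5  # hypothetical defect density
big_die = 6.0          # hypothetical monolithic die area, cm^2

y_monolithic = die_yield(defects_per_cm2, big_die)            # whole big die must be clean
y_chiplet = die_yield(defects_per_cm2, big_die / 10)          # one of 10 small chiplets
```

With these made-up numbers the monolithic die yields about 5% while each small chiplet yields about 74%, which is the whole argument: the same defect count scraps far less silicon when the dies are small.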
 
Intel did not sleep for 10 years. They kept their tech close enough to AMD's offerings to not completely wipe them out, because of monopoly laws (just look at what happened to AT&T), and guess what: that's exactly what they keep doing right now. Intel has doubled their net income in the last two years, a thing they could have done 10 years ago if they had started adding cores back then, which would have wiped out AMD completely.
When the owners (shareholders are the owners) are so unhappy that they are willing to sell at 12 to 17% below the buying price, something must be seriously wrong, despite the turnover last quarter.

Share price is based on future expectations, and they are apparently not very good.
 
First, ARM is inherently more efficient, giving it a big advantage in phones, tablets, and laptops. In laptops, more efficiency means longer battery life and less thermal throttling.
The only thing we have that would be an apples-to-apples comparison is server hardware. In those comparisons, the ARM chips have the same TDP but lower clocks, and single-threaded performance of about 1/2 to 2/3 of the Epyc or Xeon equivalents. They are able to pull close in massively threaded applications, where SMT4 is able to help. I don't think that means ARM is more efficient at the higher power levels.
 
It's hard to trust a company so obviously in neurotic denial.

Ironically, maybe it's a bright spot for them. You can't fix a problem you don't own.
These are good points. For independent foundries, the need to have more transparency with customers and make sure they can deliver on commitments might foster better internal communication and accountability.
 
Intel's kludgy 28-core "me too" top CPU can't compete with AMD's cheap-to-make and better 64-core CPUs at 10nm / 7nm either.
You mean their 56-core solution, with 2x 28-core dies stuffed in a single package?

...because I'm sure the 28-core design pre-dated anything they knew about Zen or AMD's multi-core strategy.

The prior generation had 22 cores, and really just limited by its interconnect. 28-cores was on the trajectory, and enabled by their mesh interconnect (a very common and scalable solution - not a kludge).

They have committed a catastrophic blunder in creating this opportunity .... taking their customers for granted, and ~hibernating for a decade.
Their server chips followed a reasonably aggressive curve until the past couple of years, when they stalled due to process delays. I'm sure they didn't plan for 28 cores to still be their biggest server die at this point in time.
 
No wonder Apple left the boat... bet they knew this at least one month ago, when they decided to kick Intel out of their house.
Apple has been planning that for several years, and the timing was certainly just a coincidence. It's possible their plans were accelerated by some of Intel's earlier 10 nm schedule-slips.

The first thing they needed to do was have a chip that was competitive, on the performance front. Then, they needed to transition their entire OS, APIs, internal apps, and software ecosystem, as well as ensure that emulation is a solid option for any apps that don't have native versions.

That's a completely massive amount of work. Microsoft has been working on ARM support for more than a decade, and they're still not at that level. Nobody else has done anything even close. You could talk about Android, but that was built to be CPU-agnostic from the ground-up.
 
Intel is the underdog now.
They're still at least 10x the size of AMD, still lead in single-thread performance, and still play in more markets than AMD. So, you're counting them out too quickly.

They are still nowhere near in as bad shape as they were in the Pentium 4 / Prescott era. That thing was truly non-competitive against AMD's Opteron, whereas Intel is currently competitive in most market segments (except server/workstation).
 
Turkish: What's happening with them sausages, Charlie?
Sausage Charlie: Five minutes, Turkish.
Turkish: [Stares at Charlie in disbelief] Hang on, it was two minutes, five minutes ago!

On a more serious note, they could just move to Switzerland, the flag would be a big plus..
Thanks for that! We need more humor, around this place.
 
Apple left x86 design altogether. It has nothing to do with Intel's problems.
Sure....
It probably isn't entirely unrelated to Intel's problems, but Apple probably figured that since they're already building their own chips, they might as well eliminate their external dependence on such a pivotal piece of their products.

And at the time they made the decision to go in this direction, Zen1 hadn't hit the streets. So, they couldn't necessarily count on AMD as a viable alternative. That meant they were basically locked into Intel, and therefore had no real pricing leverage.