News: Intel Says Process Tech to Lag Competitors Until Late 2021, Will Regain Lead with 5nm

JayNor

Prominent
May 31, 2019
Intel apparently has not reached process parity with TSMC on yields. The AMD chiplet architecture probably also contributes to lower cost.

AMD apparently has not reached process parity with Intel 14nm in terms of clock rates, as you can see from the 5.3GHz boost rates.

On the cost ... Swan has repeated that their 14nm chips are very profitable since much of the 14nm equipment is fully depreciated. Intel continues to report higher margins, so ...
 

joeblowsmynose

Distinguished
Jun 14, 2011
...
AMD apparently has not reached process parity with Intel 14nm in terms of clock rates, as you can see from the 5.3GHz boost rates.
...
Let's wait and see how these 5.3 leaks / rumours perform ... I'm thinking the 5.3 is their new level-three boost only, which will work about as well as Ryzen's 4.7 boost ... you get a 0.5-second boost, because it would be thermal-throttle city if it lasted much longer.

Keep in mind Intel would never have hit 5.0+ on 14nm if 1) 10nm had worked initially as intended, and 2) AMD hadn't suddenly become an aggressive competitor. Had 10nm worked out 3-4 years ago, the increase in IPC would have kept Intel well ahead at the first Ryzen launch, even if clocks had stayed around the 4GHz mark (as Intel's 14nm was at the time) ... then Intel would have moved on to 10nm. But that didn't happen ... 10nm didn't work, and Intel needed to do whatever it could to stave off Zen and Zen+, so it focused on refining, and refining, and refining the same architecture ... this wasn't a plan, this isn't "normal" - it was the result of being forced into it by a chaotic situation. I think everyone would have laughed (including Intel) if in 2016 someone had said that Intel's desktop CPUs would be pulling up to 300W at the processor (rumour alert) ... the company that was making some incredibly efficient CPUs at the time. But here we are ... it certainly wasn't what they planned, but it had the nice side effect of keeping a few gamers happy.

So I see the 5.0GHz+ frequencies they are hitting as an exception ... not any rule. 10nm will never hit that clock, Zen 3 will never hit that clock, and I doubt any smaller node from this point on ever will either - from either camp.


At the end of the day, though, 7nm, 10nm, 14nm, 5.3GHz are all just labels ... the only things that really count are performance for a given task, performance per dollar, and the amount of extra money you need to spend on cooling to get that performance (which feeds into performance per dollar) - that's what drives sales for the most part. I mean, big numbers on a box do as well, but who wants to admit that they make their purchasing decisions based on marketing tactics as opposed to actual product value based on performance for a given need?
 
Intel disclosed that it won't be able to match competing process tech until 2021, and process leadership won't come until the 5nm node.

Intel Says Process Tech to Lag Competitors Until Late 2021, Will Regain Lead with 5nm : Read more
Given their rapid, on-time, perfect 10nm desktop CPU deployment, I personally and officially believe everything Intel says, without reservation!!! :/ (I mean, who would EVER doubt them going to 7nm in a year, and 5nm a few months later?)
 

truerock

Distinguished
Jul 28, 2006
To me, where various foundries are on their production cycle is not particularly important from a strategic, long-term standpoint.

More important is where are the manufacturers of photo-lithography equipment in their current development cycle.

I think that is why Intel may be the better long-term bet.
 

JayNor

Prominent
May 31, 2019
I mean, big numbers on a box do as well, but who wants to admit that they make their purchasing decisions based on marketing tactics as opposed to actual product value based on performance for a given need?
I didn't see AMD make much of a dent in server sales last quarter, so apparently the purchasing decisions were made on something other than what AMD was offering. Maybe AVX-512, Optane support, DL Boost and higher boost clocks were the difference. Or maybe Intel's large 14nm capacity just lets them supply the chip orders that TSMC had no room for.
 

mcgge1360

Reputable
Oct 3, 2017
At the end of the day though 7nm, 10nm, 14nm, 5.3GHz are all just labels ... the only thing that is really going to count is performance for a given task, performance per dollar, and the amount of extra money you need to spend on cooling to get that performance
Intel's 14nm actually performs better than 10nm because it's so well optimized. The only thing 10nm has on it is lower power draw, which is why 10nm is in laptops while desktops are still using 14nm. We can forget about nm = performance because, like you said, they're all just labels; they don't really mean anything to the consumer. If you want to know how good a CPU is, look at REAL-WORLD benchmarks. If you play games, look at game benchmarks. If you look at memes all day, look up web benchmarks. No point finding out how well it renders 3D models if that's not something you do (cough cough AMD using Cinebench to show that they're "better at everything")
 
(cough cough AMD using cinebench showing that they're "better at everything")
And actually only Cinebench R15, because that's what AMD designed Ryzen towards; in R20 the performance drop is already noticeable.
Seems all Intel can do is manage expectations. They will not catch up anytime soon and with their stupid architecture of throwing everything on the die they’ll never catch up
Well, Intel's stupid architecture allows them to use less wafer space for their whole CPU, plus the giant (in dimensions at least) iGPU, than AMD needs just for the cores plus I/O.
Complete 9900K = ~178 mm²
Ryzen 3rd Gen eight-core Zen 2 chiplet: 10.53 x 7.67 mm = 80.80 mm²; I/O die: 13.16 x 9.32 mm = 122.63 mm²; together more than 200 mm².
https://www.chiphell.com/thread-1919758-1-1.html
https://www.anandtech.com/show/13829/amd-ryzen-3rd-generation-zen-2-pcie-4-eight-core
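For anyone double-checking that arithmetic, the per-die areas follow straight from the dimensions quoted above. A quick sketch (the ~178 mm² figure for the 9900K is the estimate from the links above, and the 80.80 / 122.63 figures in the thread presumably come from slightly different rounding):

```python
# Die areas computed from the dimensions quoted above (mm -> mm^2).
zen2_chiplet = 10.53 * 7.67   # 8-core Zen 2 CCD  -> ~80.77 mm^2
io_die = 13.16 * 9.32         # client I/O die    -> ~122.65 mm^2
total_amd = zen2_chiplet + io_die

intel_9900k = 178.0           # monolithic die estimate, cores + iGPU

print(f"Zen 2 CCD: {zen2_chiplet:.2f} mm^2")
print(f"I/O die:   {io_die:.2f} mm^2")
print(f"AMD total: {total_amd:.2f} mm^2 vs ~{intel_9900k:.0f} mm^2 for the 9900K")
```

Worth noting the I/O die is on an older, cheaper node, so raw area alone doesn't settle the cost argument either way.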
 

joeblowsmynose

Distinguished
Jun 14, 2011
Intel's 14nm actually performs better than the 10nm because it's so well optimized.
Or ... because 10nm is broken and will never work as they hoped ... so critical refinement of 14nm was the only option they had.

"Look, this isn't just going to be the best node that Intel has ever had. It's going to be less productive than 14nm, less productive than 22nm, but we're excited about the improvements that we're seeing and we expect to start the 7nm period with a much better profile of performance over that starting at the end of 2021." - Intel

No point finding out how well it renders 3D models if that's not something you do (cough cough AMD using cinebench showing that they're "better at everything")
Actually, any full-load benchmark that saturates all cores and threads gives a great indication of maximum performance for multi-tasking and a plethora of other workloads. You might be surprised how many people actually use Blender just for hobby / amateur / educational purposes, or have to edit video and encode it for their YouTube channel. Cinebench scores scale almost linearly with all these workloads. You know, enthusiast-type workloads ...
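That near-linear scaling claim is really just Amdahl's law with a tiny serial fraction. A minimal sketch (the 99% and 60% parallel fractions here are made-up illustrative numbers, not measurements of any real workload):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup when `parallel_fraction` of the work can be
    spread evenly across `cores` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A tile-based render like Cinebench is almost fully parallel, so it
# scales close to linearly with core count; a workload with far less
# parallel work stops benefiting after a few cores.
print(round(amdahl_speedup(0.99, 16), 1))  # 13.9 -> near-linear on 16 cores
print(round(amdahl_speedup(0.60, 16), 1))  # 2.3  -> barely helped
```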

It was only a few short years ago that Intel was promoting Cinebench as one of its "prize" scoring benchmarks ... the only thing that changed was Intel's marketing ... and the parroting that followed.

If you play games, look at game benchmarks.
Sure, Techspot has a great article showcasing the 3950X vs the 9900KS with equally tuned RAM on both chips ... I think the difference was 4% IF you bottlenecked the CPU (2080 Ti, etc.) ... and it would be 0% if any GPU less than a 2080 Ti was used. https://www.techspot.com/review/1955-ryzen-3950x-vs-core-i9-9900ks-gaming/ Here it is in case you haven't seen it ...

In fully saturated all-core workloads, though, the Ryzen comes out up to 200% faster. So: equal gaming performance if you don't own a 2080 Ti, and an unnoticeable difference if you do.

I'm willing to trade a 200% all-core performance improvement for a 4% deficit in a game even if I did own a 2080 Ti ... but maybe that's just me.

I think the "you need Intel for gaming!" meme is dying off ... very few people own a 2080 Ti, and the benchmarks only use that card to induce an artificial situation that almost no one games in, because that's the only way any difference between the CPUs can be seen ... any lesser card will net pretty much equal performance, which is why the benchmarks never use anything less. People are starting to figure this out.
 

joeblowsmynose

Distinguished
Jun 14, 2011
... Or maybe Intel's large 14nm capacity just lets them supply the chip orders that TSMC had no room for.
What? ...

https://www.pcworld.com/article/3448396/intels-unexpected-prolonged-processor-shortage-dampens-its-record-quarter.html

https://www.tomshardware.com/news/14nm-processor-intel-shortage-9000-series,37746.html

AMD lumped server and SoC (read: consoles) numbers together; consoles are down extremely low, so their server market didn't actually do horribly. People who run server farms don't magically jump to a new ecosystem the minute someone makes a better processor.

And you were rather grasping at a response to what I said ... not really on topic.
 
I'm just wondering if someone in Intel is screaming: "Yeah, AMD did 5.0GHz with the FX-9590, and broke past the 200W TDP mark with it back in 2013. We're NOT supposed to be trying to outdo them on power draw!!!"

In my head, I imagine the voice having a tinge of panicked hysteria.
 

joeblowsmynose

Distinguished
Jun 14, 2011
I'm just wondering if someone in Intel is screaming: "Yeah, AMD did 5.0GHz with the FX-9590, and broke past the 200W TDP mark with it back in 2013. We're NOT supposed to be trying to outdo them on power draw!!!"

In my head, I imagine the voice having a tinge of panicked hysteria.
They're a bit stuck between a rock and a hard place right now, as AMD was back then. Higher clocks are the only place to go, and you can't have high clocks without high power draw at the same time.
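The clocks-vs-power trade-off follows from the usual CMOS dynamic-power approximation, P ≈ C·V²·f: pushing frequency also forces voltage up, so power grows much faster than the clock does. A toy illustration (the voltage and frequency numbers are invented for shape, not measurements of any actual chip):

```python
def dynamic_power(cap, volts, freq):
    # Classic CMOS dynamic-power approximation: P ~ C * V^2 * f
    return cap * volts ** 2 * freq

# Hypothetical chip pushed from 4.0 GHz @ 1.00 V to 5.0 GHz @ 1.30 V
# (capacitance held fixed at an arbitrary unit value):
base = dynamic_power(1.0, 1.00, 4.0)
boosted = dynamic_power(1.0, 1.30, 5.0)
print(f"{boosted / base:.2f}x the power for 1.25x the clock")  # 2.11x
```

A 25% clock bump costing roughly double the power is why the last few hundred MHz are so expensive to cool.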

Intel is still competitive in performance at least, but yeah, it's costing them to stay there.
 

BadBoyGreek

Honorable
May 15, 2014
LMAO...Intel will not catch up anytime soon. AMD will be 2nd generation 5nm by 2021, will perform as well if not better, AND still cost less vs. Intel 🆒
My thoughts exactly. Intel seems to think that AMD isn't going to make any more advancements over the next couple of years, and we obviously know that won't be the case. By the time Intel gets to 5nm, AMD will likely already be well into that process node themselves. For them to suggest that 5nm immediately equals having the lead back is laughable.
 

joeblowsmynose

Distinguished
Jun 14, 2011
My thoughts exactly. Intel seems to think that AMD isn't going to make any more advancements over the next couple of years, and we obviously know that won't be the case. By the time Intel gets to 5nm, AMD will likely already be well into that process node themselves. For them to suggest that 5nm immediately equals having the lead back is laughable.
If Intel can get to "Intel's" 5nm by late 2021, it might be a slight lead, as that will presumably be equal in density to TSMC's 4nm, and AMD's roadmap indicates they will probably be on TSMC's 5nm in late 2021.

So, possible? Yes ... but probable? That will depend, I guess, on how much they learned about "what not to do" with 10nm and how fast they can move past it. Their recent track record dampens that probability, though.
 

joeblowsmynose

Distinguished
Jun 14, 2011
And actually only Cinebench R15, because that's what AMD designed Ryzen towards; in R20 the performance drop is already noticeable.
...
No, not in single-threaded ... R20 fairly consistently scores AMD (3950X vs 9900K) higher vs Intel in the single-thread test.
In R15 single, Intel almost always has a lead; in R20 it's roughly equal, or AMD has the lead, depending on which review you read. So AMD performs better on the R20 single-thread score.

In the multithreaded testing between the two, R20 doesn't score quite as well, but one can easily look at any other heavy-load benchmark and see that the performance difference between the two processors doesn't come down to which Cinebench version one benchmark ran on.

It's not like Cinebench is the only application that scores Ryzen well ... there are many others.

The idea that AMD designed an entire CPU architecture around the performance of a single benchmark tool is beyond ludicrous. Don't be silly.

 

Ninjawithagun

Distinguished
Aug 28, 2007
It's ironic that Intel blew it big time. They had the lead and the better architecture. They squandered it all away. There is no one to blame but themselves. They are the proverbial giant that went to sleep, only to wake up and find themselves a prisoner of their own actions.
 
Reactions: joeblowsmynose

Ninjawithagun

Distinguished
Aug 28, 2007
Let's wait and see how these 5.3 leaks / rumours perform ... I'm thinking the 5.3 is their new level-three boost only, which will work about as well as Ryzen's 4.7 boost ... you get a 0.5-second boost, because it would be thermal-throttle city if it lasted much longer.

Keep in mind Intel would never have hit 5.0+ on 14nm if 1) 10nm had worked initially as intended, and 2) AMD hadn't suddenly become an aggressive competitor. Had 10nm worked out 3-4 years ago, the increase in IPC would have kept Intel well ahead at the first Ryzen launch, even if clocks had stayed around the 4GHz mark (as Intel's 14nm was at the time) ... then Intel would have moved on to 10nm. But that didn't happen ... 10nm didn't work, and Intel needed to do whatever it could to stave off Zen and Zen+, so it focused on refining, and refining, and refining the same architecture ... this wasn't a plan, this isn't "normal" - it was the result of being forced into it by a chaotic situation. I think everyone would have laughed (including Intel) if in 2016 someone had said that Intel's desktop CPUs would be pulling up to 300W at the processor (rumour alert) ... the company that was making some incredibly efficient CPUs at the time. But here we are ... it certainly wasn't what they planned, but it had the nice side effect of keeping a few gamers happy.

So I see the 5.0GHz+ frequencies they are hitting as an exception ... not any rule. 10nm will never hit that clock, Zen 3 will never hit that clock, and I doubt any smaller node from this point on ever will either - from either camp.


At the end of the day, though, 7nm, 10nm, 14nm, 5.3GHz are all just labels ... the only things that really count are performance for a given task, performance per dollar, and the amount of extra money you need to spend on cooling to get that performance (which feeds into performance per dollar) - that's what drives sales for the most part. I mean, big numbers on a box do as well, but who wants to admit that they make their purchasing decisions based on marketing tactics as opposed to actual product value based on performance for a given need?
Another overlooked fact is that Intel was able to start hitting higher clocks on the 9000-series CPUs as a direct result of converting back to solder under their CPU heat spreaders. It was a huge mistake for them to ever have used TIM. They found out the hard way.
 

joeblowsmynose

Distinguished
Jun 14, 2011
And actually only Cinebench R15 because that's what AMD designed ryzen towards in R20 the performance drop is already noticable.

Well, Intel's stupid architecture allows them to use less wafer space for their whole CPU ...
Actually, I think I heard an Intel engineer touting that at one of their recent presentations. "Sure, AMD may have 64 cores, PCIe 4.0, all the supercomputer wins, way better price/performance ratios and a CEO who knows her CPUs ... but do they have less total wafer space per CPU?! Do they? I don't think so!! Can I get a fist bump!? ... Fist bump anyone? ... Anyone? Anyone? Bueller? ... Bueller? ...."
 

joeblowsmynose

Distinguished
Jun 14, 2011
Another overlooked fact is that Intel was able to start hitting higher clocks on the 9000 series CPUs as a direct result of them converting back to using metal soldering on their CPU heat spreaders. It was a huge mistake for them to have ever used TIM. They found out the hard way.
Yeah that certainly helped as well. In the long run, it doesn't pay to cheap out. But at least they had that headroom to uncover after Ryzen launched.
 

Deicidium369

BANNED
Mar 4, 2020
No, not in single-threaded ... R20 fairly consistently scores AMD (3950X vs 9900K) higher vs Intel in the single-thread test.
In R15 single, Intel almost always has a lead; in R20 it's roughly equal, or AMD has the lead, depending on which review you read. So AMD performs better on the R20 single-thread score.

In the multithreaded testing between the two, R20 doesn't score quite as well, but one can easily look at any other heavy-load benchmark and see that the performance difference between the two processors doesn't come down to which Cinebench version one benchmark ran on.

It's not like Cinebench is the only application that scores Ryzen well ... there are many others.

The idea that AMD designed an entire CPU architecture around the performance of a single benchmark tool is beyond ludicrous. Don't be silly.

Well, since I don't use my PC to run benchmarks, that is meaningless. Buy what you want, what you can afford, and enjoy it. It doesn't make you smarter, live longer, or more attractive to the opposite sex. It is a consumer electronic device that will be outdated in a very short time.

All those cores bump up against the biggest problem in computers today (and have since the very first multi-processor system came out): it's not easy to make code parallel. Simply spinning off another thread is meaningless if it's working on the same part of the process as the other threads; it's when you can deploy threads that work on different parts of the same problem that all these extra cores become useful. Business got around the issue of poor utilization with virtual machines, and programs like Cinebench are designed for those multiple threads and cores - however, games and most applications are not. I do ZERO video encoding, so AMD can have all the cores in the world and to me it's meaningless - I am not even making great use of my i9-9900KF.
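The point about threads only helping when each one works on a different part of the problem can be sketched like this (a toy example, not anyone's production code; note that CPython's GIL means real speedups for pure-Python math would need processes rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker gets its own disjoint slice of the data, so no two
    # workers ever touch the same part of the problem - that is what
    # makes extra threads (or cores) useful at all.
    return sum(x * x for x in chunk)

data = list(range(100_000))
chunks = [data[i::4] for i in range(4)]  # four disjoint interleaved slices

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(x * x for x in data))  # True: same answer, work split 4 ways
```

If the workers instead all appended to one shared list or incremented one shared counter, you would get contention rather than speedup, which is exactly the "same part of the process" problem described above.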
 
