AMD's Future Chips & SoC's: News, Info & Rumours.


palladin9479

Distinguished
Moderator
Jul 26, 2008
True, in the short to medium term Intel has no shortage of orders, and they also have a huge cash reserve; they are in no danger from a business point of view. That said, there have been some quite big shifts recently. AMD getting their act together is one thing, but with other large firms now fielding ARM processor designs of equal performance (specifically the Apple M1, which can match Zen 3 in single-core) and other firms working on their own in-house ARM-based CPU designs for servers (Microsoft, Google, etc.), Intel does have some issues to deal with long term. Heck, the M1 can even emulate x86-64 instructions faster than most Intel low-power parts can run native code.
Hmm, not really; there is nothing special about the M1 other than that it's made on TSMC's latest 5nm process. Intel's CPUs are made on a 10nm process, with AMD's Ryzen using TSMC's 7nm. A transistor in the M1 would be roughly 25% the area of a transistor in any of Intel's current-generation offerings; put another way, the 5nm node can theoretically pack about 4× as many transistors into the same area. Other things get in the way so it's not a perfect scaling, but that should give people an idea of the ridiculous advantage 5nm has over 10nm.
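The scaling arithmetic above can be sanity-checked with a toy calculation. This treats the marketing node name as a literal linear dimension, which real processes don't strictly honor, so it gives only the ideal upper bound:

```python
# Rough feature-size scaling sketch (hypothetical, first-order only):
# treating "5nm" and "10nm" as literal linear dimensions, the smaller
# node has half the linear size, so ~1/4 the area per transistor,
# i.e. up to ~4x the transistor density in the ideal case.
def area_scale(node_a_nm: float, node_b_nm: float) -> float:
    """Ideal density advantage of node_a over node_b (area ratio)."""
    linear = node_b_nm / node_a_nm   # 10/5 = 2x smaller linearly
    return linear ** 2               # squared -> 4x denser by area

print(area_scale(5, 10))  # 4.0 -> ~4x the transistors per mm^2
```

Real density gains between vendors' nodes are well below this ideal, since not all features (SRAM, I/O, power delivery) shrink at the same rate.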

Apple didn't design some super-awesome next-gen CPU; it's a typical big.LITTLE ARM design. All Apple did was pay through the nose to be the first customer on TSMC's newest 5nm process, something nobody else can do right now. If Intel magically fixed their process to be at 5nm, their CPUs would easily beat the M1; they haven't, so their CPUs are working with a severe handicap.

The whole core count debate has been going on since I joined; back then it was "you only need two cores", etc. The "core" requirement really depends on what the user is doing outside of games. Most people do some form of chat / social media / streaming while they play, or even have a web browser up with maps and other information. So while a game may only seem to use four cores, the user could be using another one to three outside of the game.
 
...
Other things get in the way so it's not a perfect scaling, but that should give people an idea of the ridiculous advantage 5nm has over 10nm.
...
In terms of wafer utilization I can see that advantage: more CPU cores out of a given wafer, assuming yield remains constant. That definitely speaks well for economies of scale in manufacturing, but what about performance in the conventional clock-speed sense? It may have fabulous IPC, for instance, but if the very small geometry doesn't allow adequate cooling, clock speeds could be very limited and performance would suffer in spite of a terrific architecture. That makes me wonder just what Intel's issues with 10nm, and 14nm before that, actually were.
 

hotaru.hino

Commendable
Sep 1, 2020
It may have fabulous IPC, for instance, but if the very small geometry doesn't allow adequate cooling, clock speeds could be very limited and performance would suffer in spite of a terrific architecture. That makes me wonder just what Intel's issues with 10nm, and 14nm before that, actually were.
My electronics physics knowledge is probably not the best, but smaller geometries should reduce heat simply because there's less resistance (shorter conductive paths, although thinner conductors do mean more resistance, so...) and less capacitance (parts are closer together).

The issue with Intel, I'd argue, is mostly company politics. That said, going below 22nm hasn't been straightforward: layouts at 22nm and before were mostly 2D (planar), while later nodes started incorporating 3D elements (see https://www.extremetech.com/wp-content/uploads/2019/05/FET-Types-Samsung.png). I'd imagine that's a lot harder not only to route out, but also to manufacture.
 

InvalidError

Titan
Moderator
My electronics physics knowledge is probably not the best, but smaller geometries should reduce heat simply because there's less resistance (shorter conductive paths, although thinner conductors do mean more resistance, so...) and less capacitance (parts are closer together).
Smaller wires have higher resistance per unit length though, which offsets gains from making them shorter.

The cooling problem comes from density: if you make things 3X smaller and then pack everything 3X tighter, there is 3X as much stuff to cool within a given surface area. Since die shrinks do not shrink power per transistor linearly, power density goes up and cooling becomes more difficult.
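The density argument can be put into a toy formula. The numbers below are illustrative only, not measured process data:

```python
# Toy power-density model for die shrinks (illustrative numbers only,
# not real process data). Shrinking linear dimensions by a factor s
# shrinks area by s^2, but per-transistor power rarely falls as fast,
# so watts per mm^2 (what the cooler actually sees) goes up.
def power_density_ratio(linear_scale: float, power_scale: float) -> float:
    """Ratio of new to old W/mm^2 for the same block of logic."""
    area_ratio = linear_scale ** 2       # e.g. 1/3 linear -> 1/9 the area
    return power_scale / area_ratio

# 3x linear shrink (1/9 the area) while per-transistor power only drops 3x:
print(power_density_ratio(1/3, 1/3))     # roughly 3, i.e. ~3x more W/mm^2
```

Only if per-transistor power fell as fast as area (here, 9x) would power density stay flat, which is exactly the linear-vs-quadratic mismatch the post describes.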
 
Apple didn't design some super-awesome next-gen CPU; it's a typical big.LITTLE ARM design.
That isn't true:
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive/2

Apple licenses the 64-bit ARM instruction set, not the cores; they have been designing their own implementations for years and have been notably ahead of all other ARM processors in single-core performance as a result. (Qualcomm also designed their own custom cores for the Snapdragon series, which again typically offered a performance advantage over the stock ARM core designs used by the majority of other licence holders.)
 

hotaru.hino

I make no pretense of knowing what's going on with Intel but "politics" usually interferes with the ability to effectively manage a response to the technical problems being confronted, not the problems themselves.
From what I can tell, the upper management at Intel hasn't really been running the company very well, creating a toxic working environment. It's kind of hard to be on the bleeding edge when you don't feel motivated to work.
 
From what I can tell, the upper management at Intel hasn't really been running the company very well, creating a toxic working environment. It's kind of hard to be on the bleeding edge when you don't feel motivated to work.
Oh, I totally agree... and it's not like you feel motivated to innovate a solution to a technical problem, either. Management needs to provide the constructive environment and clarity of purpose to motivate the engineers and scientists, and it seems obvious they failed at that. But my question is: however Intel mismanaged themselves through these process node failures (or effective failures, given the lack of high-performing and/or cost-effective products coming from them), there had to be technical problems that led to the failures. What were they?
 

hotaru.hino

But my question is: however Intel mismanaged themselves through these process node failures (or effective failures, given the lack of high-performing and/or cost-effective products coming from them), there had to be technical problems that led to the failures. What were they?
We'll probably never know outside of what's commonly known about going down to smaller node sizes.

But honestly, if Samsung and TSMC can get down to those node sizes while Intel struggles, despite Intel's vast resources, I'd pin most of their issues on management and company culture rather than technical problems. As some measure of proof, Intel's 10nm is superior to either company's 10nm node, and is closer to TSMC's first-generation 7nm node in a lot of areas. See https://en.wikichip.org/wiki/10_nm_lithography_process and https://en.wikichip.org/wiki/7_nm_lithography_process

EDIT: Although you could say because Intel's 10nm is closer to TSMC's 7nm, they were probably biting off more than they could chew at the time. But again, we'll never really know.
 

Eximo

Titan
Ambassador
I was hoping everyone would jump on that new metric bandwagon proposed by TSMC, among others: LMC, where you measure the density of various typical features (Logic, Memory, and Connections) and combine them into a single figure. That lets you compare process nodes directly on a performance/manufacturing basis, rather than by some artificial, technically attainable, but not necessarily desirable minimum feature size.
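A hypothetical sketch of what an LMC-style report could look like. The field names and sample figures below are invented for illustration, not published densities for any real node:

```python
# Hypothetical sketch of an LMC-style density report: quote logic,
# memory (SRAM bitcell), and connection (interconnect) densities
# together instead of a single marketing "nm" figure. All values
# below are made up for illustration.
from typing import NamedTuple

class LMCDensity(NamedTuple):
    logic_mtr_per_mm2: float    # logic transistors, millions per mm^2
    memory_mbit_per_mm2: float  # SRAM bit density, Mbit per mm^2
    connect_km_per_mm2: float   # interconnect length density, km per mm^2

def format_lmc(d: LMCDensity) -> str:
    """Render the triplet in a compact [L, M, C] style."""
    return (f"[{d.logic_mtr_per_mm2:.0f}L, "
            f"{d.memory_mbit_per_mm2:.0f}M, "
            f"{d.connect_km_per_mm2:.1f}C]")

node_a = LMCDensity(90, 30, 1.2)   # invented example values
print(format_lmc(node_a))          # -> [90L, 30M, 1.2C]
```

The point of quoting three densities is that two "7nm" processes can differ sharply in SRAM or interconnect density even when their logic density is similar.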

You also have different variants within a single node. The 7nm TSMC uses for CPUs is not the same 7nm they use for GPUs: same equipment, but very different methodologies.

As I recall from der8auer's electron-microscope findings, Intel's 10nm is very close to AMD's 7nm on the CPU side of things, though AMD still has an edge in density. That's only looking at the logic transistors, though; if you explored other parts of the CPU, the results might be surprising.
 

hotaru.hino

I was hoping everyone would jump on that new metric bandwagon proposed by TSMC, among others: LMC, where you measure the density of various typical features (Logic, Memory, and Connections) and combine them into a single figure. That lets you compare process nodes directly on a performance/manufacturing basis, rather than by some artificial, technically attainable, but not necessarily desirable minimum feature size.

You also have different variants within a single node. The 7nm TSMC uses for CPUs is not the same 7nm they use for GPUs: same equipment, but very different methodologies.

As I recall from der8auer's electron-microscope findings, Intel's 10nm is very close to AMD's 7nm on the CPU side of things, though AMD still has an edge in density. That's only looking at the logic transistors, though; if you explored other parts of the CPU, the results might be surprising.
I think at one point IEEE or some other large electronics body wanted companies to standardize on a logical node-naming scheme well before now, but marketing won out over engineering, so companies kept doing whatever they wanted.
 

Eximo

Well, the point was that once they hit 1nm they aren't very likely to try decimals or picometers, and they'll have to start doing even more 3D designs, so "gate width" will no longer be the driving factor. It's just a shame that certain things get stuck in people's minds.

Also, a shout-out for GDDR5 and GDDR6, which are actually quad data rate: they've doubled up on the bits per cycle. GQDR1 and GQDR2?
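The "bits per cycle" arithmetic can be sketched like this. The clock, pumping factor, and bus width below are hypothetical round numbers, not any specific product's spec:

```python
# Effective memory bandwidth arithmetic (illustrative): GDDR parts
# transfer multiple bits per pin per memory-clock cycle, which is why
# quoted "effective" data rates sit far above the actual clock.
def bandwidth_gbps(memory_clock_mhz: float, bits_per_clock: int,
                   bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a clock, pumping factor, and bus width."""
    transfers_per_sec = memory_clock_mhz * 1e6 * bits_per_clock
    return transfers_per_sec * bus_width_bits / 8 / 1e9  # bits -> bytes -> GB

# Hypothetical 1750 MHz chip, quad-pumped (4 bits/clock), 256-bit bus:
print(bandwidth_gbps(1750, 4, 256))  # -> 224.0 GB/s
```

Halve the pumping factor to 2 (plain DDR) and the same clock and bus deliver half the bandwidth, which is the whole point of the quad-data-rate signalling.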
 
Oh, I totally agree... and it's not like you feel motivated to innovate a solution to a technical problem, either. Management needs to provide the constructive environment and clarity of purpose to motivate the engineers and scientists, and it seems obvious they failed at that. But my question is: however Intel mismanaged themselves through these process node failures (or effective failures, given the lack of high-performing and/or cost-effective products coming from them), there had to be technical problems that led to the failures. What were they?
The big decision Intel made was thinking they could do 10nm (which is really closer to TSMC's 7nm) without needing EUV, which has obviously turned out to be a horrible mistake on Intel's part.
 
The big decision Intel made was thinking they could do 10nm (which is really closer to TSMC's 7nm) without needing EUV, which has obviously turned out to be a horrible mistake on Intel's part.
Do you have any insight into whether that was a management decision? Maybe a cost- or schedule-driven compromise, or perhaps something more akin to engineering arrogance; a "not invented here" reaction.
 

InvalidError

Do you have any insight into whether that was a management decision? Maybe a cost- or schedule-driven compromise, or perhaps something more akin to engineering arrogance; a "not invented here" reaction.
To be fair, most other fabs' first-gen 7nm-class processes are also multi-patterned DUV; this isn't Intel-exclusive.

Intel's problem is that it thought it could get there first on its own, using mostly existing equipment and the in-house expertise it gained from running into brick walls with 14nm. Other manufacturers simply waited for ASML to develop new DUV equipment more suitable for 10nm-and-beyond to bridge the gap until EUV equipment became available. (The DUV equipment is still useful for coarser layers like power planes; it makes no sense to waste limited EUV machine time on those.)
 
....
Intel's problem is that it thought it could get there first on its own using mostly existing equipment
....
So that sets up a classic scenario: management reluctance to commit capital for investment in new equipment in the face of clear product dominance in the marketplace ("let's return value to the shareholders, guys"), and engineering teams (or one, at least) with a can-do attitude lending that strategy the necessary support. There's more than enough blame to go around.

Considering Intel still has overwhelming market dominance, I'd have to say it was the smart decision even at the cost of hemorrhaging market share (record profits; shareholders have to like that), which, in the midst of AMD's supply problems, they currently aren't doing anyway. They know AMD has no real chance of overtaking their lead anytime soon, so there's more than enough time to get back in the technical/performance lead. So long as management loosens the purse strings, that is... which one has to think they will.

AMD doesn't have a serious chance of getting to 50% market share if they can't gain some measure of control over their wafer supply. I've read that Apple has locked up TSMC's 5nm wafer capacity; if that's true, it sounds like AMD's chances with Zen 4 are nil.
 

InvalidError

So that sets up a classic scenario: management reluctance to commit capital for investment in new equipment in the face of clear product dominance in the marketplace
What reluctance to buy new equipment? Back when Intel developed 14nm internally, the best Intel could have bought from external sources was 22-28nm, which everyone else was stuck on for several more years. When it started its push for 10nm, there was nothing for Intel to buy on the open market to improve its situation, and there wouldn't be until many years later.
 
What reluctance to buy new equipment? Back when Intel developed 14nm internally, the best Intel could have bought from external sources was 22-28nm, which everyone else was stuck on for several more years. When it started its push for 10nm, there was nothing for Intel to buy on the open market to improve its situation, and there wouldn't be until many years later.
So you're saying they had sufficient capability and capacity, or at least enough that any capital investment would be immaterial, to produce a full product line at 10nm, and did nothing with it?

That doesn't make sense with respect to your previous statement that "Intel's problem is that it thought it could get there first on its own using mostly existing equipment and the in-house expertise..." irrespective of how they got it.

Oh yes, and an architecture that could run on it; don't forget that. You don't find those growing on trees either, and it takes some capital investment to get one into production, I'm pretty sure.

Keep in mind my original question: what did Intel do wrong that made 10nm such a failure? That's what I'm interested in understanding. With that in mind, your response painted a picture of engineering teams trying to do 10nm on process equipment meant for 14nm, and that being the failure point.

BTW: I'm fully aware that process node designations like "10nm" and "7nm" are essentially marketing terms. So to me, the more revealing bit of info was your implication that they were using DUV equipment for "10nm" rather than EUV. Was that intended?
 

InvalidError

So you're saying they had sufficient capability and capacity, or at least enough that any capital investment would be immaterial, to produce a full product line at 10nm, and did nothing with it?
You missed the point. I never said Intel wasn't going to need more equipment eventually. My comment about "existing equipment" was with regard to the equipment available for purchase at the time: nothing better existed, so Intel had to rely on internal equipment R&D for 10nm to maintain its process lead.

If Intel's bid for 10nm had paid off in time, most of Intel's fabs would have been upgraded to 10nm years ago, Ryzen's launch would have been washed away by a tsunami of Ice Lake parts and AMD would probably be seeking bankruptcy protection right now if not already liquidated.
 
... and AMD would probably be seeking bankruptcy protection right now if not already liquidated.
So now you're saying it was engineering arrogance? LOL

That... I'm not so sure about. After all, the Bulldozer/Excavator architecture they were on had long since seen its prime and was essentially irrelevant even then. I think they were being kept alive purely by their game-console business, plus cash infusions from investors who saw something; if nothing else, a patent portfolio and x86 licensing rights that nobody else has. Intel was (and still is) in no position to take away the game consoles, as they didn't have anything competitive GPU-wise.

AMD's recovery would doubtless have been much less remarkable, but they'd be doing a lot better than they were. I think Zen 1, with Zen 1+ and Zen 2 in the lab, would have been enough to keep investors interested (in addition to the patents and licenses), even if it wasn't enough to turn the balance sheet.
 

InvalidError

I think Zen 1, with Zen 1+ and Zen 2 in the lab, would have been enough to keep investors interested (in addition to the patents and licenses), even if it wasn't enough to turn the balance sheet.
Had Intel's 10nm been on-time and on-target for performance and yields, Zen 1 would have faced off against something closer to Ice Lake. AMD wouldn't have survived Ice Lake launching four years earlier.

I wouldn't be surprised if part of the reason why Intel took extra risks with process development was to give AMD a chance to survive so Intel could avoid becoming a strictly regulated monopoly after AMD's bankruptcy.
 
Had Intel's 10nm been on-time and on-target for performance and yields, Zen 1 would have faced off against something closer to Ice Lake. AMD wouldn't have survived Ice Lake launching four years earlier.
...
I just can't put any weight on that, as AMD's desktop, mobile, and server product lines were already so irrelevant performance-wise. People did not buy AMD products for performance, but for something else they'd never find in buying Intel. Nothing Intel could have done would change that dynamic.

...
I wouldn't be surprised if part of the reason why Intel took extra risks with process development was to give AMD a chance to survive so Intel could avoid becoming a strictly regulated monopoly after AMD's bankruptcy.
How funny... because the course of action they took actually looks much more like low risk, especially if you consider they felt AMD had no chance even to compete on performance, much less ever take the lead, which must have seemed logical at the time. After all, why risk heavy investment in anything, even something that could make 10nm a sure success earlier, when you can always fall back on squeezing more and more out of 14nm until 10nm shows real promise?

It worked too, right up until Zen 3 popped that bubble.
 

InvalidError

People did not buy AMD products for performance
Pretty sure AMD catching up on performance by about 40% with Ryzen was a huge part of why Ryzen's launch wasn't an abysmal failure despite premature firmware and sub-par overall compatibility. Keep in mind that AMD hadn't been profitable in years and had multiple large loans coming due at that point, so a failed Ryzen launch would likely have made AMD insolvent.
 
... so a failed Ryzen launch would likely have made AMD insolvent.
First: a failed Ryzen launch would have been AMD failing, not Intel succeeding with whatever their 10nm strategy actually was. Completely different things.

Second: even if Ryzen hadn't been as well received as it was, that would at worst have made AMD a peachy takeover target. There would have been a buyer, maybe one of the majority stakeholders. Like I said, AMD's patent portfolio and x86 licensing rights make them far too sweet a property to overlook, or to simply take apart: the licensing rights alone make them far more valuable whole than in pieces.

They would have lived on in one way or another.

I'm also talking about before Ryzen: Bulldozer/Excavator. But even after Ryzen proved a success, and even IF 10nm had been all Intel wanted it to be, the same crowd that hung in before would still hang in there. The performance alone wasn't enough before; the performance wouldn't have been enough after.

And besides, this is all purely hypothetical. Intel failed, AMD succeeded; that's indisputable now. I'm just curious why Intel failed.
 
