News AMD CEO Lisa Su is First Woman to Top AP CEO Pay Analysis

bit_user

When you get a clue and stop with the personal attacks, I will respond.
Wow, are you serious? Where did I make a personal attack?

Okay, I get that it wasn't really fair of me to suggest that you'd be spinning that same story about why Intel is so great, 5-10 years from now. It's just that you seem to exhibit all the enthusiasm for Intel (and Nvidia) of a nervous investor trying to talk up the stock price. Not saying that's the case, but I do get that impression.

I don't share your optimism, and it strikes me as a bit overwrought. I don't apologize for calling it out when someone seems too enthusiastic.
 

bit_user

Yeah, managers' monthly salaries in the millions range, and others?
Really? Managers? Which company are we talking about?

I think it's pretty rare that you'd see anyone outside the C-suite execs getting $M annual compensation, let alone monthly.

I'm all for reforming executive compensation, but when you start talking about "managers" that really seems to muddy the waters.

BTW, I'm not any sort of manager.
 

Deicidium369

Wow, are you serious? Where did I make a personal attack?

Okay, I get that it wasn't really fair of me to suggest that you'd be spinning that same story about why Intel is so great, 5-10 years from now. It's just that you seem to exhibit all the enthusiasm for Intel (and Nvidia) of a nervous investor trying to talk up the stock price. Not saying that's the case, but I do get that impression.

I don't share your optimism, and it strikes me as a bit overwrought. I don't apologize for calling it out when someone seems too enthusiastic.
I don't care that we don't see eye to eye - that's great - it makes it interesting. The writing is on the wall - CXL is embraced by every single major company - including AMD - the point of CXL is a cache-coherent structure across multiple pools of resources. It may not be a factor in the business data center - but in hyperscale environments this is already happening. There will be a major change - and it will happen soon. CXL and Gen-Z are both rapidly moving to implementation - the enabling tech for both is PCIe 5 and eventually PCIe 6. CXL is inside the rack, and Gen-Z is between racks (cache-coherent InfiniBand - which may explain the Nvidia purchase of Mellanox).

I will still bet that there will be mergers. Nvidia went to AMD not because of Epyc - but because it wasn't Intel - and Nvidia knows that Intel has the reach and the resources to put a dent in the CUDA ecosystem - and that will put Nvidia at risk. AMD will not have a pony in this ring - say what you will about how good AMD GPUs are getting - they aren't that great. No ecosystem (and thus no hardware lock-in) makes them the most vulnerable. With AMD and Nvidia teaming up / merging, AMD would have the currently dominant GPU and ecosystem, and Nvidia would have access to a CPU that they could influence to work better with their hardware (POWER has NVLink on die - do you not think that having NVLink on die would be beneficial?) - and the R&D could be combined and used a LOT more efficiently - Radeon spun off or shelved. The point about Xilinx is that they have no chance in the data center at this point - Xilinx is tiny, has nothing other than FPGA - and having a partner like AMD/Nvidia would only be beneficial to all 3. At that point CPU, GPU, AI and FPGA would be under one roof - and that could compete with Intel, which has CPU, GPU, AI and FPGA under one roof. From a strategic point of view - it's a win-win-win for AMD / Nvidia / Xilinx. Intel turned CXL over to a standards body, and Gen-Z is under the auspices of a standards body. One way or another - Intel is dominant in CPU, has a world-class GPU incoming, already has the top of the pile in FPGA, and with the Habana purchase - the top AI - and if need be they can license Tensor from Google. Intel is coming for AMD, Nvidia and to a lesser degree Xilinx - it would be much easier to pick them off one by one - but with those 3 united and pooling resources, they would at least stand a chance. Headwinds or not - there will be a battle.

My bet is on Intel - they have been structuring for this for quite a while - and they have the market position and all the pieces of the puzzle - hardware, software (oneAPI), optical networking (Omnipath was going nowhere), market-leading Optane SSDs (not losing money on Optane) and market-changing Optane NVDIMMs.

As far as being an investor - I hold no hardware stocks and haven't for 15 years. I have significant holdings, but not in AMD or Nvidia or Intel, or any hardware company at all. I just pay attention and put the pieces together - it's all out there.

Calling things out is fine, but your lack of knowledge (which you admitted to - don't know what CXL is) and just doing the equivalent of "nuh uh" doesn't make your arguments cogent or an actual rebuttal of my points. I ENJOY this stuff, and pay attention - I have probably built more AMD PCs than most people here - I don't "hate" AMD - I just don't buy into the Cinderella story that gets portrayed - Lisa Su is not magical - she is executing the basics - which is a base level of competency required of someone at the C level. She is fighting an uphill battle - and in the hyperscale environments - even more so. She is light years ahead of her predecessors at AMD.

When Tesla was getting ready for the launch of the Model S - some people actually thought that it would damage Toyota so badly that they might never recover - anyone with a clue thought that was laughable at best - a small company, regardless of how great its tech is, does not stand a chance of damaging a company like Toyota. It was a good story, David vs Goliath - but in the end Toyota is unfazed by Tesla. The underdog vs the juggernaut is a cool story - but the underdog will never beat the juggernaut: Tesla will never overtake Toyota, and AMD will never overtake Intel.

So if you want to continue to have arguments about tech, that is fine; if you want to argue your points against mine - bring it. Trying to say that I am like a nervous investor trying to goose the stock price is childish. If you continue with that sort of argument, then you will be the first person here I put on ignore.

I am going to bed. I look forward to your reply - and I hope it has some heft.
 

Deicidium369

Really? Managers? Which company are we talking about?

I think it's pretty rare that you'd see anyone outside the C-suite execs getting $M annual compensation, let alone monthly.

I'm all for reforming executive compensation, but when you start talking about "managers" that really seems to muddy the waters.

BTW, I'm not any sort of manager.

Yeah, not managers - sales, maybe. Managers - no. Agreed that the term "management" is nebulous at best.
 

bit_user

I don't care that we don't see eye to eye - that's great - it makes it interesting. The writing is on the wall - CXL is embraced by every single major company - including AMD - the point of CXL is a cache-coherent structure across multiple pools of resources.
That's not my understanding. I think CXL is aimed at direct-attached accelerators. Gen-Z is what you want for disaggregation.

Even if I'm wrong, I don't really see Intel as having a huge advantage with CXL. Intel was a founding member of the PCIe consortium, and right now AMD is pwning Intel on PCIe. Moreover, CXL piggybacks on the PCIe PHY layer, as I understand it, giving anyone with such depth in PCIe a head start with CXL.

AMD will not have a pony in this ring - say what you will about how good AMD GPUs are getting - they aren't that great.
I did say that AMD probably thinks their GPUs are finally starting to catch up. We don't know if they really are. RDNA finally gained some ground, but they need to repeat those gains a few more times.

Also, the AMD/Samsung partnership will be one to watch. RDNA in mobile could help build a lot of momentum. I wonder if Samsung might even build a MS Surface-competitor around one of their ARM+RDNA SoCs.

On the flip side, AMD has had a bad habit of chasing where Nvidia was, only for Nvidia to do something game-changing, like tensor cores or ray tracing.

POWER has NVLink on die - do you not think that having NVLink on die would be beneficial?
Doesn't that kind of contradict your points about CXL ruling the world? I think AMD would prefer to use standard interconnects, so that they benefit more than just one type of accelerator or peripheral.

The point about Xilinx is that they have no chance in the data center at this point - Xilinx is tiny, has nothing other than FPGA
I honestly don't know how big a role FPGAs are going to play. All I know is that Intel announced Xeons with integrated FPGAs, back in the Broadwell era, and I believe they never saw the light of day. Or, maybe they were delivered to some close partners, but I haven't heard any more about that sort of thing, since.

For specialized signal processing applications, FPGAs rule. Much beyond that, I really can't say. Purpose-built AI accelerators are better at AI. Purpose-built crypto ASICs are better at mining. I just don't know how much is left, for FPGAs.

Intel is coming for AMD, Nvidia and to a lesser degree Xilinx
I get what Intel wants to do, but you're presuming flawless execution. Need I remind you that Intel's had a number of very major failures, recently? Beyond the manufacturing realm, there's Xeon Phi, OmniPath, their mobile SoCs, Optane, Nervana, and even their ridiculous MCM Xeons don't seem to have gained any real market traction.


My point is simply that Intel doesn't succeed at everything they try. Just putting out some ambitious vision doesn't mean all will go according to plan. And the more pieces one company tries to do in-house, the more chances there are for it to go south.

My bet is on Intel - they have been structuring for this for quite a while
They've sure been restructuring, alright.

As far as being an investor - I hold no hardware stocks and haven't for 15 years.
For my part, I stopped buying individual stocks, long ago. Maybe I should put my tech knowledge towards investing, but I always focus on the tech and less on the business. My experiences with investing have shown me that I'm just not that interested in finance, and that tends to bite me.

your lack of knowledge (which you admitted to - don't know what CXL is)
That was just me expressing confusion about how you were talking about it, but I wasn't in a mood to search out a bunch of articles to see whether I'd missed something. So, I allowed for that possibility. I'm generally not one to exude confidence I don't have.

AMD will never overtake Intel.
They don't have to, nor do I think they will. Intel's challenges come on many fronts:
  • Loss of the Chinese market
  • Upcoming Chinese competitors, mostly challenging them outside the US and Europe
  • Numerous AI competitors
  • ARM-based servers
  • ARM-based notebook SoCs
  • AMD CPUs eroding marketshare and margins
  • Nvidia and AMD GPUs
And that's assuming they get their manufacturing debacle sorted out and catch up to TSMC and Samsung. I think that's likely, but not a given.

Trying to say that I am like a nervous investor trying to goose the stock price is childish.
When we see someone so enthusiastic, who's obviously not just a fanboy gamer, it's natural to be a little suspicious. Especially hearing you talk about things like P/E ratios. Moreover, I'm pretty sure I've seen a few real cases of it, over the years. That said, I take you at your word.
 
...When we see someone so enthusiastic, who's obviously not just a fanboy gamer, it's natural to be a little suspicious. Especially hearing you talk about things like P/E ratios. Moreover, I'm pretty sure I've seen a few real cases of it, over the years. That said, I take you at your word.
Wow. You, sir, are far too kind to take on that wall of text. The apparent passion that D has for Intel is ... well, alarming is probably the best way to put it. Intel isn't going anywhere any time soon, but definitely hasn't been on its A game for the past few years. Xe Graphics and some of the other stuff Intel is trying to get into are all interesting, but we need final working product. In that sense, Intel is becoming like Google: fling poop at the wall and see what sticks.

On a different subject: Everyone in the industry knows the end of lithography scaling is coming, sooner than later. I'm not sure how small they can go and still make a working chip, but talking about nanometers is becoming increasingly meaningless. What's a 7nm chip? Well, one aspect of the chip measures 7nm, I guess, but certainly not the largest dimension. I don't have the exact numbers handy (and finding them can be a bit tricky), but Wikichip has some reasonable details:
https://en.wikichip.org/wiki/7_nm_lithography_process
https://en.wikichip.org/wiki/10_nm_lithography_process

Point of that being, a 10->7 shrink on TSMC shows some good gains, but perhaps not as good as the main numbers (ie, 7 and 10 nm) might lead you to think. If we ever have 1nm lithography, what will that mean? 1nm fin width with a fin height of 140nm? How big will the gap between traces need to be? And a copper or aluminum atom is only about 1/8 of a nm (125pm-128pm I think) in size!

So, Intel and other big microprocessor / fabrication companies are trying to find the next route forward. And even as a big sci-fi fan, I don't think quantum computing PCs are going to be anywhere near as potent and useful as some are suggesting / hoping / guessing. Another 20 years or so and lithography will probably look a lot like the current automotive industry: there will be differences in engine types, each better suited for certain types of work, but we won't be seeing doubling of performance (or density, or transistor counts, or whatever) on a regular cadence.
 

Deicidium369

That's not my understanding. I think CXL is aimed at direct-attached accelerators. Gen-Z is what you want for disaggregation.

Even if I'm wrong, I don't really see Intel as having a huge advantage with CXL. Intel was a founding member of the PCIe consortium, and right now AMD is pwning Intel on PCIe. Moreover, CXL piggybacks on the PCIe PHY layer, as I understand it, giving anyone with such depth in PCIe a head start with CXL.

A few systems with PCIe4 do not constitute "pwning" - it's not DOTA, it's not a game. AMD is a small bit player - and its adoption of PCIe4 is meaningless. The PCIe4 market starts with Rocket Lake S and Ice Lake SP.
Not sure what you mean by head start.


I did say that AMD probably thinks their GPUs are finally starting to catch up. We don't know if they really are. RDNA finally gained some ground, but they need to repeat those gains a few more times.

Definitely not gaining ground - either on the desktop or in the data center - Nvidia is absolutely dominant in the data center.

Also, the AMD/Samsung partnership will be one to watch. RDNA in mobile could help build a lot of momentum. I wonder if Samsung might even build a MS Surface-competitor around one of their ARM+RDNA SoCs.

Samsung will release one SoC with RDNA and then, in true Samsung fashion, will shelve it.

On the flip side, AMD has had a bad habit of chasing where Nvidia was, only for Nvidia to do something game-changing, like tensor cores or ray tracing.

Reminds me of the last invader in Space Invaders - you have to shoot where the competition will be, not where it is.


Doesn't that kind of contradict your points about CXL ruling the world? I think AMD would prefer to use standard interconnects, so that they benefit more than just one type of accelerator or peripheral.

CXL is a standard - as standard as PCIe...
https://www.computeexpresslink.org/members


I honestly don't know how big a role FPGAs are going to play. All I know is that Intel announced Xeons with integrated FPGAs, back in the Broadwell era, and I believe they never saw the light of day. Or, maybe they were delivered to some close partners, but I haven't heard any more about that sort of thing, since.

That was one product, probably for a specific customer.

For specialized signal processing applications, FPGAs rule. Much beyond that, I really can't say. Purpose-built AI accelerators are better at AI. Purpose-built crypto ASICs are better at mining. I just don't know how much is left, for FPGAs.

FPGAs are reconfigurable to whatever configuration is needed to do calculations based on the customer's needs - I also agree that, out of the bunch, I don't see FPGAs being a huge market.

I get what Intel wants to do, but you're presuming flawless execution. Need I remind you that Intel's had a number of very major failures, recently? Beyond the manufacturing realm, there's Xeon Phi, OmniPath, their mobile SoCs, Optane, Nervana, and even their ridiculous MCM Xeons don't seem to have gained any real market traction.

Phi is ancient history. Omnipath is ancient history. Optane is firing on all cylinders. Nervana replaced with Habana. And I am sure the "ridiculous MCM Xeons" are selling quite well.


My point is simply that Intel doesn't succeed at everything they try. Just putting out some ambitious vision doesn't mean all will go according to plan. And the more pieces one company tries to do in-house, the more chances there are for it to go south.

How long was it that AMD didn't have a competitive product? 12 years?

They've sure been restructuring, alright

Yes they have - restructured the groups


For my part, I stopped buying individual stocks, long ago. Maybe I should put my tech knowledge towards investing, but I always focus on the tech and less on the business. My experiences with investing have shown me that I'm just not that interested in finance, and that tends to bite me.

I own Google and Amazon in my personal portfolio - bought before either of them made money. ~120K shares in Amazon - bought well before the round of splits. Even in the investment/private equity company I am a half owner of, we generally do not bother with hardware stocks. But stocks are just one of the 6 or 7 areas we focus on.


That was just me expressing confusion about how you were talking about it, but I wasn't in a mood to search out a bunch of articles to see whether I'd missed something. So, I allowed for that possibility. I'm generally not one to exude confidence I don't have.

Spent years in Big IT - architect of large-scale networks and data centers - including building out five-nines data centers (99.999% uptime). Saw when MS got its foot in the data center - watched the Opteron debacle unfold - watched virtualization start to take hold.


They don't have to, nor do I think they will. Intel's challenges come on many fronts:
  • Loss of the Chinese market
Doesn't matter - pretty sure Intel knew this was coming

  • Upcoming Chinese competitors, mostly challenging them outside the US and Europe
Huawei? Yeah, not.
  • Numerous AI competitors
Habana is the top of the heap
  • ARM-based servers
Those have been coming for years - and are still not here
  • ARM-based notebook SoCs
Doesn't matter - nothing new, too many tradeoffs
  • AMD CPUs eroding marketshare and margins
Yeah, no. gaining little market share - Intel's margins are fine
  • Nvidia and AMD GPUs
Nvidia GPUs - Nvidia is scared of what is coming from Intel - and AMD GPUs LOL

And that's assuming they get their manufacturing debacle sorted out and catch up to TSMC and Samsung. I think that's likely, but not a given.

Not a debacle - 10nm/10nm+ is working fine - 10nm was lower volume, 10nm+ will be shipped in quite a few forms this year - Tiger Lake for laptops/NUCs, Ice Lake SP for 1- and 2-socket servers (the vast majority of the market), the Xe HP compute cards... and we will probably see the next Stratix FPGA on 10nm+.

Samsung - I love Samsung - phones, washers, dryers, etc. - they can't get out of their own way - and they always find ways to snatch mediocrity from the jaws of greatness.

TSMC has come a long way. But their history has more failures and half-baked processes than successes like the 16/14/12 and 10nm-class lines. So it's not like they have success on lock.

When we see someone so enthusiastic, who's obviously not just a fanboy gamer, it's natural to be a little suspicious. Especially hearing you talk about things like P/E ratios. Moreover, I'm pretty sure I've seen a few real cases of it, over the years. That said, I take you at your word.

Unlike a lot of people here, I spent years in Big IT. I understand finance and stocks - but it's nowhere near a passion. It's useful info, but I have people who take care of the details. Nothing wrong with enthusiasm. Not a fanboy - that made my day - too many people try to shut down discussion with the fanboy label. I am not a fan of any corporation other than my 3 Class Cs.
 

Deicidium369

Wow. You, sir, are far too kind to take on that wall of text. The apparent passion that D has for Intel is ... well, alarming is probably the best way to put it. Intel isn't going anywhere any time soon, but definitely hasn't been on its A game for the past few years. Xe Graphics and some of the other stuff Intel is trying to get into are all interesting, but we need final working product. In that sense, Intel is becoming like Google: fling poop at the wall and see what sticks.

On a different subject: Everyone in the industry knows the end of lithography scaling is coming, sooner than later. I'm not sure how small they can go and still make a working chip, but talking about nanometers is becoming increasingly meaningless. What's a 7nm chip? Well, one aspect of the chip measures 7nm, I guess, but certainly not the largest dimension. I don't have the exact numbers handy (and finding them can be a bit tricky), but Wikichip has some reasonable details:
https://en.wikichip.org/wiki/7_nm_lithography_process
https://en.wikichip.org/wiki/10_nm_lithography_process

Point of that being, a 10->7 shrink on TSMC shows some good gains, but perhaps not as good as the main numbers (ie, 7 and 10 nm) might lead you to think. If we ever have 1nm lithography, what will that mean? 1nm fin width with a fin height of 140nm? How big will the gap between traces need to be? And a copper or aluminum atom is only about 1/8 of a nm (125pm-128pm I think) in size!

So, Intel and other big microprocessor / fabrication companies are trying to find the next route forward. And even as a big sci-fi fan, I don't think quantum computing PCs are going to be anywhere near as potent and useful as some are suggesting / hoping / guessing. Another 20 years or so and lithography will probably look a lot like the current automotive industry: there will be differences in engine types, each better suited for certain types of work, but we won't be seeing doubling of performance (or density, or transistor counts, or whatever) on a regular cadence.
So, too many words - coming from a journalist, that's kinda strange. I took journalism classes while getting my BS and my 3 Masters and my PhD. Fun - kinda like English Comp classes or Creative Writing. They were a nice respite from the extremely math-heavy engineering classes, and not as boring as the classes to get my MBA. None of which I ever used in my career - though I did use my Masters in Architecture to design my main home and vacation homes.

"The apparent passion that JW has for AMD is ... well, alarming is probably the best way to put it."

We used to have pure R&D companies - Xerox PARC, Bellcore Labs, IBM. Out of the group only IBM is still doing pure research - and there are often MANY piles of poop flung against the wall to see what sticks - that's how it works. Intel is deep in FPGA and is shipping (Altera Stratix). AI is being shipped by Habana (hottest AI company on the planet ATM) - and the GPUs are available - and will not be i740 part 2. So other than the 5G stuff - what is Intel flinging against the wall? Optane SSDs are the best available - and Optane Memory will really debut heavily with the new platforms. Intel networking is pretty well established - and gets included on a lot of AMD systems.

Xe LP is working - you can get the DG-1 card to get the same config as in Tiger Lake. Xe HP (the big chip Raj was holding) is likely a 16,384-core part (at the full 4-chip config) destined for compute cards. We won't see Xe HPC until the exascale system next year.

And TSMC still does not have a 7nm product - they have a process that marketing named 7nm - but it is a 10nm-class process. I do agree that the "nm" wars are meaningless and at some point it will end - electrons are a certain size and quantum tunneling is on the horizon.

Not seeing Quantum Computing being that big of a deal just yet - certainly nothing like in Devs/Deus - still pretty specialized - I am not sure we will ever see a general purpose QC like the PCs we have today.

Assuming a copper conductive layer / traces - look into cobalt - only one company has done the work to get cobalt under control - and it's not TSMC or Samsung. Silicon is on borrowed time - whether it's gallium arsenide (Cray did this in the 80s) or something else - it won't be silicon.
 

bit_user

So, Intel and other big microprocessor / fabrication companies are trying to find the next route forward.
True. I assume the generational gains in semiconductor manufacturing will simply become ever smaller.

At some point, I wonder if chips will eventually be designed to wear out after a certain amount of usage. We're already hearing that current chips measurably degrade, with manufacturers having to account for this and even actively compensate for it. As things get smaller, the effect will only become worse.

So, maybe we see CPUs & GPUs more actively segregated into light/medium/heavy-duty tiers and maybe they even have some number of hours the manufacturer guarantees them for, at that usage level.

And even as a big sci-fi fan, I don't think quantum computing PCs are going to be anywhere near as potent and useful as some are suggesting / hoping / guessing.
Yeah, I'm still at the point of doubting most people will ever physically see them. As long as they remain so sensitive to interference and require even modestly sub-zero cooling, I doubt they'll be very practical or affordable for most private individuals and businesses. If cloud computing ever made sense, it sure does for quantum.

Another 20 years or so and lithography will probably look a lot like the current automotive industry: there will be differences in engine types, each better suited for certain types of work,
Yeah, it should finally be the breaking point, for x86. As manufacturing improvements plateau, there will be too much temptation for cloud and hyperscalers to cut away the overhead inherent to that ISA. We're seeing x86's market share starting to erode on all fronts - ARM-based Win 10 (and maybe Apple) laptops and ARM-based Amazon server instances... and where laptops go, desktop will eventually follow.

I think there's a lot to be gained by doing away with the distinction between instructions and uOps. Perhaps an intermediate step would be to let the CPU expose uOps, so that the OS could do things like cache the translated instruction stream and JIT compilers could generate it, natively. That's something you could still do without completely dumping x86, though.

A lot of infrastructure has been built up to support these sorts of things for GPUs. Both OpenGL/Vulkan and Direct3D have calls for translating your program and even letting you read/run an intermediate representation or in fact the native one.

The next target is probably x86's relatively small register file. And, while we're talking about registers, I'll just mention that I'm sure CPU architects would probably love to have another crack at SSE/AVX. The behavior of those extensions dates back to a time when designers were less worried about power and scalability, so you have bone-headed things like a 32-bit scalar instruction having to copy the other 480 bits in the register. That behavior dates all the way back to SSE, introduced over 20 years ago!
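To make that concrete, here's a minimal C sketch of the merge behavior (the values and variable names are just illustrative; compile with any SSE-capable compiler). The scalar ADDSS intrinsic only computes the low lane, but the result still carries the upper lanes of the first source, so the whole register stays live as a dependency:

```c
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* _mm_set_ps lists lanes high-to-low, so a = [1, 2, 3, 4] from low to high. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

    /* ADDSS: only the low 32-bit lane is added (1 + 10); the upper three
       lanes are merged in from 'a', so the hardware has to treat the whole
       register as an input, not just the 32 bits you actually care about. */
    __m128 r = _mm_add_ss(a, b);

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* prints: 11 2 3 4 */
    return 0;
}
```

Scale the register up to 512 bits and that same merge semantics is roughly where the "copy the other 480 bits" complaint comes from.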

Finally, much has been written about the inefficiencies of the CPU cache hierarchy, which I've heard estimated at up to 85%. The low-hanging fruit that has already been picked by some ISAs, such as ARM, is to relax the memory consistency guarantees. But there are a lot bigger efficiency gains to be had, the more you can circumvent the cache and either go to a direct-mapped on-chip memory or straight to DRAM. I could write more about this, but this post is already long enough.
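As a small illustration of the memory-ordering point, here's a C11 sketch of the classic publish/consume pattern (the names are made up for the example). On x86, which is already strongly ordered, the release store is just a plain store, while the sequentially consistent default typically costs a locked instruction or fence; on a weakly ordered ISA like ARM, the compiler can drop barriers entirely for the relaxed accesses:

```c
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic int data;
static atomic_bool ready;

/* Producer: write the payload, then publish the flag with release semantics. */
void publish(int value)
{
    atomic_store_explicit(&data, value, memory_order_relaxed);
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Consumer: acquire the flag; if it's set, the payload write is guaranteed
   to be visible, and the payload itself can be read relaxed. */
bool try_consume(int *out)
{
    if (!atomic_load_explicit(&ready, memory_order_acquire))
        return false;
    *out = atomic_load_explicit(&data, memory_order_relaxed);
    return true;
}
```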
 

bit_user

Ugh, quoting. Yes, it's annoying... but I'm not sure the best approach is just to disregard it. You could just type your whole reply below the quote box, if you don't want to deal with all the inline tags.

A few systems with PCIe4 do not constitute "pwning" - it's not DOTA, it's not a game. AMD is a small bit player - and its adoption of PCIe4 is meaningless. The PCIe4 market starts with Rocket Lake S and Ice Lake SP.
It was just an example of how Intel being a founding member of PCIe doesn't automatically put them on the cutting-edge of it. Not a perfect analogy with CXL, but then it's not as if AMD doesn't have plenty of experience with cache-coherent fabrics (e.g. Infinity Fabric).

Doesn't that kind of contradict your points about CXL ruling the world? I think AMD would prefer to use standard interconnects, so that they benefit more than just one type of accelerator or peripheral.
CXL is a standard - as standard as PCIe...
https://www.computeexpresslink.org/members
Totally agree. The context was your suggestion that AMD might like to integrate NVLink into their CPUs and my answer was they'd probably rather stick with CXL.

I get what Intel wants to do, but you're presuming flawless execution. Need I remind you that Intel's had a number of very major failures, recently? Beyond the manufacturing realm, there's Xeon Phi, OmniPath, their mobile SoCs, Optane, Nervana, and even their ridiculous MCM Xeons don't seem to have gained any real market traction.

Phi is ancient history. Omnipath is ancient history. Optane is firing on all cylinders. Nervana replaced with Habana. And I am sure the "ridiculous MCM Xeons" are selling quite well.
The Anandtech link was a whole article about how uptake on the 9200 series by system vendors has basically been zero.

My point is simply that Intel doesn't succeed at everything they try. Just putting out some ambitious vision doesn't mean all will go according to plan. And the more pieces one company tries to do in-house, the more chances there are for it to go south.
How long was it that AMD didn't have a competitive product? 12 years?
I think we're agreed that AMD isn't Intel's singular threat. For Intel to succeed in the vision you painted, they basically have to get all of the pieces right, with each additional piece increasing the chance of failure.

It reminds me of the old minicomputer companies that would do 100% in-house, even to the point of writing their own operating system. You don't see that, any more. Even IBM supports Linux on most of their mainframes.

They don't have to, nor do I think they will. Intel's challenges come on many fronts:

  • Loss of the Chinese market
Doesn't matter - pretty sure Intel knew this was coming
Knowing it's coming and it not being an issue are two different things. Just look at how much of their current business is in China!

  • Upcoming Chinese competitors, mostly challenging them outside the US and Europe
Huawei? Yeah, not.
You've been around long enough to see many iterations of this same story. First, people wrote off Japan. Then, they dismissed South Korea and Taiwan. Now, you still think China won't pose a real competitive threat to any industry they choose to target?

They won't go straight for Intel's key markets. They will go for the periphery and compete on cost. From there, the threat will only build.

  • Numerous AI competitors
Habana is the top of the heap
Do you know how it compares with GA100? What about Cerebras?

While I think Habana is good, I'm not convinced it's special.

  • ARM-based servers
Those have been coming for years - and are still not here
It's called Amazon's Graviton2, and it's here!

For everyone else, there's Ampere Computing's eMAG. For those inside China (and other markets they own), Huawei is several generations into a comparable line.

  • ARM-based notebook SoCs
Doesn't matter - nothing new, too many tradeoffs
Intel has already warned investors that Apple's transition to ARM laptops will hit revenues.

  • AMD CPUs eroding marketshare and margins
Yeah, no. gaining little market share - Intel's margins are fine
Only because demand has outstripped what either of them can supply. That won't be true forever.

Unlike a lot of people here, I spent years in Big IT. I understand finance and stocks - but it's nowhere near a passion. It's useful info, but I have people who take care of the details. Nothing wrong with enthusiasm. Not a fanboy - that made my day - too many people try to shut down discussion with the fanboy label. I am not a fan of any corporation other than my 3 Class Cs.
To be clear, that was meant as an oblique compliment, of sorts. I was saying you're obviously not simply a fanboy gamer. I respect your knowledge, insight, and experience, even if we disagree in some significant areas.
 

bit_user

Optane SSDs are the best available - and Optane Memory will really debut heavily with the new platforms.
I touched on this, above, but I just want to break it down. Optane's failures:
  • Over-promised and under-delivered on durability & performance.
  • Late, late, late!
  • Expensive
  • Hot - less efficient than NAND.
  • Still 2D, with concerns about 3D scalability.
In light of all of that, I wouldn't pin my hopes on it.

And I freely admit that I'd like to have an Optane SSD in my PCs, but it doesn't fit the budget.

I get that datacenters are a different world, and I can believe there are a few write-intensive scenarios where it makes a lot of sense, but it was way over-hyped and has seemed to kind of fizzle.
 
So, too many words - coming from a journalist, that's kinda strange. I took journalism classes while getting my BS and my 3 Masters and my PhD. Fun - kinda like English Comp classes or Creative Writing. They were a nice respite from the extremely math-heavy engineering classes, and not as boring as the classes to get my MBA. None of which I ever used in my career - though I did use my Masters in Architecture to design my main home and vacation homes.

"The apparent passion that JW has for AMD is ... well, alarming is probably the best way to put it."
I don't have a passion for AMD as such -- I want good competition, and I'm passionate about all interesting technologies. Right now, AMD's CPU designs are clearly ahead in some areas, though it's not a 100% success rate. Basically, Zen 2 is just more cores in a readily scalable setup (chiplets), but maximum clocks still aren't able to match Intel. Its GPUs are clearly behind Nvidia, however, and I've repeated that refrain at PCG and now Tom's.

My main complaint -- which many others have echoed -- is that your posts are constantly (literally CONSTANTLY!) pro-Intel, anti-AMD. Like your first comment on this article about Lisa Su was to throw shade at AMD's stock price. Sure, it might be too high -- or it's a sign of investor confidence right now. If you can't find a single good thing to say about AMD, that's clear bias, and that's where your posts land.

By comparison, unlike you, I've said plenty of good things about AMD, Intel, and Nvidia, and I've said bad things about all three as well. Your approach is always in favor of Intel, and it's old and one-dimensional. Go ahead and prove me wrong, though. Just write the top five best things about AMD right now. Like this, which I'll do for Intel:

------------------------
1) Intel has excellent architecture teams. Even with a process technology that's now five years old (yes, 14nm came out in 2015), the performance of Intel's latest Comet Lake CPUs is competitive and still wins in many lightly threaded and single-threaded workloads.

2) Intel has its fingers in a lot of pies. It doesn't just make desktop CPUs, or mobile CPUs, or server CPUs. It also has SSDs (Optane) and a ton of other areas it works in. Intel chipsets are generally superior to AMD's (lack of PCIe Gen4 notwithstanding -- look how high memory clocks can scale on Z390, never mind Z490). Intel has also been a superior mobile solution for ages (but Zen 2 / Renoir is definitely closing the gap).

3) Intel has deep pockets. Right now, it's basically 4X the market cap of AMD, and 10X the quarterly earnings of AMD. Obviously it can weather several bad years quite easily -- AMD managed to survive the Bulldozer through Excavator years, and the delayed 10nm and continuing 14nm is nowhere near as bad.

4) Intel's lithography is generally ahead of its competitors at each node. So Intel's 14nm++ is basically going to match TSMC/Samsung 10nm in a lot of ways. Intel 10nm is going to basically match TSMC/Samsung 7nm. And Intel is working on newer enhancements like the cobalt you mention -- which was almost certainly a major factor in the massive 10nm delays and yield problems.

5) Given its size, Intel has a ton of money for R&D. I don't expect Xe Graphics to beat AMD and Nvidia GPUs in a straight up head-to-head for gaming performance, and maybe not even in raw compute. But I suspect in compute it will be quite interesting at the least -- and I'm cautiously hopeful that the consumer Xe HP cards will be decent (if they're not overpriced).

So here's your chance to go nuts. Show that you're not just blindly pushing an Intel (and sometimes Nvidia) narrative. Because there's never a company that is all good or all bad.
------------------------

Also, the dig wasn't at the number of words, it was the presentation of ideas and topics -- that's the wall of text I'm referring to. You bounce into unrelated stuff as red herrings and straw-man arguments, as though the word count somehow trumps reason. There's no flow to it at all. But this time, go nuts: five things, literally any five, doesn't have to be CPU or GPU or related to the discussion at hand. Instead of belittling AMD, praise the good the company has done for a change.
 

bit_user

Basically, Zen 2 is just more cores in a readily scalable setup (chiplets), but maximum clocks still aren't able to match Intel.
It's not just more cores with a better topology. Zen2 is wider than either Zen1 or Skylake. Half-way down the page of this article, you'll see a nice chart comparing a number of micro-architectural parameters across different Intel and AMD CPU generations:


Notably, Zen2 has 11 execution ports, although it seems a little behind in decoder width.

As for how this impacts performance, Anandtech concluded:
Overall, the 3900X is 25% faster in the integer and floating point tests of the SPEC2006 suite, which corresponds to a 17% IPC increase, above AMD's officially published figures for IPC increases.
Source: https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/6
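Just to spell out the arithmetic in that quote (a back-of-the-envelope split implied by the two quoted numbers, not something Anandtech measured): if performance scales as IPC times frequency, the effective clock contribution is whatever is left after factoring out the IPC gain:

$$
\frac{\mathrm{perf}_{\mathrm{new}}}{\mathrm{perf}_{\mathrm{old}}} = \frac{\mathrm{IPC}_{\mathrm{new}}}{\mathrm{IPC}_{\mathrm{old}}} \times \frac{f_{\mathrm{new}}}{f_{\mathrm{old}}}
\quad\Rightarrow\quad
\frac{f_{\mathrm{new}}}{f_{\mathrm{old}}} \approx \frac{1.25}{1.17} \approx 1.07
$$

So roughly 17 points of that 25% gain came from the architecture and only about 7% from effective clocks.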

It's worth considering that AMD is pursuing two different markets with its Zen2 cores: clockspeed-sensitive and efficiency-sensitive. Gaming & desktop users prioritize clockspeed, while most server applications prioritize power-efficiency. As long as this remains the case, I wouldn't expect AMD's cores to clock the highest.

Intel can conceivably do more application-specific tuning, since they have separate designs dedicated to server use.

Like this, which I'll do for Intel:
A few things I like about Intel are:
  • ark.intel.com - AMD still has nothing to match the amount of detail, searching, and product comparison features of this site
  • Intel SSDs - Not the best performing (leaving Optane aside), but one of the few still offering end-to-end data protection (in certain models) and providing far-and-away the best documentation of any SSDs on the market.
  • Reliability - I've never personally witnessed an Intel product that failed. I'm not saying it never happens, but none of the hardware failures I've seen have ever been an Intel part.
  • Open source contributions - Intel's iGPU driver is fully opensource and merged into the mainline Linux kernel. Intel is routinely one of the most prolific contributors to the Linux kernel and projects like GCC and LLVM.
  • Open standards - even as AMD is backing away from OpenCL, it forms the foundation of Intel's new oneAPI initiative. To my knowledge Intel is the only chip vendor to have implemented SYCL, and I think they have probably the most complete OpenCL implementation (two, in fact!). To give a couple other examples, they co-founded CXL (in contrast to Nvidia's proprietary NVLink) and opened up Thunderbolt.

It used to be the case that Intel's documentation was, hands-down, the best around. I think it's been slipping, in recent years. And, once upon a time, if you wanted a bound and printed set of reference manuals shipped to you, you only had to ask.

I have a few bad things to say about Intel, but I'll just list the worst: the culture. Stack rankings, routine layoffs, and a dividends-first mindset are just some examples. An ex-Intel employee once described it to me as a kleptocracy, where management was too focused on cutting costs and milking the business for profits. Perhaps their 10 nm debacle is the highest-profile consequence of this mindset.

Oh, and I'll throw in a bonus: Intel is addicted to x86. Their mobile SoC and Xeon Phi efforts were both casualties of this mindset. It seems they're now beginning to move past it, at least with specialty applications like AI and HPC. Let's see if they can bring themselves to look at another non-x86 CPU, before it's too late.
 
Zen2 is wider than either Zen1 or Skylake... Notably, Zen2 has 11 execution ports, although it seems a little behind in decoder width.
I'd need to check in with someone on this, because I'm not sure the execution port count is actually capable of being filled on each cycle. Like Skylake has eight execution ports, but I was pretty sure it was still a 6-wide issue. So even though it has eight, it will only ever fill six in any given cycle. And AFAIK that didn't change with Sunny Cove ... but the number that can be dispatched from the reorder buffer might be 10, maybe? (https://en.wikichip.org/wiki/intel/microarchitectures/sunny_cove)

For AMD, again, I've always thought the Zen architecture was 6-wide issue (from the uop queue) -- so having 11 execution ports gives more options, but it could still only fill up to six. Things get complex and there may be edge cases where a uop counts as more than one or something. But at least wikichip (https://en.wikichip.org/wiki/amd/microarchitectures/zen_2) shows a maximum of <= 6 uops being dispatched from the front-end into the back-end.

Also, the "basically" was a very wide catch-all. Zen 2 is very power efficient compared to Intel's CPUs. Some of that is 7nm, a lot from architecture as well. It was a very big jump from Zen+ to Zen 2 in many areas.

Discussing how wide an architecture is gets a bit theoretical as well, because I'm certain there are not many cases where Intel or AMD fill every potential execution port on every cycle for a sustained period of time. Even with SMT/Hyper-Threading and all the other enhancements. So whether AMD has 11 execution ports that can be filled each cycle, or 'only' six -- and whether Sunny Cove is 10 or six -- in practice I've heard most modern architectures only fill an average of maybe two of the available ports on any given cycle.
ark.intel.com - AMD still has nothing to match the amount of detail, searching, and product comparison features of this site
OMG yes. Ark is quite awesome -- most of the time. There are a few edge cases where it doesn't readily give me the details I want (major complaint: number of EUs in the various GPUs, as well as the GPU architecture!) AMD and Nvidia both would benefit from having something like this. Instead, mostly I end up at Wikipedia or my own spreadsheets to check on AMD specs. To be fair, a quick google of most AMD CPUs will give you a product page that lists all the specs I want ... but there's no unified whole like Ark.

Now let's see if Deicidium can list some good AMD stuff. :p
 

bit_user

I'd need to check in with someone on this, because I'm not sure the execution port count is actually capable of being filled on each cycle.
Let's not go too far off into the weeds on this one. The point was that Zen2 actually has substantial IPC gains over Zen1. In the table of the article I cited, I count 6 different micro-architectural parameters which improved from Zen1 to Zen2. And even if they can't actually fill all 11 ports on any given cycle, they must've had some reason for widening it that much.

But the proof of the pudding is in the eating. That's why I also cited the performance data and quoted the 17% IPC improvement on Anandtech's benchmarks. That's huge.

And, let not this point be lost: so long as AMD is continuing to share compute chiplets between desktop and EPYC, there's going to be tension between clock speed and power-efficiency. With cloud being the real prize in AMD's eyes, I think efficiency will tend to win out.

"What about the APUs?", you might ask. Well, as laptops are probably the second-most important market segment, I think they're also going to be efficiency-optimized, whenever tradeoffs must be made.

The point is, Intel might win the clock speed race, but that's not the whole story. I'm not sure AMD is actually running exactly that race.

OMG yes. Ark is quite awesome -- most of the time. There are a few edge cases where it doesn't readily give me the details I want
They slightly nerfed it, in the last re-design. I submitted feedback, and actually got a reply saying they were continuing to make improvements, which is true - it did get slightly better, but maybe not quite to its former glory.

(major complaint: number of EUs in the various GPUs, as well as the GPU architecture!)
This amazing reference will fill in that particular gap:


AMD and Nvidia both would benefit from having something like this.
Yup, similar pages for Nvidia and AMD. Big thanks to all the true GPU geeks out there...
 
Let's not go too far off into the weeds on this one. The point was that Zen2 actually has substantial IPC gains over Zen1. In the table of the article I cited, I count 6 different micro-architectural parameters which improved from Zen1 to Zen2. And even if they can't actually fill all 11 ports on any given cycle, they must've had some reason for widening it that much.

But the proof of the pudding is in the eating. That's why I also cited the performance data and quoted the 17% IPC improvement on Anandtech's benchmarks. That's huge.

And, let not this point be lost: so long as AMD is continuing to share compute chiplets between desktop and EPYC, there's going to be tension between clock speed and power-efficiency. With cloud being the real prize in AMD's eyes, I think efficiency will tend to win out.

"What about the APUs?", you might ask. Well, as laptops are probably the second-most important market segment, I think they're also going to be efficiency-optimized, whenever tradeoffs must be made.

The point is, Intel might win the clock speed race, but that's not the whole story. I'm not sure AMD is actually running exactly that race.

They slightly nerfed it, in the last re-design. I submitted feedback, and actually got a reply saying they were continuing to make improvements, which is true - it did get slightly better, but maybe not quite to its former glory.

This amazing reference will fill in that particular gap:



Yup, similar pages for Nvidia and AMD. Big thanks to all the true GPU geeks out there...
Agreed -- I'm not trying to say Zen 2 wasn't a major improvement over Zen+ because it absolutely was. I think a decent chunk of that comes from the larger CCX caches (16MB vs 8MB), and having a unified L3 on Zen 3 should improve things further. Of course, that has to be balanced by cache latency for the larger cache -- a massive unified L3 will inherently have a higher latency because there are more entries to search. Zen 2 L3 latency is 40ns vs. 35ns on Zen+ according to what I can find, so a unified 32MB L3 cache per 8-core CCX would probably push that to 45-50ns. Still better overall than half fast and half slow L3 when you have it split.

But all the other buffer and register size increases help as well. More reorder buffer slots, more scheduler slots, wider data paths, etc. I don't know that there were major changes to the underlying architecture (meaning, I don't think the functional pipelines are all that different from Zen/Zen+), but all those buffer, cache, data path increases combine to give a nice IPC uplift. Plus a few extra pipelines and execution ports.

And I definitely have the wiki pages for Intel's GPUs already bookmarked and open in my browser (along with AMD and Nvidia). I'm just saying it would be nice if that data was included on Ark. It's sort of dumb that it's not -- why tell us max GPU clocks but not EU/shader counts? (Answer: because Intel knows the previous integrated graphics solutions all basically suck. Even Gen11 Ice Lake is still woefully inadequate for anything more than light gaming -- but fine for video and office work.)

Weird thing for me with some of these websites is how layouts get updated and changed and often end up worse. Nvidia for example redesigned a lot of the GPU pages and now it takes more effort to find what you need. But Wikipedia mostly solves that (except when an error slips in).
 

bit_user

Intel knows the previous integrated graphics solutions all basically suck. Even Gen11 Ice Lake is still woefully inadequate for anything more than light gaming -- but fine for video and office work.)
One thing that struck me about Intel GPUs is that they're the most CPU-like of the three. They've also gone the longest without fundamentally rebooting the ISA, which is probably the main reason. It's tempting to reach for the easy explanation and wonder if that's not simply because they're a CPU company, but I wonder if it's any less CPU-like than AMD or Nvidia's were, at the time the i915 first launched.

Anyway, I just wonder if Gen12 isn't pushing it too far outside its sweet spot.
 
One thing that struck me about Intel GPUs is that they're the most CPU-like of the three. They've also gone the longest without fundamentally rebooting the ISA, which is probably the main reason. It's tempting to reach for the easy explanation and wonder if that's not simply because they're a CPU company, but I wonder if it's any less CPU-like than AMD or Nvidia's were, at the time the i915 first launched.

Anyway, I just wonder if Gen12 isn't pushing it too far outside its sweet spot.
One thing with GPUs is that there's a certain base level of functionality -- video codecs, IO ports, etc. -- that needs to be included. I'm not sure exactly how it all breaks down in terms of size, but if you look at the Ice Lake die shot (see: https://en.wikichip.org/w/images/d/d6/ice_lake_die_(quad_core)_(annotated).png ) the GPU is a pretty big chunk of hardware. 'System Agent' is equally large, though -- actually a bit larger. The CPU cores and ring bus use about 27% of the die area. The Gen11 graphics is about one third (33%) of the die space, the system agent makes up 34% of the die, and the DDR4 controller the remaining 6% of the die.

Unfortunately we don't have any hard data on recent Intel transistor counts, probably because Intel's 14nm and 10nm have been around so long that Intel's trying to hide some data from investors and analysts. Best estimate I've got is that each Ice Lake CPU core is around 300 million transistors, so that's 1.2 billion for 4-core. Given the relative size of the CPU cores in the die shot, that means between 4.0 and 5.0 billion transistors total. It also means approximately 1.5 billion transistors (plus or minus 10%) for the Gen11 graphics.
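Laying that arithmetic out explicitly (the ~300 million per core figure is a guess, so treat the outputs as rough ranges rather than hard numbers):

$$
T_{\mathrm{total}} \approx \frac{4 \times 0.3\,\mathrm{B}}{0.27} \approx 4.4\,\mathrm{B},
\qquad
T_{\mathrm{Gen11}} \approx 0.33 \times 4.4\,\mathrm{B} \approx 1.5\,\mathrm{B}
$$

That's where the 4.0-5.0 billion total and the ~1.5 billion Gen11 figures come from once you allow for the uncertainty in the per-core estimate.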

If that's accurate -- and I believe it's at least reasonably close -- the most interesting thing to me is that Nvidia's GT 1030 (GP108) has around 1.8 billion transistors. I suspect Gen11 is probably within spitting distance of GT 1030 performance -- and would be better without a 15W-25W TDP cap. So if Intel scales that up to 512 EUs instead of 64 EUs with Xe Graphics, it might not be too shabby. Plus Xe is supposed to be an ISA reboot, more or less.
 

bit_user

One thing with GPUs is that there's a certain base level of functionality -- video codecs, IO ports, etc. -- that needs to be included.
Sorry I wasn't clearer. I was simply talking about the cores. They're 2-way, while I think the other two are single-issue. Also, they have just 4-wide SIMD, rather than the 32 lanes the other two seem to have settled on. Next, in at least the Gen9 revision, they had the fewest SMT threads. Finally, their SIMD has cross-lane operations you don't find in the others'.

Plus Xe is supposed to be an ISA reboot, more or less.
No, it's not. They've been adding Gen12 support to their open source driver, for a long time. About the biggest change they made was to eliminate register scoreboarding (another very CPU-like feature, BTW). But, that really doesn't go too far towards making their ISA more power-efficient, which is necessary for scaling it up and still maintaining any kind of decent clock speed and power profile.
 
