TSMC founder says Intel should focus on AI, not advanced process technologies

bit_user

Titan
Ambassador
BTW, it's not as if Intel has been oblivious to AI. Their Xeon Phi product line was aimed at the same sorts of HPC workloads as Nvidia had targeted. Intel started pivoting more towards AI in 2017, with Knights Mill (the last Xeon Phi models they produced), not long after Nvidia's P100 had started making waves.

Going as far back as Broadwell or Skylake, their iGPUs also had packed fp16 dot product, which was a sort of weird thing to put in there, unless you were anticipating AI and convolutional neural networks. They did this even before Nvidia put it in the P100.
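
To make it concrete: that instruction targets exactly the inner loop of a convolution or matrix multiply, where you multiply pairs of fp16 values and accumulate the products at higher precision. A rough Python sketch of the general pattern (just the idea, not the exact semantics of Intel's instruction):

Code:
import numpy as np

# Two fp16 vectors, as a convolution inner loop would see them.
a = np.random.rand(1024).astype(np.float16)
b = np.random.rand(1024).astype(np.float16)

# The fp16 dot-product pattern: multiply fp16 pairs, accumulate at
# higher precision so rounding error doesn't pile up over long sums.
acc = np.float32(0.0)
for x, y in zip(a, b):
    acc += np.float32(x) * np.float32(y)

# The same sum kept entirely in fp16 drifts noticeably for long
# vectors, which is why the mixed-precision form matters for NNs.
naive = np.float16(0.0)
for x, y in zip(a, b):
    naive = np.float16(naive + x * y)

print(acc, naive)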

Then, in 2016, Intel bought Movidius. They also bought Nervana Systems that year, which became their server-side AI strategy (with Movidius being decidedly more client/edge-focused). Then, in late 2019, they upgraded their server AI stack with the acquisition of Habana Labs. They hedged that bet by building AI horsepower into Ponte Vecchio (marketed as the Data Center GPU Max).

So, I think you can fault Intel for execution, but it's not like they were blindsided or hadn't been working on AI for long enough. Perhaps what they lacked was focus and a better appreciation of how much harder it is to compete with Nvidia than with practically anyone else they had faced.
 
Dec 11, 2024
1
1
10
He is trying to say that Intel did right going for 18A. And while the ex-CEO is a great guy, Intel may need an outsider with less arrogance toward the competition.
 
  • Like
Reactions: artk2219

King_V

Illustrious
Ambassador
What if it's more like the Internet? Sure, we had a dot-com bubble, and it did burst, but the Internet is still with us.
I read it more as "AI" currently being a buzzword, as opposed to actually useful artificial intelligence. Sort of like "dot com" was the buzzword rather than the Internet. Reminds me a little of the time not long ago when you couldn't go 24 hours without hearing "Big Data" multiple times.

Hell, we've all seen it. "AI" being the label slapped on simple things like keyword parsing, which has been around forever.
 

bit_user

Titan
Ambassador
Reminds me a little of the time not long ago when you couldn't go 24 hours without hearing "Big Data" multiple times.

Hell, we've all seen it. "AI" being the label slapped on simple things like keyword parsing, which has been around forever.
Well, modern AI is partly an evolution of Big Data. Without massive troves of data, you can't train very sophisticated models, which is a big part of why so much data got collected in the first place. And I'm not just talking about LLMs, which seem to garner most of the attention and focus, but also a lot of the other things people are doing with AI.
 
  • Like
Reactions: artk2219

usertests

Distinguished
Mar 8, 2013
967
856
19,760
Sure 200k per 2nm wafer
The actual cost is supposedly over $30k but under $40k, i.e. roughly 1/5th of that.

Even if it were $200k, that's maybe $300 for a Ryzen/Epyc CCD, or $2,500 for an AD102-sized die. So if we eventually see that price for some super advanced monolithic 3D node, it will be worth it for some customers. Cerebras' Wafer Scale Engines already sell for millions of dollars, using the whole wafer.
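
For anyone who wants to sanity-check those numbers, here's a rough sketch using the standard dies-per-wafer approximation for a 300 mm wafer, ignoring yield. The die areas (~70 mm² for a Zen CCD, ~608 mm² for AD102) are ballpark assumptions.

Code:
import math

WAFER_COST = 200_000   # the hypothetical $200k/wafer from above
WAFER_DIAMETER = 300   # mm, the standard wafer size

def dies_per_wafer(die_area_mm2):
    # Usable wafer area divided by die area, minus a correction
    # for the partial dies lost around the wafer's edge.
    d = WAFER_DIAMETER
    return (math.pi * (d / 2) ** 2 / die_area_mm2
            - math.pi * d / math.sqrt(2 * die_area_mm2))

for name, area in [("Ryzen/Epyc CCD, ~70 mm^2", 70),
                   ("AD102-sized die, ~608 mm^2", 608)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n:.0f} dies/wafer, ~${WAFER_COST / n:,.0f} per die")

That works out to roughly $215 per CCD and $2,240 per AD102-sized die before yield losses, which lands in the same ballpark as the $300 and $2,500 figures above.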
 
  • Like
Reactions: artk2219

Pierce2623

Commendable
Dec 3, 2023
501
385
1,260
BTW, it's not as if Intel has been oblivious to AI. Their Xeon Phi product line was aimed at the same sorts of HPC workloads as Nvidia had targeted. Intel started pivoting more towards AI in 2017, with Knights Mill (the last Xeon Phi models they produced), not long after Nvidia's P100 had started making waves.

Going as far back as Broadwell or Skylake, their iGPUs also had packed fp16 dot product, which was a sort of weird thing to put in there, unless you were anticipating AI and convolutional neural networks. They did this even before Nvidia put it in the P100.

Then, in 2016, Intel bought Movidius. They also bought Nervana Systems that year, which became their server-side AI strategy (with Movidius being decidedly more client/edge-focused). Then, in late 2019, they upgraded their server AI stack with the acquisition of Habana Labs. They hedged that bet by building AI horsepower into Ponte Vecchio (marketed as the Data Center GPU Max).

So, I think you can fault Intel for execution, but it's not like they were blindsided or hadn't been working on AI for long enough. Perhaps what they lacked was focus and a better appreciation of how much harder it is to compete with Nvidia than with practically anyone else they had faced.
I agree. Intel definitely wasn't late to the AI party. Their design abilities are just a much smaller advantage than they used to be.
 
  • Like
Reactions: artk2219

abufrejoval

Reputable
Jun 19, 2020
612
450
5,260
This guy is so arrogant and condescending. TSMC will have issues someday and I will be here for it.
If statements like this were a habit of his, I'd agree.

Here it sounds a bit more like part of a farewell address, but from someone with more to show than Mr. Gelsinger and fewer such public statements in the past.

At that point, at worst, I'd call it more pride than arrogance. And while any advice from a mountain of success doesn't guarantee equal success for anyone who follows it, nobody else has ascended to that same level, so they have to look up at him looking down, even if he isn't a very tall guy.
 
Last edited:

abufrejoval

Reputable
Jun 19, 2020
612
450
5,260
BTW, it's not as if Intel has been oblivious to AI. Their Xeon Phi product line was aimed at the same sorts of HPC workloads as Nvidia had targeted. Intel started pivoting more towards AI in 2017, with Knights Mill (the last Xeon Phi models they produced), not long after Nvidia's P100 had started making waves.

Going as far back as Broadwell or Skylake, their iGPUs also had packed fp16 dot product, which was a sort of weird thing to put in there, unless you were anticipating AI and convolutional neural networks. They did this even before Nvidia put it in the P100.

Then, in 2016, Intel bought Movidius. They also bought Nervana Systems that year, which became their server-side AI strategy (with Movidius being decidedly more client/edge-focused). Then, in late 2019, they upgraded their server AI stack with the acquisition of Habana Labs. They hedged that bet by building AI horsepower into Ponte Vecchio (marketed as the Data Center GPU Max).

So, I think you can fault Intel for execution, but it's not like they were blindsided or hadn't been working on AI for long enough. Perhaps what they lacked was focus and a better appreciation of how much harder it is to compete with Nvidia than with practically anyone else they had faced.
And then I'd even add the neuromorphic hardware, the Loihi chips, where Intel tried to jump far beyond NN simulations on GPGPU.

Intel has often been a mix of jumping too far ahead and not far enough.

Just think of the iAPX 432, the i860, Itanium, and I don't know how many others.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
I agree. Intel definitely wasn't late to the AI party. Their design abilities are just a much smaller advantage than they used to be.
Their first mistake was going with x86 for Larrabee. They very nearly made the right decision to use their GMA IP for Larrabee, but the x86 team won that battle and lost them the war, ultimately dooming both Larrabee and Xeon Phi.

The Movidius team is designing the NPUs found in Meteor Lake, Lunar Lake, and their predecessors (GNA and GNA2, I think). So, that arguably worked out reasonably well.

I honestly don't know enough about the Nervana IP to say why Intel decided Habana was a better bet, but I assume they probably had good reasons.

Ponte Vecchio seems like it had potential, but somebody went absolutely bonkers with tiles on all different process nodes, stacked every which way! If they'd kept it far simpler, it could've shipped on time (or much closer, at least) and still would've offered compelling performance.

Habana is another case where I just don't know enough to say why they fell short. I wonder if Intel erred in meddling too much with them post-acquisition, or maybe in giving them too little support? Though I also don't want to presume their shortcomings weren't due entirely to other factors.