Just about all of those statements about Intel's 'achievements' going forward that purportedly demonstrate why it will dominate TSMC and AMD are speculative.
Exactly.
Just like CEO Gelsinger's statements are filled with 'we will' and 'we plan', as opposed to 'we (actually) did.' He's an apologist who constantly makes excuses for Intel's failure to meet goals on time, while promising future competence. He ought to wear a clown suit at earnings conferences. And he ought to stop talking trash about TSMC because so far, they're eating Intel's lunch at process node delivery.
First, as the public face of Intel, it's his job to try and spin the negative things so they don't seem so bad, while getting investors, customers, and partners to buy into Intel's future plans. Apart from a handful of examples (e.g. NUCs), Intel doesn't sell end-user products. Even though you can buy a retail boxed CPU or GPU, it's worthless without a software and hardware ecosystem supporting it. If their partners lose faith in Intel, then supporting those products becomes a lower priority, and that will have downstream impacts.
Second, when did he actually disparage TSMC? Specifically, since they've become a TSMC customer?
Nvidia is eating their lunch with market share gains where they compete with Intel. Nvidia's revenues from its compute and networking segments are now larger than its revenues from the graphics segment, and most of that growth has come at the expense of Intel (and AMD, to be fair).
Eh, not sure about that. Intel doesn't make much from datacenter networking, especially if you don't count their recent Barefoot Networks acquisition. They made a play for the space with OmniPath, but the market rejected it. Hence, the acquisition.
And other than that, there's not much overlap between Intel and Nvidia in the datacenter. Not since Intel killed Xeon Phi. Their "Datacenter GPU Xe Max", or whatever gobbledygook name they now have for Ponte Vecchio, is nowhere to be found outside of prior HPC deployments. It appears to be DoA, with rumored end-product yields deep into non-viable territory, hence the cancellation of Rialto Bridge.
Going forward, Nvidia's Grace CPUs seem squarely targeted at edging Intel out of GPU nodes in datacenter and HPC markets. However, that's still a fraction of the datacenter CPU market, and Grace is probably still in its production ramp phase.
Intel's counter-forays into areas like discrete GPUs are pretty much a joke, so far.
What's interesting is that Intel seems to have fallen into the same trap as AMD, by trying to make a GPU that's (nearly) all things to (nearly) all people. They tried to tackle:
- Traditional raster performance
- Ray tracing
- GPU compute (fp32)
- Deep learning
- Video compression
The only thing they left out was HPC, which is reserved for their "Xe Max" line. And they actually did comparatively well on most of those points. Sadly, it came at the expense of the first one (traditional raster performance), and that's the main thing it had to do well!
In fact, AMD was at least smart enough not to unduly compromise RDNA by focusing too hard on ray tracing or deep learning. That restraint on ray tracing is actually starting to become a liability, but I think it was a good call up to at least RDNA 2.
Had Intel not bitten off more than they could chew, perhaps Alchemist could've been a fair bit more successful. The timing of their launch could hardly have been worse, although that's partially on them (it was supposed to launch at least a year earlier).
Anyway, that's why I'm not as dismissive of their dGPU efforts as some.
In the end, it's all about execution, and Intel has yet to prove it can promise and deliver, rather than just promise.
True. And this is why I keep saying we have to wait and see whether Intel has really turned a corner, and not just seize on their impressive efforts to capitalize on their Intel 7 node as proof that they have.