Exactly.
First, as the public face of Intel, it's his job to try and spin the negative things so they don't seem so bad, while getting investors, customers, and partners to buy into Intel's future plans. Apart from a handful of examples (e.g. NUCs), Intel doesn't sell end-user products. Even though you can buy a retail boxed CPU or GPU, it's worthless without a software and hardware ecosystem supporting it. If their partners lose faith in Intel, then supporting those products becomes a lower priority, and that will have downstream impacts.
Second, when did he actually disparage TSMC? Specifically, since they've become a TSMC customer?
Eh, not sure about that. Intel doesn't make much from datacenter networking, especially if you don't count their recent Barefoot Networks acquisition. They made a play for the space with OmniPath, but the market rejected it. Hence, the acquisition.
And other than that, there's not much overlap between Intel and Nvidia in the datacenter. Not since Intel killed Xeon Phi. Their "Datacenter GPU Xe Max", or whatever gobbledygook name they now have for Ponte Vecchio, is nowhere to be found outside of prior HPC deployments. It appears to be DOA, with rumored end-product yields deep into non-viable territory, hence the cancellation of Rialto Bridge.
Going forward, Nvidia's Grace CPUs seem squarely targeted at edging Intel out of GPU nodes in datacenter and HPC markets. However, that's still a fraction of the datacenter CPU market, and Grace is probably still in its production ramp phase.
What's interesting is that Intel seems to have fallen into the same trap as AMD, by trying to make a GPU that's (nearly) all things to (nearly) all people. They tried to tackle:
- Traditional raster performance
- Ray tracing
- GPU compute (fp32)
- Deep learning
- Video compression
The only thing they left out was HPC, which is reserved for their "Xe Max" line. And they actually did comparatively well on most of those points. Sadly, it came at the expense of the first one, traditional raster performance, and that's the main thing it had to do well!
In fact, AMD was at least smart enough not to unduly compromise RDNA by focusing too hard on ray tracing or deep learning. That restraint on ray tracing is actually starting to become a liability, but I think it was a good call up to at least RDNA 2.
Had Intel not bitten off more than they could chew, perhaps Alchemist could've been a fair bit more successful. You also can't deny that the timing of their launch was about as bad as it could have been, although that's partially on them (it was supposed to launch at least a year earlier).
Anyway, that's why I'm not as dismissive of their dGPU efforts as some.
True. And this is why I keep saying we have to wait and see whether Intel has really turned a corner, and not just seize on their impressive work with the Intel 7 node as proof that they have.