Well... neither did Nvidia, up until 2016's Pascal (GTX 1000 series) - which is the first time Nvidia added any instructions for accelerating neural networks (i.e. dp4a in consumer GPUs, and packed-fp16 in the GP100).
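For anyone who hasn't touched it: dp4a packs four 8-bit values into each 32-bit register and does a dot product with 32-bit accumulation in a single instruction, which is why it mattered for int8 inference. A minimal CUDA sketch (illustrative only; the kernel and names are mine, but the __dp4a intrinsic is real and requires compute capability 6.1+, i.e. Pascal consumer parts):

    // Illustrative sketch: int8 dot product via the Pascal-era __dp4a intrinsic.
    // Each 32-bit int in a[] and b[] holds four packed int8 values.
    __global__ void int8_dot(const int* a, const int* b, int n, int* out) {
        int acc = 0;
        for (int i = threadIdx.x; i < n; i += blockDim.x) {
            // Four 8-bit multiply-accumulates in one instruction:
            // acc += a0*b0 + a1*b1 + a2*b2 + a3*b3
            acc = __dp4a(a[i], b[i], acc);
        }
        atomicAdd(out, acc);  // reduce per-thread partial sums
    }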
Getting back to Larrabee, Intel made a special AI-oriented version of Xeon Phi, called Knights Mill. It would be Xeon Phi's swan song.
Yes, these were the ones my Bull colleagues put into their water-cooled Sequana HPC supercomputers when the customer picked blue instead of green for acceleration: those are what I called "the follow-up architectures".
I believe PCIe variants still sell on eBay and if I had money to waste, I'd probably go for one of those instead of V100s for playing around.
But they were, and are, no good for AI; they were designed for very high-density loops on FP32 and FP64. Fluid dynamics, physics simulations and the like were more their thing, because going off the card, or off the HMC scratch-pads, was horrendously expensive.
No, it wasn't "all about" that, since existing games would need to be rewritten if it were. Intel seemed to understand that Larrabee would need to be competitive on raster performance. The fact that it wasn't is why they cancelled it.
The original Larrabee may not have been all about ray tracing, but it was all about graphics, and was perhaps oversold on ray tracing as one of the many smart things it might be able to do. And since it was GPGPU, it could also do physics, or simply emulate what the ASICs did in fixed-function blocks. But it couldn't do any of that at anywhere near the required performance, and, as you mentioned, nobody was going to rewrite their "fake shaded triangle games" towards a ray-tracing future that was "somewhere out there".
As such, one could argue that by today modern GPUs have come closer to general purpose than anything available back then, and that Larrabee could in turn have gathered fixed-function blocks for the biggest accelerations. In such a theoretical evolutionary future, both might have met in a similar place.
But for Larrabee there was no path lined with viable products along every step of that evolution, only a "vision" a long way off, while Nvidia planned for exactly that and delivered consumer products at nearly every step to enable the economies of scale.
Trying to save the Larrabee effort by turning it into an HPC architecture could have been a stroke of genius, if, again, Intel had created a set of viable products along that path. But without any consumer benefit, and as an HPC-only product, there simply wasn't enough demand to make it viable, even if they had created a perfect architecture with a full software ecosystem to support it.
And without the latter two, there was even less of a chance of success. PG is at best delusional to label that "luck", or the lack thereof.
In retrospect it's quite incredible that Intel, the master of 8080/8086 evolution, insisted on creating genius step-change designs that were not planned along a roadmap of ten years or more. It's something the DEC Alpha team understood from their VAX background, with the IBM 360 as the original visionary design, yet even with all these examples Intel was too engrossed in its own genius to accept and live that.
Raja Koduri and Jim Keller surely tried to bring that culture to Intel; whether they managed it with their GPUs, only time will tell.
If Intel believes what PG said here, I have little hope it worked.