News Intel CEO says Nvidia’s AI dominance is pure luck – Nvidia VP fires back, says Intel lacked vision and execution

Well, current AI is based on LLMs and machine learning algorithms, which are all based on floating-point mathematics, hence the need for giant triangle calculators, which is, at its core, what a GPU is.

It is sort of luck. Quantum computers would probably make a more natural platform for AI; however, the tech is probably 20 years too immature to do it. Had quantum computing been heavily invested in earlier, AI would be based on quantum computing rather than floating-point math.

In the future we may see that shift, but for now, what they call AI works fine on existing GPU technology.
 
What certainly wasn't luck was Nvidia's pivot towards HPC, their investments in CUDA (which, if you don't know, is totally separate from what games and interactive graphics use), and Nvidia's efforts to court and support users of GPU Compute in academia.

On that latter point, Nvidia created an entire academic symposium to stimulate development of GPU Compute, way back in 2009.

None of that is "luck". Nvidia might not have guessed that AI would be the GPU Compute application which would take them to the next level, but they made sure they were positioned to benefit from any domains where GPU Compute took off.
I think people are reading an awful lot into what appears to be an off-the-cuff statement made while talking about the future of AI for Intel. Nvidia laid impressive groundwork for success and lucked out that it translated directly into the rise of AI compute. This is an often-repeated story in the tech industry, best-laid plans and all that.
This is such a load of garbage. Xeon Phi didn't get cancelled... at least not until it had a chance to prove that x86 just couldn't match Nvidia in either HPC or AI workloads. And the reason Larrabee, itself, got cancelled was that it just couldn't compete with existing GPUs.

He should blame himself for his decision to use x86 in Larrabee, and not the ISA of their i915 chipset (and later iGPU) graphics.
Larrabee absolutely deserved to be canceled, and it never made sense from the start: even if it had been performant, it would have required more silicon to get there. Whether they wanted to derive from the integrated graphics or start over (Gen 11/ICL was a pretty massive overhaul, which is what Alchemist started from), it absolutely needed to be purpose-built for the GPU space.

Xeon Phi had a lot going against it by the time it got canceled, but from what I've read, I'd hazard a guess that if Intel had made a big software push around 2010, like they are now with oneAPI, things might have turned out differently. I was always surprised that there wasn't a bigger emphasis on software in that era, given how rapidly Nvidia found success with CUDA.
 
Well... neither did Nvidia, up until 2016's Pascal (GTX 1000 series) - which is the first time Nvidia added any instructions for accelerating neural networks (i.e. dp4a in consumer GPUs, and packed-fp16 in the GP100).

Getting back to Larrabee, Intel made a special AI-oriented version of Xeon Phi, called Knights Mill. This would be Xeon Phi's swan song.
Yes, these were the ones my Bull colleagues put into their water-cooled Sequana HPC supercomputers when the customer picked blue instead of green for acceleration: that's what I called "the follow-up architectures".

I believe PCIe variants still sell on eBay, and if I had money to waste, I'd probably go for one of those instead of V100s for playing around.

But they were, and are, no good for AI; they were designed for very high-density loops on FP32 and FP64. Fluid dynamics, physics simulations, etc. were more their thing, because going off-card or off the HMC scratchpads was horrendously expensive.

No, it wasn't "all about" that, since existing games would need to be rewritten if it were. Intel seemed to understand that Larrabee would need to be competitive on raster performance. The fact that it wasn't is why they cancelled it.
The original Larrabee may not have been all about ray tracing, but it was all about graphics, and it was perhaps oversold on ray tracing as one of the many smart things it might be able to do. And since it was GPGPU, it could also do physics or simply emulate what the ASICs did in fixed-function blocks. But it couldn't do any of that at anywhere near the required performance, and then, as you mentioned, nobody was going to rewrite their "fake shaded triangle games" towards a ray tracing future that was "somewhere out there".

As such, one could argue that today's GPUs have come closer to general purpose than anything available at the time, and perhaps Larrabee could have gained fixed-function blocks for the biggest accelerations. In such a theoretical evolutionary future, both might have met in a similar place.

But for Larrabee there was no path lined with viable products along every step of that evolution, only a "vision" a long way off, while Nvidia planned for exactly that, with consumer products at nearly every step to enable economies of scale.

Trying to save that Larrabee effort by turning it into an HPC architecture would have been a stroke of genius, if, again, Intel had created a set of viable products along that path. But without any consumer benefit, and as an HPC-only product, there simply wasn't enough demand to make it viable, even if they had created a perfect architecture with a full software ecosystem to support it.

And without the latter two, there was even less of a chance for success. PG is at best delusional to label that "luck", or the lack thereof.

In retrospect it's quite incredible that Intel, as the master of 8080/8 evolution, insisted on creating genius step-change designs that were not planned along a roadmap of 10 or more years. It's something the DEC Alpha team understood, with their VAX background and the 360 as the original visionary design, yet even with all those examples, Intel was too engrossed in its own genius to accept and live that.

Raja Koduri and Jim Keller surely tried to bring that culture to Intel; whether they managed with their GPUs, only time will tell.

If Intel believes what PG said here, I have little hope it worked.
 
Yes, everything needs a bit of luck; hard work alone is not enough. Bill Gates was also lucky to become a billionaire; without IBM's mistake, Microsoft would not have succeeded.
Would you care to elaborate on IBM's mistake?

I mean, there were a lot of them, but I wouldn't consider going with Microsoft's CDOS instead of a proper Digital Research CP/M to have been a mistake.

Sticking with OS/2 would have been a mistake, even if that was a pure Microsoft design, originally.

There I'd say that Microsoft's decision to hire Dave Cutler and create a VMS clone with WNT was one of their better decisions.

If IBM hadn't utterly failed while trying to regain exclusive control over the PC market via the PS/2, perhaps Ataris or Amigas would rule the planet.
 
Well, current AI is based on LLMs and machine learning algorithms, which are all based on floating-point mathematics, hence the need for giant triangle calculators, which is, at its core, what a GPU is.
Please update your bookmarks. LLMs are all about computing on weights, which these days are represented in as few bits as possible, because, at least in inference, the number of weights can be more important than their precision.

It's become more complicated than triangles, I'm afraid.
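To make that concrete, here's a minimal, hypothetical sketch of the kind of low-bit weight representation being described, using NumPy. The matrix size and the simple symmetric int8 scheme are illustrative assumptions, not how any particular LLM runtime actually packs its weights.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: keep 8-bit ints plus one fp32 scale."""
    scale = np.abs(weights).max() / 127.0               # map the largest |w| to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, np.float32(scale)

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4096, 4096)).astype(np.float32)  # stand-in weight matrix
q, scale = quantize_int8(w)

print("fp32 bytes:", w.nbytes, "-> int8 bytes:", q.nbytes)  # 4x smaller footprint
print("max abs error:", float(np.abs(w - dequantize(q, scale)).max()))
```

The point being: for inference, shrinking each weight to 8 (or fewer) bits lets you hold and stream far more weights for the same memory, which, as the poster says, can matter more than carrying full floating-point precision.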
It is sort of luck. Quantum computers would probably make a more natural platform for AI; however, the tech is probably 20 years too immature to do it. Had quantum computing been heavily invested in earlier, AI would be based on quantum computing rather than floating-point math.

In the future we may see that shift, but for now, what they call AI works fine on existing GPU technology.
Define quantum, reliably and with repeatable precision, even if low: see the problem (or the cat)?
 
Nvidia deserves its dominance.
It has been pushing GPGPU for decades, its programming tools have no match, and the information available on the Internet beats anything else.

Modern AI only exists because Nvidia pushed GPUs for general computing. This market was built by Nvidia, and AMD/Intel don't give half the support that Nvidia gives.
 
I think he might just be blowing smoke for the sake of investors. He needs to give them an excuse for how Nvidia managed to get so far out ahead of Intel.


That's what they claim Gaudi 3 will do.
Pat is stirring the pot and tech sites are eating it up.

I'm not sure I like how he is doing it, but the guy sure is getting his face on a lot of articles and videos lately.
 
Well, current AI is based on LLMs
A large part of the field is focused on other techniques and technologies. Even popular image generators don't use LLMs. Large language models capture a lot of the popular imagination because they're easy to play with and consume the most data and resources to train.

It is sort of luck. Quantum computers would probably make a more natural platform for AI,
Not really. Quantum computers would be good at optimizing neural networks to achieve the best accuracy with the minimum number of weights. However, when it comes to inferencing, they couldn't hold a candle to conventional computers.

Had quantum computing been heavily invested in earlier,
I don't think it could've come much sooner. I think we probably needed the modern set of scientific and engineering tools, in order to crack that particular nut. Even with those in hand, look how long it's taking!

but for now, what they call AI works fine on existing GPU technology.
GPUs aren't quite optimal. They're set up for lots of data movement, which is energy-intensive and high-latency. You can hide the latency, but at the cost of lots of silicon. So, deep learning architectures with more embedded memory have an inherent efficiency advantage.

You might note a big uptick in the amount of SRAM and L2 cache Nvidia has been including, recently. That's no coincidence, IMO.
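A rough back-of-the-envelope, purely illustrative, of why data movement is the limiter for batch-1 LLM inference on a GPU. The 7B-parameter model, 1 TB/s of off-chip bandwidth, and 300 fp16 TFLOPS below are assumed round numbers, not the specs of any real card:

```python
# Assumed round numbers for illustration only.
params = 7e9                  # weights touched per generated token
bytes_per_weight = 2          # fp16 storage
flops_per_token = 2 * params  # one multiply + one add per weight

bytes_moved = params * bytes_per_weight
intensity = flops_per_token / bytes_moved        # ~1 FLOP per byte moved

hbm_bandwidth = 1e12          # 1 TB/s off-chip bandwidth (assumption)
peak_flops = 3e14             # 300 TFLOPS fp16 (assumption)

tokens_per_s_memory_bound = hbm_bandwidth / bytes_moved
tokens_per_s_compute_bound = peak_flops / flops_per_token

print(f"arithmetic intensity: {intensity:.1f} FLOP/byte")
print(f"memory-bound limit:   {tokens_per_s_memory_bound:.0f} tokens/s")
print(f"compute-bound limit:  {tokens_per_s_compute_bound:.0f} tokens/s")
```

With the ALUs able to do orders of magnitude more work than the memory system can feed them, keeping more data in on-chip SRAM is the obvious lever, which is what the growing L2 caches suggest.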
 
But they were, and are, no good for AI; they were designed for very high-density loops on FP32 and FP64. Fluid dynamics, physics simulations, etc. were more their thing, because going off-card or off the HMC scratchpads was horrendously expensive.
Because, unlike real GPUs, Xeon Phi had only 4-way SMT. That's not going to hide latency very well, if more than half your threads (or all of your threads, more than half the time) are hitting memory.
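As a rough, Little's-law-style sketch of that point (the cycle counts here are assumptions picked for illustration, not measured figures for Xeon Phi or any GPU):

```python
# How many hardware threads does one core need to keep its ALUs busy
# while other threads wait on memory?  threads ~= 1 + latency / work.
mem_latency_cycles = 400   # assumed DRAM round-trip, in core clocks
work_between_loads = 20    # assumed cycles of useful math per outstanding load

threads_needed = 1 + mem_latency_cycles / work_between_loads
print(f"~{threads_needed:.0f} threads per core to cover the latency")

# 4-way SMT (Xeon Phi) covers only a fraction of that; a GPU SM cycling
# through dozens of warps covers it comfortably -- which is the point above.
```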

perhaps Larrabee could have gained fixed-function blocks for the biggest accelerations. In such a theoretical evolutionary future, both might have met in a similar place.
Larrabee might've been a little better with hardware ROPs. Tessellation wasn't really a thing, back then. There's not a whole lot else you could put into hardware. And, ultimately, it doesn't get around the fact that x86 just isn't a good ISA for a GPU.

Trying to save that Larrabee effort by turning it into an HPC architecture would have been a stroke of genius,
That's exactly what Xeon Phi was. During the design of Larrabee, they always had the plan to attack both the consumer and HPC/datacenter markets. The only thing that changed was dropping out of the consumer market.

You might have a point that the HPC/datacenter market lacked the volume to sustain Phi, but I think maybe it could've, if they hadn't chosen the wrong architecture at the outset. You can't break into a new market and dominate it with an inferior product, even if your name is Intel.
 
Modern AI only exists because Nvidia pushed GPUs for general computing. This market was built by Nvidia,
I partially agree. People were doing GPGPU before CUDA came onto the scene. There were other languages and frameworks for doing it, or you could just write Direct3D or GLSL shaders, sort of like what people are now sometimes doing with Vulkan.

Yes, none of this is as nice as CUDA, but then OpenCL came along in 2008, and it wasn't much of a step down from CUDA at the time. Had CUDA not been there to take all the oxygen away from OpenCL, it would've developed much better and faster, I think.

I think Nvidia's efforts probably caused the AI revolution to happen about 2-3 years sooner than it otherwise would've. That's as much credit as I'll give them.

AMD/Intel don't give half the support that Nvidia gives.
Intel is actually pretty good on GPU Compute software. Their weakness has long been on the hardware front.
 
Jensen has invested in many AI startups, and they will by default use Nvidia GPUs and CUDA. That's where Intel missed out.
Even before that, he was donating Nvidia GPUs to universities and giving them away as prizes to people who presented research papers at the annual GTC events I mentioned above. Very much cultivating the ecosystem.

Meanwhile, getting AMD to support GPU Compute on consumer hardware has been like pulling teeth with a pair of tweezers.

Intel has done a better job on software (than AMD), but was just super late with decent hardware. You could use their GPU Compute stack on iGPUs, however. That stuff has been in pretty decent shape since Broadwell.
 
So what's Intel's excuse for losing the CPU gaming crown, with double the negative press exposure from rebranding the i9-13900K as the i9-14900K? AMD is lucky. 😂 bro!
 
Nice read, and roughly what I recalled when posting. It wasn't a great starting point, and yes, the follow-up silicon did not do well and mostly didn't make it into the wild, as your sourced article stated. But it was a starting point they should not have given up on. A buddy had one (i740) and was thrilled at first, but his love faded quickly for many of the reasons mentioned in your source. But I know he would have given them another chance if the opportunity had arisen. Many would have, I believe, just because Intel's brand name was (and is) so recognizable.

Nvidia's earliest stuff pretty much sucked too. Remember the quad-based rendering fail in their debut? Point being, Intel's "failure" was not all that different, in that it wasn't ready/right for the industry when they took their swing. The difference is Nvidia stayed at bat, where Intel called it a strikeout after one and nine-tenths of a swing. Pat's so wrong it's almost too sad to read.
In fairness, the GPU market has changed a lot since the year 2000. The ATI Rage Max, which was arguably the hottest card available, had a full MSRP of $299, and on a deal you could find it at ~$250 at release. Most builders I knew thought that was laughable and would never spend over $100 for a graphics card. That is in contrast with Intel's higher-end Pentium 4 SKUs that, as memory serves, were in the $700-900+ range for reference, and most of us were spending $100 on a good motherboard and $250+ on the CPU for it.

PCIe was several years from release, and graphics cards were a mix of everything from PCI (not to be mistaken with PCIe), AGP (I think there were five or six variants), and ISA (VESA) buses. "Plug and Pray" had relatively stabilized by 2000, but many of these cards still had jumpers to set IRQs, and we were not that far removed from editing config.sys and autoexec.bat to manually configure and load video and audio drivers. Things consolidated pretty quickly, but stretching my memory, you still had 3dfx, Nvidia, ATI, Matrox, S3, AMD and Intel (briefly) as players in the GPU space (forgive me if I missed one or two, because it was crowded). DirectX and OpenGL API and driver support was a nightmare. While generational gains were massive, good, stable driver support for bleeding-edge cards didn't exist. Manufacturers were throwing out proprietary extensions constantly, and developers had to integrate special settings to support them on a case-by-case basis in their own proprietary engines.

Jump forward to today, where there are basically two discrete GPU vendors, cards routinely represent more of the build cost than all of the other components combined, and there are crazy sales in the crypto and supercomputer arenas; I am sure Intel does wish, in hindsight, that they had developed a solid discrete GPU line. I cannot in full honesty disagree with their decision at that time, though. Intel has historically been too risk-averse to ever go all in on anything. Everyone in the GPU market had pretty much all chips on the table all of the time. Single generations made and broke whole companies. The GPU arena has always been a winner-takes-most game, and competition was brutal, leaving two in the post-acquisition era.

Honestly, it is probably easier for Intel to get back in now than it was back then, and I don't know how probable that is, although it seems that they have at least spent some money to try. I don't really know if Intel can lose there, though, as carryover to the iGPU to combat AMD APUs is absolutely necessary for their bread and butter, even if they don't become dominant in discrete cards. If you factor in the integrated GPU market, Intel is still one of the biggest graphics processor manufacturers in the world today by volume sold.
 
I'm no fan of nVidia, but I wouldn't say that they're lucky. Over the years, they have consistently produced fantastic products. Sure, they were overpriced, but they were almost always good. Now, a lot of people in the market only buy nVidia, but that's because of the reputation that nVidia earned over the years; it's not like it fell into their laps.

As for Intel, if Larrabee was so great, Intel wouldn't have cancelled it. If it was great and Intel cancelled it anyway, well, that's not luck, that's abject stupidity.

If anyone's "lucky", it's Intel for still somehow being relevant with CPUs that draw as much power as many GPUs when their opponent is putting out CPUs with incredible power efficiency. This guy is so out of touch with reality that it makes me wonder what criteria Intel has for choosing their CEOs.
 
In fairness, the GPU market has changed a lot since the year 2000.
Oh absolutely... but for me this doesn't change my mind that Intel should have continued to develop new discrete graphics. As you listed, there are a host of issues Intel's cards had. That said, they easily could have used those mistakes as lessons in what not to do. I am not saying they should have bet the farm either, as getting things right in the GPU space is tough, especially with all the patents involved that you have to either dodge or license.

At the end of the day, Intel only has themselves to blame for where they stand in the market. No matter how you slice it, passing the AI boom off as luck for Nvidia just makes Intel come off as a disingenuous corporate toddler.
 
In fairness, the GPU market has changed a lot since the year 2000. ... Most builders I knew thought that was laughable and would never spend over $100 for a graphics card.
They definitely weren't gamers, then. I don't remember exactly, but I know $100 wasn't a lot for a 3D accelerated graphics card, back then. Sure, you could get some S3 or Trident cards for less, but you wouldn't if you cared at all about 3D performance.

That is in contrast with Intel's higher-end Pentium 4 SKUs that, as memory serves, were in the $700-900+ range for reference
If you're trying to stick to a rigid timeline, the Pentium 4 only launched at the end of 2000.

graphics cards were a mix of everything from PCI (not to be mistaken with PCIe), AGP (I think there were five or six variants), and ISA (VESA) buses.
No, maybe you could still buy ISA graphics cards, but they would've been old models. Maybe you're thinking of VLB? Even then, from what I'm reading ATI didn't release one after 1994. Starting in 1995, they were all-in on PCI. AGP came after that, with there being some considerable overlap.

IIRC, AGP already had a 2x version, by the time it launched. Then came 4x and 8x.

Jump forward to today, where there are basically two discrete GPU vendors, cards routinely represent more of the build cost than all of the other components combined
No... definitely not "routinely". You basically have to get an RTX 4090, or else put an RTX 4080 or RX 7900 XTX in a budget build, for that to be true. None of those cards are "routine".
 
Even before that, he was donating Nvidia GPUs to universities and giving them away as prizes to people who presented research papers at the annual GTC events I mentioned above. Very much cultivating the ecosystem.

Meanwhile, getting AMD to support GPU Compute on consumer hardware has been like pulling teeth with a pair of tweezers.

Intel has done a better job on software (than AMD), but was just super late with decent hardware. You could use their GPU Compute stack on iGPUs, however. That stuff has been in pretty decent shape since Broadwell.
Smart business. That's how you get to a near 70% profit margin. It seems like AMD really doesn't want to sell too many GPUs, right? For a while it seemed like they didn't want to sell too much of anything, though. I appreciate your thorough comment, too.
 
They definitely weren't gamers, then. I don't remember exactly, but I know $100 wasn't a lot for a 3D accelerated graphics card, back then. Sure, you could get some S3 or Trident cards for less, but you wouldn't if you cared at all about 3D performance.
You're absolutely right, so let me help you out here:
End of 2000: GeForce 2 Pro: $329 and GeForce 2 Ultra: $499
Middle of 2000: GeForce 2 GTS: $349 (price dropped significantly post-launch)
End of 1999: GeForce SDR: ~$199 and GeForce DDR: ~$279 (they had problems selling the SDR due to the TNT2 Ultra, which went on to impact the DDR version)
 
It seems like AMD really doesn't want to sell too many GPUs, right? For a while it seemed like they didn't want to sell too much of anything, though.
We're going to have to disagree, there. AMD certainly had budgetary problems and perhaps some conflicting ideas about GPU Compute that hurt them on that front, but they've been doggedly pursuing the gaming market for the longest time.

Their main problem, on the gaming front, is just that they have one of the fiercest competitors. I've found it hard to watch some of Nvidia's tactics, but the upside is that it's gotten AMD into a very strong position that's going to be difficult for just about anyone else to catch.
 
We're going to have to disagree, there. AMD certainly had budgetary problems and perhaps some conflicting ideas about GPU Compute that hurt them on that front, but they've been doggedly pursuing the gaming market for the longest time.

Their main problem, on the gaming front, is just that they have one of the fiercest competitors. I've found it hard to watch some of Nvidia's tactics, but the upside is that it's gotten AMD into a very strong position that's going to be difficult for just about anyone else to catch.
I was for sure seeing it from a limited point of view, maybe even ignorant of AMD's position. Nvidia has been very effective with the hype train.
 