News Intel CEO attacks Nvidia on AI: 'The entire industry is motivated to eliminate the CUDA market'

I'm all for expanding open-source alternatives, but are there examples (in general) of open-source solutions being better than the things they are replacing?
Speaking of GPU compute or specifically AI? The answer is complicated because it's intertwined with the matter of what hardware you're running it on.

AMD's and Intel's software stacks are open source. If you believe AMD's claim that the MI300X outperforms Nvidia's H100 (which Nvidia unsurprisingly disputes), or that Intel's Gaudi 3 outperforms Nvidia, then yes.
 
Speaking of GPU compute or specifically AI? The answer is complicated because it's intertwined with the matter of what hardware you're running it on.

AMD's and Intel's software stacks are open source. If you believe AMD's claim that the MI300X outperforms Nvidia's H100 (which Nvidia unsurprisingly disputes), or that Intel's Gaudi 3 outperforms Nvidia, then yes.
Compute mainly, but also a bit of AI.
 
OpenVINO™ is actually not so much an open standard as it is very Intel-centered; they just added Arm processors to gain Raspberry Pi developers. They would need to support AMD and Nvidia processors to become truly cross-platform, and not just an Intel library.
CUDA in itself is not an AI inference toolkit. The true open-standard competitor to CUDA is OpenCL (implemented by the drivers of Intel, AMD, and Nvidia). I suppose "CUDA" is just a more popular name to attack than what Nvidia actually dubs the "Nvidia AI Platform".
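
To illustrate that cross-vendor point, here's a rough C++ sketch (using only the standard OpenCL C API) that enumerates whatever OpenCL platforms are installed; on a machine with Intel, AMD, and Nvidia drivers present, the same binary sees all three implementations side by side:

#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint count = 0;
    clGetPlatformIDs(0, nullptr, &count);  // how many OpenCL drivers are installed?

    cl_platform_id platforms[16];
    if (count > 16) count = 16;
    clGetPlatformIDs(count, platforms, nullptr);

    for (cl_uint i = 0; i < count; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        printf("Platform %u: %s\n", i, name);  // e.g. Intel, AMD, and Nvidia entries
    }
    return 0;
}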
Came here to say this. I don't think CUDA has much to do with this. There must be a mistake somewhere along the line.

But now that we are here, what Intel and AMD should do is push for CUDA to run on any GPU, or for OpenCL to really be an option (make a wrapper that converts CUDA to OpenCL on the fly). It cannot be that we have to stick with one company or another because the software cannot run on another brand's GPUs.
 
what Intel and AMD should do is push for CUDA to run on any GPU,
That's not their decision, because CUDA is owned by Nvidia. AMD pushed as close as possible by making a work-alike they call HIP, along with porting tools for converting CUDA code into HIP code. Then, AMD supports running HIP on Nvidia hardware, as if to show Nvidia "here's how you do it". Nvidia doesn't care.
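
To show just how close the work-alike is, here's a minimal HIP vector-add sketch; aside from the hip* prefixes, it maps almost 1:1 onto CUDA (hipMalloc ↔ cudaMalloc, hipMemcpy ↔ cudaMemcpy, same __global__ kernel and <<<>>> launch syntax), which is what makes mostly-automated porting feasible:

#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Same kernel syntax as CUDA: __global__, blockIdx, threadIdx, etc.
__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));   // cudaMalloc in CUDA
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // same launch syntax as CUDA

    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}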

Intel's oneAPI isn't as similar to CUDA as HIP is, but they also made a porting tool for converting CUDA code to oneAPI. I've heard about a third-party company making a tool to let you run oneAPI code on Nvidia GPUs, but I don't know what state that effort is currently in.

or for OpelCL to really be an option (make a wraper that converts on the fly CUDA to OpenCL).
I think I've probably run across some project like that, but it's a big job for community members and not something Intel or AMD would do.

It cannot be that we have to stick with one company or another because the software cannot run on another brand's GPUs.
I personally happen to agree, in that I view neither HIP nor oneAPI as truly open, since each is still controlled by a single company. That's why I prefer to use OpenCL or SYCL, when I have a choice.
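
For comparison, here's the same vector add as a minimal SYCL 2020 sketch; SYCL is a Khronos standard with more than one independent implementation (Intel's DPC++ among them), so nothing below is tied to a single vendor's runtime:

#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    sycl::queue q;  // picks whatever device the runtime finds (CPU, GPU, ...)
    {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffers go out of scope here, which copies results back to the host
    printf("c[0] = %f\n", c[0]);  // expect 3.0
    return 0;
}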
 
From a Linux user's perspective, AMD and Intel have long led the way with open standards. Nvidia, on the other hand, has essentially given the community and anything open the middle finger for many years. I'd never buy an Nvidia product, because it would need to be reverse-engineered by someone and have drivers written for it by the community through guesswork (and would likely never fully work).
 
Nvidia, on the other hand, has essentially given the community and anything open the middle finger for many years.
I'm not saying they've totally changed their stripes, but they have opened things up somewhat in recent years.

I don't think they "saw the light", but rather did that for pragmatic reasons.

And there are certain things, like CUDA, that I doubt they'll ever open source.

I'd never buy an Nvidia product, because it would need to be reverse-engineered by someone and have drivers written for it by the community through guesswork (and would likely never fully work).
Yeah, the old Nouveau driver was pretty rubbish. It basically worked just well enough to get a newly-installed machine booted, so you could install their proprietary drivers. It never supported their compute stack (i.e. CUDA), either.
 
He is 100% right. CUDA is one of the main reasons for Nvidia's continuing dominance in professional settings. It's the only reason I keep buying Nvidia cards, since the software I work on doesn't support hardware processing through OpenGL or anything else.

Once a viable alternative is out and gets adopted by major developers like Adobe, I'll be done with their overpriced and intentionally gimped nonsense faster than you can say "Ngreedia".
 
He is 100% right. CUDA is one of the main reasons for Nvidia's continuing dominance in professional settings.
No, Nvidia's dominance is due primarily to their superior hardware. Seriously, for all the money Nvidia's big customers are paying, I'm sure they'd gladly pitch in and help anyone else get their software up to snuff, if that were the only reason.


That's not to say software doesn't and hasn't held back AMD, but Nvidia has had a solid run on the hardware front, for quite a while. That's been a key factor in their successful CUDA strategy. Pretty much the only way you can force a proprietary standard on the industry is if you're the market leader.
 
I'm all for open standards, but surprised Intel is pushing it.
🤔 I'm not sure what in particular this is in reference to... but when it comes to GPU drivers, both AMD and Intel are much more open than Nvidia. For video encode/decode, VA-API is open source as well and works on both AMD and Intel GPUs.

The elephant in the room is, of course, the issue with proprietary codecs, which have historically been a pain for FLOSS packagers and Linux distros. That has to do with legal issues around patents held by the MPEG patent pool group(s) over the years. Most recently, the problem resurfaced when many Linux distros disabled the new Mesa codec build flags. Discussions of "patent trolls" and patent pool consortiums aside, Intel has always been much more embracing of open source in general.

Take, for example, the (now discontinued) Intel Edison module and the Yocto Linux distro Intel built and supplied for use with this postage-stamp-sized IoT processor.
 
No, Nvidia's dominance is due primarily to their superior hardware...
To clarify, I wasn't talking enterprise, with their nebulous prices. I meant regular professionals who need a high-end card for things like content creation and so on. As for the quote above, I get what you're saying, but it all kind of depends on what you mean by superior hardware.

I mean, Nvidia's consumer flagship still uses DisplayPort 1.4a, whereas even the cheapest card in the AMD lineup is already on DisplayPort 2.1. All comparable Radeon tiers have more VRAM (which can be crucial in both gaming and content creation) and are pretty much superior in basic rasterization. So, at least as far as their mainstream cards are concerned, I think CUDA and ray tracing are the only two real advantages Nvidia has left, and rumor says AMD's next lineup will finally include dedicated ray-tracing hardware to catch up to Nvidia's dominance in that area. That leaves CUDA, which is a real issue, since many software developers don't even bother with the current alternatives and simply fall back from hardware processing to software processing on the CPU.
 
I mean, Nvidia's consumer flagship still uses DisplayPort 1.4a, whereas even the cheapest card in the AMD lineup is already on DisplayPort 2.1.
Sure, you can point to such isolated details, but I'm talking about the broader performance picture.

All comparable Radeon tiers have more VRAM (which can be crucial in both gaming and content creation)
That's specific to this generation, and the reason is that Nvidia used the beefed-up L2 cache as an opportunity to reduce their memory bus width. Memory capacity is tied to bus width, because you can only have one or two GDDR6 chips per 32-bit channel (and two-chip configurations are normally reserved for their workstation lineup).

There's also the matter of the RTX 4060 Ti, which is available with 16 GB - a rare example of a consumer GPU with 2 chips per channel.

Then, the RTX 4060 comes standard with 8 GB, matching the RX 7600.
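
To put rough numbers on that, here's a back-of-the-envelope sketch in C++, assuming the common 2 GB (16 Gb) GDDR6 chip density of this generation (the cards in the comments are the ones discussed above):

#include <cstdio>

// One GDDR6 chip per 32-bit channel (two in "clamshell" mode),
// at the common 2 GB-per-chip density of this generation.
int vram_gb(int bus_width_bits, int chips_per_channel) {
    return (bus_width_bits / 32) * chips_per_channel * 2;
}

int main() {
    printf("128-bit, 1 chip/channel:  %2d GB\n", vram_gb(128, 1));  // RTX 4060 / RX 7600: 8 GB
    printf("128-bit, 2 chips/channel: %2d GB\n", vram_gb(128, 2));  // RTX 4060 Ti 16 GB (clamshell)
    printf("384-bit, 1 chip/channel:  %2d GB\n", vram_gb(384, 1));  // RTX 4090: 24 GB
    return 0;
}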

and are pretty much superior in basic rasterization.
Not the RTX 4060 vs. RX 7600.

So, at least as far as their mainstream cards are concerned, I think CUDA and ray tracing are the only two real advantages Nvidia has left,
Their AI performance is also like 3x AMD's.

CUDA, which is a real issue, since many software developers don't even bother with the current alternatives and simply fall back from hardware processing to software processing on the CPU.
Source?
 
Pat Gelsinger can't keep his obnoxious pie hole shut, at the expense of INTC shareholders, who should demand his dismissal.
 