News Intel CEO attacks Nvidia on AI: 'The entire industry is motivated to eliminate the CUDA market'

Status
Not open for further replies.

abufrejoval

Reputable
Jun 19, 2020
337
236
5,060
Sweet, the x86 guy complaining about how difficult the market leader makes it to get out of his backward compatibility clutches!

Well, Intel was forced to license by IBM, who as the creator of the 360 architecture knew best how to maintain mainframe dependency for eons, right Gene?

Perhaps it's time to open-source x86? That could keep RISC-V at ARM's length, although Intel seems to have found a VIA to get very Chinese-cheap...

I find myself rather amused now, but with growing concern about what these AI giants will do in their death throes, once they find that ever fewer consumers will be able to buy whatever services AI might actually provide, if they can't earn money and pay taxes. With hundreds of billions invested into AI hardware that definitely won't run SAP or FarCry, demanding even bigger returns within a few years, moral cautions or altruistic factions stand little chance.
 

bolweval

Distinguished
Jun 20, 2009
165
130
18,760
Sweet, the x86 guy complaining about how difficult the market leader makes it to get out of his backward compatibility clutches!

Well, Intel was forced to license by IBM, who as the creator of the 360 architecture knew best how to maintain mainframe dependency for eons, right Gene?

Perhaps it's time to open-source x86? That could keep RISC-V at ARM's length, although Intel seems to have found a VIA to get very Chinese-cheap...

I find myself rather amused now, but with growing concern about what these AI giants will do in their death throes, once they find that ever fewer consumers will be able to buy whatever services AI might actually provide, if they can't earn money and pay taxes. With hundreds of billions invested into AI hardware that definitely won't run SAP or FarCry, demanding even bigger returns within a few years, moral cautions or altruistic factions stand little chance.
Very true, and clever.. (y)
 

Nuadormrac

Distinguished
Aug 12, 2012
2
0
18,510
It's odd that he's supporting an open standard when Intel also tends to be very proprietary, but it also makes sense: supporting openness is a way to pull the rug out from under Nvidia.

In this case, I do hope he and the open-source project supporters succeed in the same way AMD was able to force some shift towards more open standards.
The way I'm seeing it, especially given his reference to Core Ultra, is that they're looking to unload it back onto the CPU. But unless they're trying to do something like Google is with their Tensor chips, good luck with that. People are using GPUs instead of CPUs for a decided reason: they're faster. Look at a project like GPUGRID on BOINC (BOINC being Berkeley's framework that evolved out of SETI@home, opened up so other researchers could run their own projects): the difference between CUDA and CPU crunching is obvious in time to completion for work done. If by "open" Intel means generalized processing, Nvidia doesn't have to worry about that as long as special-purpose hardware can give a decided benchmark advantage.
 

bit_user

Polypheme
Ambassador
Surreal seeing Intel (of all companies) pushing for "more open standards".
Well... I didn't watch the original speech, but the article mentions OpenVINO, which is open source, but very much an Intel-developed, Intel-centric framework. Using any other hardware via OpenVINO might be possible, but it will be a second-class citizen and probably won't work as well as using its native API.

Still, I can see the reasoning.
Second place (and below) push for open standards, as a way to undermine the proprietary ones being used by the market leader. AMD did this, for a long time, before making a CUDA clone they call HIP. That's technically open source, but a similar situation as Intel's OpenVINO, being AMD-developed.

However, as long as Intel still provides first class support for OpenCL, I still give them credit for supporting open standards. AMD's OpenCL support has been flagging, for a long time.

Also, Intel supports these compute frameworks across their entire spectrum of devices, whereas AMD is still way behind on supporting them for consumer GPUs and APUs, and has been rather quick to drop support for older hardware it previously supported.
 

bit_user

Polypheme
Ambassador
OpenVINO™ is actually not so much an open standard as a very Intel-centric one; they just added Arm processors to gain Raspberry Pi developers. They need to support AMD and Nvidia processors to become cross-platform and not just an Intel library.
They actually do have support for Nvidia GPUs, though it might still be beta/experimental.

However, if I were primarily using Nvidia hardware, I'd be using CUDA/CUDNN/TensorRT - not OpenVINO!

Cuda in itself is not an AI inference toolkit.
...
I suppose "Cuda" is a more popular name to attack than what is actually dubbed by Nvidia : "Nvidia AI Platform".
Yes, he's just using it as a blanket way of referring to their constellation of technologies and frameworks. What you'd actually use is CUDNN, and Nvidia's equivalent to OpenVINO is called TensorRT.
 

bit_user

Polypheme
Ambassador
Pat sounds desperate.
Nah, but I'll bet he is under lots of pressure from investors. The stuff he's talking about has been going on at Intel for several years. It's entirely fair for him to highlight it, to help investors understand Intel's AI strategy and efforts.

I saw a recent presentation mapping available open source tools against CUDA and there are still many blocks missing and most codes were contributed by AMD. Perhaps Intel should put money where its mouth is.
Intel has a full stack that, for many purposes, is comparable to CUDA (and doesn't depend on anything developed by AMD, either).

I'm not going to say Intel's oneAPI stack can replace CUDA for everything, but it certainly can for most AI-related tasks. And given Intel's HPC efforts, probably for most of those applications as well.
 
Well... I didn't watch the original speech, but the article mentions OpenVINO, which is open source, but very much an Intel-developed, Intel-centric framework. Using any other hardware via OpenVINO might be possible, but it will be a second-class citizen and probably won't work as well as using its native API.


Second place (and below) push for open standards, as a way to undermine the proprietary ones being used by the market leader. AMD did this, for a long time, before making a CUDA clone they call HIP. That's technically open source, but a similar situation as Intel's OpenVINO, being AMD-developed.

However, as long as Intel still provides first class support for OpenCL, I still give them credit for supporting open standards. AMD's OpenCL support has been flagging, for a long time.

Also, Intel supports these compute frameworks across their entire spectrum of devices, whereas AMD is still way behind on supporting them for consumer GPUs and APUs, and has been rather quick to drop support for older hardware it previously supported.
Isn't OpenCL itself basically flagging? I've talked with at least a couple of people at companies that cancelled OpenCL support after previously having it, because it wasn't working as well as they wanted.
 

bit_user

Polypheme
Ambassador
Let me cite three great examples of this:
  1. Frontier supercomputer
  2. El Capitan supercomputer
  3. Aurora supercomputer
I think the DoE just likes to spread the wealth around and avoid backing a single horse for both political and practical reasons (i.e. if you pick a losing horse, you end up with a supercomputer built on dead-end platform & technologies).

Of course, Nvidia would point to the Grace Hopper superchips as being unavailable back when these were commissioned.
In the generation before, they had some machines based on IBM/Nvidia.

ROCm, HIP, OpenVINO, etc. is required to be open.
HIP, CUDA, and oneAPI form the foundations of their respective compute APIs. OpenVINO is a higher layer specific to AI/DeepLearning.

Fool me (the US gov't) once, shame on you. Fool me twice, shame on me. The major players really do want open standards to prevail, because it means they can go after the best hardware with a standardized (more or less) software ecosystem, rather than having to work on porting everything from CUDA to ROCm or OneAPI or OpenVINO or whatever.
HIP and oneAPI are still single-vendor efforts, even if they're open source. OpenCL/SYCL is the only true open standard, though Intel uses it as the foundation of oneAPI - AMD and Nvidia do not!
 

bit_user

Polypheme
Ambassador
Perhaps it's time to open-source x86? That could keep RISC-V at ARM's length, although Intel seems to have found a VIA to get very Chinese-cheap...
No, like OpenPOWER did, it would come too late.

With hundreds of billions invested into AI hardware that definitely won't run SAP nor FarCry demanding even bigger returns within a few years, moral cautions or altruistic factions stand little chance.
You raise a good point about the $Billions being spent on accelerators that will just become e-waste, as future AI models exceed their capacity and render them obsolete. Most of these aren't even built as PCIe cards you could put in a standard PC.

However, some do still use the PCIe form factor. If anyone wants a deal on some previous-generation GPU compute hardware, check out Nvidia V100's on eBay. At 7 TFLOPS of fp64, it's still way more than you can get on consumer GPUs (if you actually need that sort of thing).
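For context on that fp64 claim: consumer GeForce cards run fp64 at a small fraction (typically 1/32 or 1/64) of their fp32 rate, while the V100 runs it at 1/2. A quick sketch of the arithmetic, using approximate public spec-sheet numbers (ballpark figures I'm assuming, not authoritative measurements):

```python
# Rough fp64 throughput comparison from approximate public spec sheets.
# The fp32 numbers and fp64:fp32 ratios below are ballpark assumptions,
# not authoritative measurements.
cards = {
    # name: (approx. fp32 TFLOPS, fp64 rate as a fraction of fp32)
    "Tesla V100": (14.0, 1 / 2),   # datacenter part: half-rate fp64
    "RTX 3090":   (35.6, 1 / 64),  # consumer parts throttle fp64 hard
    "RTX 4090":   (82.6, 1 / 64),
}

for name, (fp32_tflops, fp64_ratio) in cards.items():
    print(f"{name:>10}: ~{fp32_tflops * fp64_ratio:.1f} TFLOPS fp64")
```

Even granting the newer cards a much higher fp32 rate, the artificial fp64 ratio keeps them well below the V100's ~7 TFLOPS.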
 
I think the DoE just likes to spread the wealth around and avoid backing a single horse for both political and practical reasons (i.e. if you pick a losing horse, you end up with a supercomputer built on dead-end platform & technologies). In the generation before, they had some machines based on IBM/Nvidia.

HIP, CUDA, and oneAPI form the foundations of their respective compute APIs. OpenVINO is a higher layer specific to AI/DeepLearning.

HIP and oneAPI are still single-vendor efforts, even if they're open source. OpenCL/SYCL is the only true open standard, though Intel uses it as the foundation of oneAPI - AMD and Nvidia do not!
That's my point, though: DoE had working CUDA stuff. Nvidia has faster, working CUDA stuff that could go into new supercomputers. Why did the DoE choose to go non-Nvidia not just once, or twice, but three times? It speaks to some underlying problems that it had with Nvidia's approach — not that it couldn't continue to work with Nvidia hardware, but that it would rather hit the reset switch and shift to other companies for a change.

And yes, part of that is spreading the money and research around. But part of it is also because it doesn't like vendor lock-in and proprietary standards. I think CUDA was a learning phase and now it wants something a bit less limited to one set of hardware. Again, I've heard (unofficially) that a lot of ROCm/HIP development is being spurred by the DoE, to bring it up to the level of CUDA/etc. Nvidia tech and beyond. DoE bootstrapped CUDA and is now doing the same for the competition.

Fundamentally, the market leaders always seem to want to move toward proprietary standards and vendor lock-in, and the secondary and tertiary players (and below) want "open standards" — until they become market leaders, and then they start to do the same proprietary stuff. Perhaps DoE just spreads things around to help avoid that.
 

bit_user

Polypheme
Ambassador
The way I'm seeing it, especially given his reference to Core Ultra, is that they're looking to unload it back onto the CPU. But unless they're trying to do something like Google is with their Tensor chips, good luck with that.
Their VPU (i.e. the AI engine in new Meteor Lake / Core Ultra) and AMX (matrix extensions in new Intel server CPUs) are both ways to take little bites out of the discrete accelerator market, but the Gaudi accelerator he referenced is what they're actually using for a frontal assault on Nvidia. This came from a company called Habana Labs, which Intel acquired 3-4 years ago.

People are using GPUs instead of CPUs for a decided reason: they're faster.
Intel is also tackling that market. They have a "Datacenter Flex" series of dGPUs, which are essentially server-oriented versions of their ARC/Alchemist dGPUs. Then, they have their "Datacenter Max" GPUs, which are the monstrous PonteVecchio processors that are powering supercomputers.
 

bit_user

Polypheme
Ambassador
Isn't OpenCL itself basically flagging?
It's still going! Yes, it's lagging behind, but Intel is probably the biggest player still pushing it. There's also the SYCL standard (like OpenCL, it's developed via the industry consortium Khronos), which builds on OpenCL to support a single-source C++ programming model, like you can do with CUDA.

I've dabbled with SYCL, using Intel's toolchain, and it's pretty impressive. I know OpenVINO uses OpenCL, at least for running on GPUs. Not sure if it relies on SYCL or goes straight to OpenCL.

I've talked with at least a couple of people at companies that cancelled OpenCL support after previously having it, because it wasn't working as well as they wanted.
Eh, the OpenCL working group is pretty candid about its ongoing challenges. I've seen a good presentation they made on this, but I'm still trying to find it...

I wouldn't write it off, as there's really nothing comparable. Vulkan Compute is often pointed to as an alternative, but not for scientific/financial applications that require higher precision. Also, OpenCL is much easier to use than Vulkan. It'd be interesting if the Chinese end up reinvigorating OpenCL.

BTW, Linux/Mesa has a recent OpenCL re-implementation written in Rust, called Rusticl. It has the potential to extend OpenCL support to any GPU with a Mesa driver. Mesa is the userspace portion of the open source graphics stack on Linux; basically all of the open source GPU drivers use it and can therefore potentially be supported by Rusticl (note: Rusticl doesn't support all Mesa/Gallium3D drivers automatically - some Rusticl work is needed for each driver).

There's also an effort exploring OpenCL for distributed compute (e.g. in HPC applications).
 

jasonf2

Distinguished
"IF" it becomes the standard? That's what AMD and Intel are fighting against: it is already the standard for many (if not most) AI workloads. Nvidia got way ahead of the competition, which is now playing catch-up. But Pat's absolutely not wrong about many groups in the industry wanting to get away from CUDA and move to open standards. Let me cite three great examples of this:
  1. Frontier supercomputer
  2. El Capitan supercomputer
  3. Aurora supercomputer
All three indicate that the Department of Energy is very much interested in moving away from Nvidia hardware and CUDA, with the first two being all-AMD and the last being all-Intel. Of course, Nvidia would point to the Grace Hopper superchips as being unavailable back when these were commissioned. Perhaps it will start winning a bunch of US government contracts for future supercomputers, now that it has both CPU and GPU assets.

But make no mistake: The US government helped bootstrap CUDA back when it was first created. It needed an accessible programming language for HPC and other workloads, and Nvidia GPUs were the leaders at the time. The issue is that CUDA was proprietary and that has come back to bite them in the butt, so this time all the work on things like ROCm, HIP, OpenVINO, etc. is required to be open.

Fool me (the US gov't) once, shame on you. Fool me twice, shame on me. The major players really do want open standards to prevail, because it means they can go after the best hardware with a standardized (more or less) software ecosystem, rather than having to work on porting everything from CUDA to ROCm or OneAPI or OpenVINO or whatever.
CUDA's biggest issue with these entities is the lack of vendor diversity, not the standard itself. Perhaps an easier solution here would be something closer to what Intel had to do with the initial x86 licensing and IBM. If CUDA is to be the standard, that is fine, but Nvidia has to license it to the competition if it wants to be installed/utilized in government contracts. It's probably a little late for this, but developing an open standard to compete with CUDA directly would require Nvidia to have some motivation to be on board, which it doesn't. At least licensing is a win-win for the industry: Nvidia gets fees and the standard becomes accessible to other vendors.
 

bit_user

Polypheme
Ambassador
If CUDA is to be the standard, that is fine, but Nvidia has to license it to the competition if it wants to be installed/utilized in government contracts.
I'm not aware of such a policy. This requirement might be adopted on a project-specific basis, but I highly doubt it exists as a blanket policy.

It's probably a little late for this, but developing an open standard to compete with CUDA directly would require Nvidia to have some motivation to be on board, which it doesn't. At least licensing is a win-win for the industry: Nvidia gets fees and the standard becomes accessible to other vendors.
OpenCL was developed as an open standard to counter CUDA. Last I checked, the OpenCL working group was chaired by one of Nvidia's VPs. I'm not sure where to see who currently occupies that position.
 

abufrejoval

Reputable
Jun 19, 2020
337
236
5,060
No, like OpenPOWER did, it would come too late.
Well it was really more meant as: if you want other people's crown jewels, you better be ready to offer some of your own...
You raise a good point about the $Billions being spent on accelerators that will just become e-waste, as future AI models exceed their capacity and render them obsolete. Most of these aren't even built as PCIe cards you could put in a standard PC.
Microsoft is supposed to spend $50 billion in 2024 alone on AI data center buildout. Google is investing similar numbers in their TPU infrastructure. That's somewhere near the GDP of Estonia for each of them, and that money needs to offer a sizeable return within, say, five years.

All Nvidia Hopper assembly capacity for 2024 is already allocated, with Microsoft and some of the other giants getting nearly all of it. The giants have gone all-in, bought near-total exclusivity, and bet the future of their companies on AI becoming a viable commercial product for the masses, but I completely fail to see the value of AI under the family Christmas tree, no matter how much inference acceleration PC and mobile makers are putting into end-user devices.

And that exclusivity is not going to last past 2025 by which time they might be obsolete or the (x00 billion) bubble might have burst.

I've run all kinds of LLMs on my RTX 4090, up to 70B Llama-2 (albeit with somewhat ridiculous quantizations for that one), also Mistral 7B, JAIS 13B and 30B, even the completely ridiculous Yi model from 01.ai, and they are just laughable in how badly they hallucinate today.

How that's supposed to get better once you squeeze them into the edge, I just don't see.
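The "ridiculous quantizations" point follows from simple memory arithmetic: the weights alone of a 70B-parameter model only fit in a 24 GB card at well under 4 bits per weight. A minimal sketch (weights only; KV cache and activations make it worse):

```python
# Back-of-envelope: why 70B parameters need aggressive quantization to
# fit a 24 GB card like the RTX 4090. Weight storage only - the KV
# cache and activations need additional memory on top of this.
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (decimal)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

VRAM_GB = 24  # RTX 4090

for bits in (16, 8, 4, 2.5):
    size = model_size_gb(70, bits)
    verdict = "fits" if size <= VRAM_GB else "does not fit"
    print(f"{bits:>4} bits/weight -> {size:6.1f} GB ({verdict})")
```

At fp16 that's 140 GB and even 4-bit is 35 GB, so only sub-4-bit quantization squeezes 70B into 24 GB.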

This is much worse than the cryptocalypse, because much smarter people are gambling this time around...

However, some do still use PCIe form factor. If anyone wants a deal on some previous-generate GPU compute hardware, checkout Nvidia V100's on ebay. At 7 TFLOPS of fp64, it's still way more than you can get on consumer GPUs (if you actually need that sort of thing).

I've got a couple of V100 in my corporate lab. Back then when FP16 was a thing, they were pretty nice.

And just as a finger exercise I also ran the Superposition demo in fluid interactive game mode on them across 600 km, using my ultra-thin notebook as a remote screen. Most of my colleagues didn't even realize that it wasn't my Whiskey Lake ultrabook running the graphics, one of the weakest iGPUs of recent history...

Yes, remote gaming would be possible in theory, more attractive if you don't pay the electricity bill.

For LLM training they are way too small; for inference they don't support the lower-precision data types and sparsity it takes to compete today. They are very nearly unusable for current AI, and an RTX 4070 will push them against the wall on pretty near every workload. No DLSS, either, and gaming has just moved to the point where eye candy needs it.

Consumers would have to find ways of cooling them, because they come only with shrouds. Supermicro sold them in a workstation chassis with external fans, that at least weren't quite the turbines you have in the data center.

All these Hoppers are much worse at any other purpose; all these TPUs are turning into e-waste as we speak. I've got a couple of K80s in storage, quite capable cards too at FP64, but out of favor even with CUDA these days and with zero residual value.

I'm trying to imagine five years from now, and I'm just glad I'm only as invested in AI as I was in IoT: dabbling is fun, but I wouldn't want my pension fund in on that gamble, either.
 

bit_user

Polypheme
Ambassador
I've run all kinds of LLMs on my RTX 4090, up to 70B Llama-2 (albeit with somewhat ridiculous quantizations for that one), also Mistral 7B, JAIS 13B and 30B, even the completely ridiculous Yi model from 01.ai, and they are just laughable in how badly they hallucinate today.

How that's supposed to get better once you squeeze them into the edge, I just don't see.
Remember that LLMs are just one kind of technology. They were never meant to solve all AI problems.

My opinion is that AI assistants of the future will work more like humans do, in the sense that they'll become more adept at translating your queries into searches, and then will examine the results and digest them for you. That minimizes the opportunities for hallucination, because you're not depending on the AI to be an all-knowing oracle.

However, AI isn't just about LLMs or image generators - those are just the latest, most popular fads.

I've got a couple of V100 in my corporate lab. Back then when FP16 was a thing, they were pretty nice.

And just as a finger exercise I also ran the Superposition demo in fluid interactive game mode on them across 600 km, using my ultra-thin notebook as a remote screen. Most of my colleagues didn't even realize that it wasn't my Whiskey Lake ultrabook running the graphics, one of the weakest iGPUs of recent history...

Yes, remote gaming would be possible in theory, more attractive if you don't pay the electricity bill.
That demo/benchmark is non-interactive, so latency didn't come into play. Essentially, all you did was the equivalent of a live stream.

For LLM training they are way too small; for inference they don't support the lower-precision data types and sparsity it takes to compete today.
That's why they're on ebay for cheap. I said people needing fp64 horsepower should take a look.

Consumers would have to find ways of cooling them, because they come only with shrouds.
Yes, either put them in a server chassis with enough airflow, or you'll have to rig some serious fans in a conventional PC case.
 

abufrejoval

Reputable
Jun 19, 2020
337
236
5,060
That demo/benchmark is non-interactive, so latency didn't come into play. Essentially, all you did was the equivalent of a video conference.
You can switch it into interactive mode and run around in the lab. Latency was surprisingly good, considering that I was using VirtualGL/TigerVNC to make it happen and there were 600 km between my laptop and the V100.

Steam remote play should be better, but I never tried that on the V100.
That's why they're on ebay for cheap. I said people needing fp64 horsepower should take a look.
Most of the HPC stuff that runs on FP64 can't deal with the limited memory capacity of these GPUs without significant rewrites. And the vector extensions on newer x86 CPUs close the window of opportunity from the other side, because that often just requires a compiler update to exploit.

I've been trying to put these K80s to a "good new home" and failed. I'm seeing only tiny windows of opportunity for V100, mostly in educational settings.
 

abufrejoval

Reputable
Jun 19, 2020
337
236
5,060
Remember that LLMs are just one kind of technology. They were never meant to solve all AI problems.

My opinion is that AI assistants of the future will work more like humans do, in the sense that they'll become more adept at translating your queries into searches, and then will examine the results and digest them for you. That minimizes the opportunities for hallucination, because you're not depending on the AI to be an all-knowing oracle.
That was my hope, too, that you could basically use the LLMs for their "linguistic skills" and combine them with some type of expert system for hard ground truths: from what I heard that was the approach IBM's Watson was taking some years back.

But my researchers tell me that you just can't keep LLMs from hallucinating, even if you fine-tune the hard ground truths into the models or quote them in the prompts.

Perhaps that will get solved, but will they manage before the current hardware has expired?

I'm less concerned about this not being solvable than I am by the momentum of these investments: they can't really afford to stumble in their execution when hundreds of billions are at stake and workloads get baked into bespoke hardware that just won't perform competitively at anything off the design path.
 

bit_user

Polypheme
Ambassador
Most of the HPC stuff that runs on FP64 can't deal with the limited memory capacity of these GPUs without significant rewrites.
If it already supports CUDA or OpenCL 1.2, then it should be fine.

And the vector extensions on newer x86 CPUs close the window of opportunity from the other side, because that often just requires a compiler update to exploit.
Sure, if you have like $20k+ to blow on the latest dual-CPU server. Even then, they just barely match the V100's memory bandwidth.

I'm seeing only tiny windows of opportunity for V100, mostly in educational settings.
Eh, probably not so tiny, unless you have ready access to the big servers mentioned above. For long-running jobs, the V100 would be a clear win over renting time in the cloud.
 

jasonf2

Distinguished
I'm not aware of such a policy. This requirement might be adopted on a project-specific basis, but I highly doubt it exists as a blanket policy.


OpenCL was developed as an open standard to counter CUDA. Last I checked, the OpenCL working group was chaired by one of Nvidia's VPs. I'm not sure where to see who currently occupies that position.
To my knowledge no such policy exists, either. It was just an idea, based on IBM's requirement that x86 be licensed to other companies, to diversify the vendor base, in order for Intel to get the contract for the original PC lines. And while the Khronos Group has some awesome logo stickers, the last OpenCL standard (3.0 - and OpenCL is actually an Apple trademark licensed to the Khronos Group) hasn't been updated since 2020. CUDA, on the other hand, has an almost continuous update cycle (every month or two), with the last release in October of 2023. And it's not just the software cycle being kept up: Nvidia is also mating hardware development to the software stack. When a company gets out ahead like this, has a performance monopoly, and enjoys industry-wide software developer support, it is very difficult to replace. This is especially true in bleeding-edge stuff where performance is so critical.
 

abufrejoval

Reputable
Jun 19, 2020
337
236
5,060
If it already supports CUDA or OpenCL 1.2, then it should be fine.

Sure, if you have like $20k+ to blow on the latest dual-CPU server. Even then, they just barely match the V100's memory bandwidth.


Eh, probably not so tiny, unless you have ready access to the big servers mentioned above. For long-running jobs, the V100 would be a clear win over renting time in the cloud.
Most of the HPC stuff my former colleagues at Bull were trying to put on their Sequanas was OpenMP.

And the then-CIO of DWD (the German weather service) told me that rewriting their climate models even for a new fabric (Cray at the time) would take too many years to even consider. There is a lot of HPC and engineering software out there that would fit ideally on FP64 GPUs, but would cost vastly more to adapt than any hardware savings might return. Car crash simulations and seismic models were quoted as prime examples. They put a lot of hope into Xeon Phi at the time, because it seemed to avoid the GPGPU rewrites. NEC's SX-Aurora was another contender, because it reduced the refactoring effort.

But sure, if your workload already is CUDA FP64, a V100 could be great, especially since you can't buy anything really up-to-date for your finite element design stuff or fluid dynamics: it's all taken by the AI guys and mostly a theoretical market.

And if you're in the FP64 arena, you can't even escape into consumer territory, because you can't afford to run your engineering simulations without ECC: that's where glitches cause serious damage or loss of life.

Everything long-running typically becomes cheaper on-premises than in the cloud. But the biggest issue currently is that there is almost no working market left: V100s are about the only thing you can actually still rent in the clouds; Ada Lovelace or Hopper quite simply cannot be booked unless you commit to three years of exclusive use at cutthroat prices.

So yes, if your computing needs fit a V100, you may get unusually lucky today.

I just don't see that being very many people, who don't already have them.
I'd like to be wrong, because it's sad to see these cards being trashed.

But that's nothing compared to the shiploads of TPUs that were, are, and will be retired, and even crazy gamers won't put DGX systems in their cellars, because the lights go out when they turn them on.
 

Order 66

Grand Moff
Apr 13, 2023
2,157
903
2,570
I'm all for expanding open-source alternatives, but are there examples (in general) of open-source solutions being better than the things they are replacing?
 