It makes sense: if CUDA becomes the standard, Nvidia gets to steer its direction or add hoops for competitors to jump through.
"IF" it becomes the standard? That's what AMD and Intel are fighting against: it is already the standard for many, if not most, AI workloads. Nvidia got way ahead of the competition, which is now playing catch-up. But Pat's absolutely not wrong that many groups in the industry want to get away from CUDA and move to open standards. Let me cite three great examples:
- Frontier supercomputer
- El Capitan supercomputer
- Aurora supercomputer
All three indicate that the Department of Energy is very much interested in moving away from Nvidia hardware and CUDA, with the first two being all-AMD and the last being all-Intel. Of course, Nvidia would point out that the Grace Hopper superchips weren't available when these systems were commissioned. Perhaps it will start winning US government contracts for future supercomputers now that it has both CPU and GPU assets.
But make no mistake: the US government helped bootstrap CUDA back when it was first created. It needed an accessible programming language for HPC and other workloads, and Nvidia GPUs were the leaders at the time. The issue is that CUDA was proprietary, and that has come back to bite the government in the butt, so this time all the work on things like ROCm, HIP, OpenVINO, etc. is required to be open.
Fool me (the US gov't) once, shame on you. Fool me twice, shame on me. The major players really do want open standards to prevail, because it means they can chase the best hardware with a standardized (more or less) software ecosystem, rather than having to port everything from CUDA to ROCm or oneAPI or OpenVINO or whatever.
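To give a sense of what that porting burden looks like in practice: AMD's HIP deliberately mirrors the CUDA runtime API, so a lot of the translation is mechanical renaming (which is roughly what AMD's hipify tools automate). Here's a toy sketch of that idea, using a handful of real CUDA-to-HIP name pairs; this is my own illustration, not AMD's actual tool.

```python
# Toy sketch of CUDA -> HIP source translation. The name pairs below are
# real API correspondences, but the translator itself is a naive
# string-replacement illustration, nothing like the real hipify tools.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def toy_hipify(source: str) -> str:
    """Mechanically rename known CUDA identifiers to their HIP equivalents."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(toy_hipify("cudaMalloc(&d_x, n * sizeof(float));"))
# -> hipMalloc(&d_x, n * sizeof(float));
```

The easy cases really are this mechanical; the pain shows up in CUDA-specific libraries (cuBLAS, cuDNN), inline PTX, and performance tuning, which is why teams would rather target an open standard once than repeat this per vendor.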