CUDA was something no one else had, and unless AMD comes up with something that's as revolutionary as CUDA was, they can't recreate CUDA's success.
My point wasn't to dig into deep history, but OpenCL came along soon enough after CUDA, and before GPU compute really took off, that there was a window where the industry could've turned away from CUDA. Sadly, the key players who could've made that happen (Google, Apple, and Microsoft) instead pursued their own compute APIs, and Nvidia successfully exploited the resulting fragmentation.
Anyway, what AMD did, like 5 years ago, was build a CUDA compatibility layer and porting toolchain (HIP and the surrounding ROCm tooling), which greatly streamlines the process of porting CUDA code. So, AMD has basically done the best it can to nullify the API-level advantages of CUDA.
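To make that concrete, here's a minimal sketch of what a ported kernel looks like. The vector-add example is just something I made up for illustration, not anything from the article, but it shows the pattern: HIP mirrors the CUDA runtime API nearly one-to-one, and the hipify tools mostly just rename cuda* calls to hip* calls.

```cpp
// vector_add.cpp - a made-up vector-add, shown in its HIP-ported form.
// In the CUDA original, every hip* identifier below is the matching cuda*
// one (cudaMalloc, cudaMemcpy, cudaFree, etc.); the hipify tools perform
// that renaming mechanically. Build with: hipcc vector_add.cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Kernel syntax, __global__, and the thread-index built-ins are unchanged.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));   // was cudaMalloc
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // hipcc accepts the familiar triple-chevron launch syntax.
    vector_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f (expect 3.0)\n", c[0]);

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

For straightforward CUDA code like this, the port is largely mechanical; the friction tends to show up in the build system, in library dependencies (cuDNN and cuBLAS map to MIOpen and rocBLAS/hipBLAS), and in performance tuning.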
They've also ported most of the popular deep learning frameworks to use their GPUs, so most AI users should (ideally) see AMD's solution as a drop-in replacement. Now, it's obviously not going to be completely seamless, especially for more advanced use cases. That's why you want a group of motivated, capable, and resourceful users (like university students and postdocs, hence my suggestion). I'm sure a lot of folks at universities are currently starved for AI training time, so they're already highly motivated to use an alternative solution, if one were available to them.
Nvidia was on their 4th generation of Tensor Cores when AMD released their first generation of "AI Cores". Even though the 7000-series GPUs all have AI Cores, AMD still isn't utilizing them, even though you paid for them.
I'm not sure what you're even talking about here. In CDNA (the MI100 having launched 4 years ago), AMD introduced Matrix Cores, which were actually quite a bit more general than Nvidia's Tensor Cores of the day.
Conversely, even as of RDNA3, their client GPUs don't actually have tensor or matrix cores. WMMA (Wave Matrix Multiply-Accumulate) instructions rely on the existing vector compute machinery, rather than adding any dedicated, new matrix-multiply pipelines.
XDNA is another thing entirely. It didn't come from their GPU division and currently has nothing to do with either their client or server GPUs.
They just don't seem to have the software chops to make an AI-driven version of FSR, like Nvidia's AI-driven DLSS.
This article really isn't about their client GPUs. So, I'm not even going to touch the subject of FSR, because that's even more irrelevant.