News AMD splits ROCm toolkit into two parts – ROCm AMDGPU drivers get their own branch under Instinct datacenter GPU moniker

I believe this is basically what Nvidia does. Each version of their driver supports a certain range of CUDA versions, and different driver branches cover different ranges of GPU generations. The driver and the CUDA runtime are typically installed as separate packages, so you can choose the driver according to which GPU and kernel you have and which CUDA versions you want to run.
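A minimal sketch of how that separation shows up on the application side: the CUDA runtime API can report both the highest CUDA version the installed driver supports and the runtime version actually in use, so you can check that a separately installed driver and toolkit are compatible. cudaDriverGetVersion and cudaRuntimeGetVersion are standard runtime API calls; the rest is just illustrative boilerplate.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;

    // Highest CUDA version supported by the installed driver package
    cudaDriverGetVersion(&driverVersion);

    // CUDA runtime version this program is linked against
    cudaRuntimeGetVersion(&runtimeVersion);

    // Versions are encoded as 1000*major + 10*minor (e.g. 12040 -> 12.4)
    printf("Driver supports CUDA up to %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime in use is CUDA %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    // A runtime newer than the driver supports typically means kernels won't launch
    if (runtimeVersion > driverVersion) {
        printf("Warning: runtime is newer than the driver supports; update the driver package.\n");
    }
    return 0;
}
```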
 
That's right. Nvidia also releases separate packages for their kernel driver.

I think amdgpu now has a stable enough API that it doesn't need tight synchronisation with ROCm. Another reason to remove the driver from the toolkit is that it is already, and will increasingly be, integrated into the mainline Linux kernel source tree.
 