I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor-agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?
It's the level of integration that's different. I'm pretty sure DirectML operates at a higher level, as its own API with its own operator dispatches, which keeps you from using it from within the graphics pipeline and adds overhead when you try to combine the two. This new change allows simple AI models to be evaluated directly within shader invocations.
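To make that concrete, here's the rough shape of it in CUDA rather than HLSL (just a sketch; every name, size, and weight below is made up, and the actual DirectX intrinsics don't look like this): a tiny fully-connected layer evaluated inline, per thread, inside the kernel that's doing the "shading". The point is that the network runs inside the per-pixel code instead of as a separate dispatch, which is what DirectML would force on you.

```
#include <cstdio>

// Invented sizes for a toy fully-connected layer.
#define IN_DIM  8
#define OUT_DIM 4

// Hypothetical weights, uploaded once by the host. In a real renderer
// these would come from a trained model.
__constant__ float d_weights[OUT_DIM * IN_DIM];
__constant__ float d_bias[OUT_DIM];

// Plain matrix-vector multiply + bias + ReLU, evaluated inline by each
// thread. The pitch of the new intrinsics is that a loop like this gets
// handed to tensor cores (where present) instead of scalar ALUs.
__device__ void tiny_layer(const float* in, float* out) {
    for (int o = 0; o < OUT_DIM; ++o) {
        float acc = d_bias[o];
        for (int i = 0; i < IN_DIM; ++i)
            acc += d_weights[o * IN_DIM + i] * in[i];
        out[o] = acc > 0.f ? acc : 0.f;  // ReLU
    }
}

// Stand-in for a pixel shader: build some per-"pixel" features, run the
// layer, write one channel out.
__global__ void shade(float* result, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float features[IN_DIM];
    for (int i = 0; i < IN_DIM; ++i)
        features[i] = (idx + i) * 0.01f;  // pretend these are UVs etc.
    float out[OUT_DIM];
    tiny_layer(features, out);
    result[idx] = out[0];
}

int main() {
    const int n = 256;
    float h_w[OUT_DIM * IN_DIM], h_b[OUT_DIM] = {0.f, 0.f, 0.f, 0.f};
    for (int i = 0; i < OUT_DIM * IN_DIM; ++i) h_w[i] = 0.1f;
    cudaMemcpyToSymbol(d_weights, h_w, sizeof(h_w));
    cudaMemcpyToSymbol(d_bias, h_b, sizeof(h_b));

    float* d_result;
    cudaMalloc(&d_result, n * sizeof(float));
    shade<<<1, n>>>(d_result, n);

    float h_result[n];
    cudaMemcpy(h_result, d_result, sizeof(h_result), cudaMemcpyDeviceToHost);
    printf("pixel 0 -> %f\n", h_result[0]);
    cudaFree(d_result);
    return 0;
}
```

The part that matters is that `tiny_layer` is called from inside the per-thread shading code, not recorded as a separate command somewhere else.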
And I guess these new AI instructions run on the standard shader cores rather than dedicated AI hardware, leading to better utilization.
If a GPU supports the extension, the driver should map these instructions onto tensor cores or the best alternative the chip has. Either way, they're going to run on something within the GPU; there's no way it's shipping packets of instructions and data off to a separate NPU or anything like that.
Maybe this is really in-the-weeds stuff aimed at engine developers, not intended for me, and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release, because that's the thing everyone does these days.
One thing this brings to mind is Jensen's comments about DLSS 4 and predictive framegen, where they're conceivably doing some AI fill of areas in the predicted frame that weren't visible in prior frames. That would require executing an AI model directly in the graphics pipeline, and it's something you might even want to do from within a shader, if you could.
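Pure speculation on my part, but the control flow would look something like this: warp the previous frame forward, flag the pixels nothing landed on, and run a model over valid neighbors to hallucinate the holes. A toy CUDA sketch of that flow follows; to keep it runnable I've swapped the learned inpainting network for a plain neighbor average, and nothing here reflects how DLSS actually works internally.

```
#include <cstdio>

// Invented frame size for the toy example.
#define W 16
#define H 16

// Pixels the reprojection missed get their color predicted from valid
// neighbors. A real implementation would run a learned inpainting model
// here; the average is a stand-in so the sketch compiles and runs.
__global__ void fill_disoccluded(const float* warped, const int* valid,
                                 float* out) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;
    int idx = y * W + x;

    if (valid[idx]) {            // reprojection landed here: keep it
        out[idx] = warped[idx];
        return;
    }
    // Gather valid neighbors as "features" for the stand-in model.
    float acc = 0.f, wsum = 0.f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            int n = ny * W + nx;
            if (valid[n]) { acc += warped[n]; wsum += 1.f; }
        }
    out[idx] = wsum > 0.f ? acc / wsum : 0.f;
}

int main() {
    float h_warped[W * H];
    int h_valid[W * H];
    for (int i = 0; i < W * H; ++i) {
        h_valid[i] = (i % 7 != 0);               // fake disocclusion holes
        h_warped[i] = h_valid[i] ? 1.0f : 0.0f;
    }
    float *d_warped, *d_out; int *d_valid;
    cudaMalloc(&d_warped, sizeof(h_warped));
    cudaMalloc(&d_out, sizeof(h_warped));
    cudaMalloc(&d_valid, sizeof(h_valid));
    cudaMemcpy(d_warped, h_warped, sizeof(h_warped), cudaMemcpyHostToDevice);
    cudaMemcpy(d_valid, h_valid, sizeof(h_valid), cudaMemcpyHostToDevice);

    dim3 block(8, 8), grid(W / 8, H / 8);
    fill_disoccluded<<<grid, block>>>(d_warped, d_valid, d_out);

    float h_out[W * H];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("filled pixel 0: %f\n", h_out[0]);    // index 0 was a hole
    return 0;
}
```

If the model in the middle of that were a small network, you'd want exactly the in-shader matrix-vector support this announcement describes.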
Neural textures are another use case that comes to mind. Shaders are used for more than that, though. Tessellation shaders, for instance, synthesize geometry on the fly, and neural compression techniques should be applicable there too. Geometry shaders sit at yet another stage of the pipeline, after tessellation but before rasterization, and can control instancing and transformations, making them an option for implementing AI-driven kinematics.
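Neural texture decompression is maybe the cleanest illustration of why you'd want the matrix-vector op in-shader: instead of sampling RGB directly, you sample a small latent code per texel and decode it with a tiny MLP at sample time. Here's a toy CUDA version, with invented sizes and constant weights standing in for a trained decoder:

```
#include <cstdio>

// Invented sizes: each texel stores a LATENT-wide code instead of RGB.
#define LATENT 4
#define TEXELS 64

// One linear layer mapping latent -> RGB. A real decoder would be
// trained offline alongside the compressed texture.
__constant__ float dec_w[3 * LATENT];
__constant__ float dec_b[3];

__global__ void sample_neural_texture(const float* latents, float* rgb,
                                      int n) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= n) return;
    const float* z = &latents[t * LATENT];
    // Decode: one small matrix-vector product per sample. This inner
    // loop is exactly the kind of thing the new intrinsics would hand
    // to the matrix hardware.
    for (int c = 0; c < 3; ++c) {
        float acc = dec_b[c];
        for (int i = 0; i < LATENT; ++i)
            acc += dec_w[c * LATENT + i] * z[i];
        rgb[t * 3 + c] = 1.f / (1.f + expf(-acc));  // sigmoid to [0,1]
    }
}

int main() {
    float h_w[3 * LATENT], h_b[3] = {0.f, 0.f, 0.f};
    for (int i = 0; i < 3 * LATENT; ++i) h_w[i] = 0.25f;
    cudaMemcpyToSymbol(dec_w, h_w, sizeof(h_w));
    cudaMemcpyToSymbol(dec_b, h_b, sizeof(h_b));

    float h_latents[TEXELS * LATENT];
    for (int i = 0; i < TEXELS * LATENT; ++i)
        h_latents[i] = (i % 5) * 0.1f;              // fake latent texture

    float *d_latents, *d_rgb;
    cudaMalloc(&d_latents, sizeof(h_latents));
    cudaMalloc(&d_rgb, TEXELS * 3 * sizeof(float));
    cudaMemcpy(d_latents, h_latents, sizeof(h_latents),
               cudaMemcpyHostToDevice);

    sample_neural_texture<<<1, TEXELS>>>(d_latents, d_rgb, TEXELS);

    float h_rgb[TEXELS * 3];
    cudaMemcpy(h_rgb, d_rgb, sizeof(h_rgb), cudaMemcpyDeviceToHost);
    printf("texel 0 -> (%f, %f, %f)\n", h_rgb[0], h_rgb[1], h_rgb[2]);
    return 0;
}
```

Same shape of decode would apply in a tessellation stage, just emitting displaced vertices instead of colors.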