News Microsoft prepares DirectX to support neural rendering for AI-powered graphics — a key feature of the update will be Cooperative Vector support

I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?

This feature allows AI tasks to run in different shader stages, such as in a pixel shader, enabling efficient execution of neural networks without monopolizing the GPU.
And I guess these new AI instructions run on standard shaders rather than dedicated AI hardware, leading to better utilization.
Microsoft has confirmed that Cooperative Vectors will leverage Tensor Cores in Nvidia's new RTX 50-series GPUs to enable neural shaders.
...except when they use dedicated AI hardware anyway?

Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
 
Ahh yes, let's help devs cut corners optimizing games even more e_e...

By offloading complex rendering tasks to AI, this approach improves both performance and visual fidelity while reducing the computational burden on traditional rendering pipelines. Technologies like Nvidia’s DLSS and AMD’s FSR have already demonstrated the potential of AI-enhanced rendering.

...are we ignoring the fact that it's not all good stuff? There ARE downsides to using it...
 
Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
To catch a thief, send a thief. Toss it all into ChatGPT and ask it to decode it for you :)
 
I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?


And I guess these new AI instructions run on standard shaders rather than dedicated AI hardware, leading to better utilization.

...except when they use dedicated AI hardware anyway?

Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
Basically it's reducing the load on the ROPs and doing much of the computation that the ROPs normally would through matrix math instead, because it's quicker and more power efficient.
 
I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?
It's the level of integration that's different. I'm pretty sure DirectML operates in a different, high-level context, limiting your ability to utilize it from within the graphics pipeline and increasing the overhead of doing so. This new change allows simple AI models to be used directly within shader invocations.
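For a concrete sense of what "simple AI models directly within shader invocations" means, here's a minimal scalar sketch in plain C++ (not real HLSL and not the actual Cooperative Vector API; the layer size and names are invented for illustration) of the kind of tiny network layer a single shader invocation might evaluate:

```cpp
#include <array>
#include <cmath>

// Hypothetical size: a tiny 8-wide fully-connected layer, the kind of
// "simple AI model" that might run once per shader invocation.
constexpr int N = 8;

// One layer: y = relu(W * x + b). A cooperative-vector-style API could hand
// this whole matrix-vector product to tensor cores as a single operation;
// here it's spelled out as the scalar multiply-accumulate work it reduces to.
std::array<float, N> dense_relu(const std::array<std::array<float, N>, N>& W,
                                const std::array<float, N>& b,
                                const std::array<float, N>& x) {
    std::array<float, N> y{};
    for (int i = 0; i < N; ++i) {
        float acc = b[i];
        for (int j = 0; j < N; ++j)
            acc += W[i][j] * x[j];   // FMA work, tensor-core friendly
        y[i] = std::fmax(acc, 0.0f); // ReLU activation
    }
    return y;
}
```

The point of the lower-level integration is that this whole evaluation stays inside the shader, with no round trip through a separate high-level runtime.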

And I guess these new AI instructions run on standard shaders rather than dedicated AI hardware, leading to better utilization.
If a GPU supports the extension, then the GPU should run them on tensor cores or whatever is the best alternative it has. No matter what, it's going to run on something within the GPU, though. There's no way it's shipping packets of instructions & data off to a separate NPU, or anything like that.

Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
One thing this brings to mind is Jensen's comments about DLSS4 and predictive framegen, where they're conceivably doing some AI fill of areas in the predicted frame that weren't visible in prior frames. That would require executing an AI model directly in the graphics pipeline and is something you might even do from within a shader, if you could.

Neural textures are another use case that comes to mind. However, shaders are used for more than that. For instance, tessellation shaders are used to synthesize geometry on-the-fly, and neural compression techniques should also be applicable there. Geometry shaders execute even earlier in the pipeline and can control instancing and transformations, making them an option for implementing AI-driven kinematics.
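As a rough illustration of the neural-texture idea, here's a hedged C++ sketch (the latent width, decoder shape, and every name below are my assumptions, not anything Microsoft has specified) of decoding one texel from a compressed feature grid:

```cpp
#include <array>
#include <cmath>

constexpr int LATENT = 8; // assumed latent-vector width, purely illustrative

// Stub standing in for a bilinear fetch from a texture of latent feature
// vectors (a real shader would sample compressed feature data here).
std::array<float, LATENT> fetch_latent(float u, float v) {
    std::array<float, LATENT> z{};
    for (int j = 0; j < LATENT; ++j)
        z[j] = std::sin((u + v * 2.0f) * float(j + 1)); // placeholder pattern
    return z;
}

// Decode one texel: run the latent vector through a tiny linear decoder and
// squash to [0,1]. The matrix-vector product is the part a cooperative-vector
// API would accelerate from inside the shader.
std::array<float, 3> decode_texel(float u, float v,
                                  const std::array<std::array<float, LATENT>, 3>& W,
                                  const std::array<float, 3>& bias) {
    auto z = fetch_latent(u, v);
    std::array<float, 3> rgb{};
    for (int i = 0; i < 3; ++i) {
        float acc = bias[i];
        for (int j = 0; j < LATENT; ++j)
            acc += W[i][j] * z[j];
        rgb[i] = 1.0f / (1.0f + std::exp(-acc)); // sigmoid keeps colors in range
    }
    return rgb;
}
```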
 
Basically it's reducing the load on the ROPs and doing much of the computation that the ROPs normally would through matrix math instead, because it's quicker and more power efficient.
None of this aligns with my understanding of ROPs.

Yes, you could substitute some functions usually handled by a ROP using shader code, but it wouldn't be more efficient than letting a hard-wired ROP handle it (which is why hard-wired ROPs are a thing). The only reason you'd do so would be functional, not for efficiency's sake.
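For context, this is the sort of fixed-function work a ROP actually does: standard framebuffer blending. A scalar C++ sketch of it (my own illustration, not driver code):

```cpp
struct RGBA { float r, g, b, a; };

// The classic "over" blend (SRC_ALPHA / ONE_MINUS_SRC_ALPHA): a handful of
// multiply-adds per pixel, hard-wired in ROP silicon. Emulating it in shader
// code works, but it can't beat the dedicated path on efficiency.
RGBA blend_over(RGBA src, RGBA dst) {
    float ia = 1.0f - src.a;
    return { src.r * src.a + dst.r * ia,
             src.g * src.a + dst.g * ia,
             src.b * src.a + dst.b * ia,
             src.a         + dst.a * ia }; // one common convention for alpha
}
```

A few multiply-adds like these are trivially cheap in dedicated silicon, which is why matrix hardware isn't the bottleneck-relief story for ROPs.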
 
What a bad idea!!
AI and neural technologies are still under heavy development and changing a lot year over year.
So, what's the point of bloating DX12 with some tech that will age like fine milk?
DX12's whole point was to get a cleaner API, similar to modern Mantle/Vulkan and Metal 2. Doing this undercuts DX12's main quality.
 
What a bad idea!!
AI and neural technologies are still under heavy development and changing a lot year over year.
So, what's the point of bloating DX12 with some tech that will age like fine milk?
They're just giving developers the ability to dabble with it. I think the primitives they exposed are fairly simple and could be implemented using conventional vector primitives on GPUs that lack things like Tensor Cores (a sketch of that fallback follows below). IMO, it's not really tying us down to current GPU hardware any more than we already are.

Anyway, Tensor cores came out like 7 years ago and haven't changed much since then (aside from adding support for more data types). In GPU terms, that makes them quite stable.
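To make that fallback concrete, here's a hedged C++ sketch (all names invented) of how such a primitive could lower to conventional vector math:

```cpp
// A 4-wide "float4" stand-in for a conventional GPU vector register.
struct float4 { float x, y, z, w; };

float dot(float4 a, float4 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// A 4x4 matrix-vector product built only from dot products and FMAs, the
// vector primitives every GPU already has. A cooperative-vector op could
// lower to exactly this on hardware without tensor cores, so exposing it
// doesn't strand older GPUs.
float4 mat_vec(const float4 rows[4], float4 v) {
    return { dot(rows[0], v), dot(rows[1], v),
             dot(rows[2], v), dot(rows[3], v) };
}
```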

Dx12 point was to get a cleaner api,
That's the host API layer, and it is. This is talking about the shading language, which is a separate matter.

similar to modern Mantle/Vulkan
They don't even have a shading language. That should tell you just how little they were concerned with that aspect! Developers using Vulkan still had to write their shaders in something like GLSL, until Microsoft recently opened up HLSL for use in it.
 
Maybe this will be a use for the NPU in AI PCs that people actually want, like some enhanced upscaling or post-processing?
That's not what it sounds like. If this is letting game devs embed inferencing in their shaders, then there's no way it's going to be offloading anything to a NPU. The communication latency and overhead of doing that would be orders of magnitude too high.
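Some back-of-envelope C++ to show the scale of the problem (every figure below is an assumption for illustration, not a measurement):

```cpp
#include <cstdio>

int main() {
    // Back-of-envelope only; all numbers are assumed for illustration.
    const double frame_budget_us   = 16666.0; // 60 fps frame time, microseconds
    const double npu_round_trip_us = 50.0;    // optimistic host<->NPU hop incl. copies
    const double dispatches        = 2000.0;  // inference calls per frame (e.g. per draw)

    const double total_us = npu_round_trip_us * dispatches;
    std::printf("offload cost: %.0f us vs %.0f us budget (%.1fx over)\n",
                total_us, frame_budget_us, total_us / frame_budget_us);
    // ~100,000 us of round trips against a ~16,667 us budget: the latency
    // alone blows the frame, which is why shader-embedded inferencing has
    // to stay on the GPU.
    return 0;
}
```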
 
"Microsoft’s open approach could democratize access, fostering greater innovation and competition"

I mean, it's not Vulkan, so "open" is a bit generous IMO.

Are they ready to call it DirectX 13, or will we just stay on 12 until Windows 12 drops? Or better yet, you know MS would like to call it DirectXAI.

I just like how Microsoft made a product plug for the nVidia 50 series.
 
Are they ready to call it DirectX 13, or will we just stay on 12 until Windows 12 drops? Or better yet, you know MS would like to call it DirectXAI.
I think you're on to something.

I'll bet the next big DX rev will incorporate some major accommodations for neural rendering. My guess is they're still trying to figure out what sort of pipeline changes that would entail, which is why they want to enable devs to do more experimenting and see which ideas and approaches stick.