News: Nvidia teams up with Microsoft to put neural shading into DirectX, giving devs access to AI tensor cores

It sounds like it could lead to some really impressive performance gains versus standard rasterization in a select few games, and also leave some otherwise well-performing cards unable to deliver acceptable performance in those titles. It will probably be a great selling point for the latest generation of Nvidia cards.

But it could be a great performance and efficiency boost for any GPU with a decent amount of that AI-style processing power, if implemented altruistically, which it won't be, at least at first.
 
It seems the partnership between Microsoft and nVidia is here to stay. Previously, DirectX was built with input from all actors; now it's defined only by nVidia, and AMD and Intel can only follow.
 
What about Vulkan? I'd like to see how quickly the open source community can advance this.
Vulkan has had cooperative vector support for a while, which is how people have been achieving good AI inference performance with it. I don't know how much more is involved in MS' new developments, but I think that's the main thing.

Interestingly, that article also mentions GLSL_NV_cooperative_matrix2, which implies you should even be able to do it from OpenGL.
 
What could possibly go wrong for the industry, right?
I'll admit that I was a little dismissive of neural texture compression as blue-sky research when news first broke about it, a couple of years ago. However, when Nvidia started playing up the technology two months ago (during CES), I took a deeper look and it sounds pretty solid. It is a significant compute vs. memory tradeoff, though, and I'm not really sure where it makes the most sense to apply it.
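To make that tradeoff concrete, here's a toy CUDA sketch of the general idea, and emphatically not Nvidia's actual NTC format: the texture stored in memory shrinks to a grid of small latent vectors plus the weights of a tiny decoder MLP, and every sample pays for that saving with a couple of matrix-vector products at decode time. All names and sizes here (NeuralTexture, LATENT_DIM, HIDDEN_DIM, decode_texel) are invented for illustration.

```cuda
// Toy sketch of the compute-vs-memory tradeoff behind neural texture
// compression. This is NOT Nvidia's NTC format; the layout, sizes and
// names are made up purely to illustrate the idea.
#include <cuda_runtime.h>

constexpr int LATENT_DIM = 8;   // small latent vector stored per texel
constexpr int HIDDEN_DIM = 16;  // width of the tiny decoder MLP

struct NeuralTexture {
    const float* latents;  // [width * height * LATENT_DIM] -- the "compressed" data
    const float* w0;       // [HIDDEN_DIM * LATENT_DIM] first-layer weights
    const float* b0;       // [HIDDEN_DIM]              first-layer bias
    const float* w1;       // [3 * HIDDEN_DIM]          second-layer weights -> RGB
    const float* b1;       // [3]                       second-layer bias
    int width, height;
};

// Reconstruct one RGB texel: a small memory fetch, then two matrix-vector
// products of compute. A plain BCn fetch is the opposite tradeoff: bigger
// memory footprint and bandwidth, almost no ALU work.
__device__ void decode_texel(const NeuralTexture& tex, int x, int y, float rgb[3])
{
    const float* z = &tex.latents[(y * tex.width + x) * LATENT_DIM];

    float h[HIDDEN_DIM];
    for (int i = 0; i < HIDDEN_DIM; ++i) {      // layer 0: latent -> hidden, ReLU
        float acc = tex.b0[i];
        for (int j = 0; j < LATENT_DIM; ++j)
            acc += tex.w0[i * LATENT_DIM + j] * z[j];
        h[i] = fmaxf(acc, 0.0f);
    }
    for (int c = 0; c < 3; ++c) {               // layer 1: hidden -> RGB
        float acc = tex.b1[c];
        for (int i = 0; i < HIDDEN_DIM; ++i)
            acc += tex.w1[c * HIDDEN_DIM + i] * h[i];
        rgb[c] = acc;
    }
}

__global__ void decode_tile(NeuralTexture tex, float* out /* [width*height*3] */)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= tex.width || y >= tex.height) return;
    decode_texel(tex, x, y, &out[(y * tex.width + x) * 3]);
}
```

Those two inner loops are exactly the matrix-vector products that cooperative vectors are meant to hand off to the matrix hardware, which is why the memory savings come bundled with a compute cost.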

As for some of the other applications mentioned, they seem very plausible to me. Nvidia has demonstrated very effective AI denoising that enables realtime global illumination, which doesn't seem so different to me from how you might use AI to interpolate shadows from coarse shadow maps, for instance.

I don't really know how neural textures might ultimately compare to procedural ones, but we all know textures aren't uniformly great to begin with. In some cases they might be higher fidelity, while lower in others.
 
The article said:
Cooperative vectors rely on matrix-vector multiplication, so they need specialized hardware, such as Nvidia's Tensor cores, to operate.
To understand why they need a special feature, you have to understand that the modern GPU programming model is to treat each SIMD lane as a scalar "thread". Because these matrix operations are intrinsically multi-lane, they break that model wide open and effectively force "cooperation" between those "threads". Underneath, there's nothing special going on. If GPUs' SIMD were exposed like the SSE/AVX-style vector instructions used to implement it, then this wouldn't be a big deal. However, that would make GPU shaders harder to program, which is why the SIMD is hidden from view.
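If you want to see what that forced cooperation looks like in code, here's a minimal sketch using CUDA's existing warp-level WMMA intrinsics rather than the new DirectX/HLSL surface (which I haven't written against), since the underlying idea is the same: every call in the kernel is executed by all 32 lanes of a warp together, and the 16x16 fragments are spread across those lanes in a layout the programmer never sees.

```cuda
// Minimal warp-cooperative matrix multiply using CUDA's WMMA API
// (requires a tensor-core GPU, sm_70 or newer). This is a sketch of the
// general "threads forced to cooperate" idea, not the DirectX API itself.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// C (16x16, float) = A (16x16, half) * B (16x16, half), all row-major.
__global__ void tile_matmul(const half* A, const half* B, float* C)
{
    // These fragments are owned collectively by the whole warp;
    // no single "thread" holds a complete row or column.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);

    // All 32 lanes participate in each load; how elements map to lanes
    // is opaque, which is exactly the break from the per-lane model.
    wmma::load_matrix_sync(a_frag, A, 16);   // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);

    // One warp-wide tensor-core multiply-accumulate.
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);

    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
```

You'd launch it with exactly one warp, e.g. tile_matmul<<<1, 32>>>(dA, dB, dC); the launch shape itself is the tell that the lanes stop being independent "threads" inside those calls.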

The article said:
To that end, they can potentially work on Intel's XMX hardware as long as they meet Microsoft's requirements. They may also work on AMD's RDNA 4 AI accelerators, though RDNA 3 seems more doubtful (as it lacks AI compute throughput compared to the competition).
RDNA 4 should be in good shape for this. RDNA 3 will probably do fine, in a similar sense to how it did with ray tracing. Games that use these techniques might disproportionately impact RTX 2000 and RX 7000 GPUs, though, so you'd want to keep an eye on the 1-percentile FPS and turn the features off if those lows drop too far.
 
It seems the partnership between Microsoft and nVidia is here to stay. Previously, DirectX was built with input from all actors; now it's defined only by nVidia, and AMD and Intel can only follow.
If this had come out around the same time as DXR, back in the RTX 2000 era, I'd definitely agree. However, AMD has had over 6 years to catch up. And Intel saw the writing on the wall and put the equivalent of tensor cores into Alchemist, their first proper dGPUs. So, the timing doesn't really concern me.
 
I'll admit that I was a little dismissive of neural texture compression as blue-sky research when news first broke about it, a couple of years ago. However, when Nvidia started playing up the technology two months ago (during CES), I took a deeper look and it sounds pretty solid. It is a significant compute vs. memory tradeoff, though, and I'm not really sure where it makes the most sense to apply it.

As for some of the other applications mentioned, they seem very plausible to me. Nvidia has demonstrated very effective AI denoising that enables realtime global illumination, which doesn't seem so different to me from how you might use AI to interpolate shadows from coarse shadow maps, for instance.

I don't really know how neural textures might ultimately compare to procedural ones, but we all know textures aren't uniformly great to begin with. In some cases they might be higher fidelity, while lower in others.
Just one of my usual cynical takes.

If nVidia is willing to make it part of a standard any vendor can implement in their hardware, then I'm totally fine with it. Will they take a no-strings-attached approach like AMD did with Mantle and DX12? I honestly doubt it.

Regards.
 
If nVidia is willing to make it part of a standard any vendor can implement in their hardware, then I'm totally fine with it. Will they take a no-strings-attached approach like AMD did with Mantle and DX12? I honestly doubt it.
The article says the low-level support is being standardized in DirectX, which means you can write portable HLSL code that uses tensor cores. That's a definite step forward.

As for neural textures, that's definitely a Nvidia-proprietary thing, for the time being. We'll see if Khronos steps up with a comparable cross-vendor standard.
 
FYI, I've updated the article after receiving a statement and link from Intel:
https://community.intel.com/t5/Blog...e-Vectors-with-Microsoft-at-Game/post/1674845

So Nvidia and Intel are definitely in. AMD has not yet responded, but I would assume RDNA 4 at least will get support; it would be a serious oversight if AMD couldn't make this work on its latest GPUs!

I'm still not sure if RDNA 3 will have the necessary compute, but it seems plausible at least. If this can work on Intel Arc iGPUs, which are relatively weak compared to desktop GPUs, then the higher-spec RDNA 3 models should be okay, at the very least.
 
FYI, I've updated the article after receiving a statement and link from Intel:
https://community.intel.com/t5/Blog...e-Vectors-with-Microsoft-at-Game/post/1674845

So Nvidia and Intel are definitely in. AMD has not yet responded,
Huh. MS usually tells its hardware partners, well in advance, about any changes to DirectX. I heard they gave AMD about one quarter's advance notice of DXR (ray tracing), and that was considered abnormally short. So, AMD should've been aware this was coming for months.