DirectX 12 compatibility?

Speedstang

Well, I've been planning on purchasing a GTX 750 Ti, but I've heard people say that any pre-900 series Nvidia GPUs won't work that well with DirectX 12, and that they will be slower than with DirectX 11. Is this true?
 
Graphics cards with DX12 and Win10 will benefit from multi-core command submission once games start being designed for it. Meaning if you have a multi-core CPU, DX12 will allow work for the graphics card to be prepared on all CPU cores simultaneously instead of just the one, as with DX11.
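Just to put that in concrete terms, here's roughly what the multi-core part looks like at the API level in D3D12: each CPU thread records its own command list, and they all get handed to the GPU in one submission. This is a minimal sketch only, not from any real engine; the RecordDrawsFor helper is made up for illustration.

```cpp
// Minimal sketch of D3D12 multi-threaded command recording (illustrative only).
// Assumes 'device' and 'queue' were already created elsewhere.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < numThreads; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        // Each CPU core records its own slice of the frame in parallel.
        workers.emplace_back([&, i] {
            // RecordDrawsFor(lists[i].Get(), i);  // hypothetical per-thread draw recording
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // One submission hands all of the recorded work to the GPU at once.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```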

Pre-DX12 graphics cards will benefit a little from DX12 driver software, but DX12's superior performance in DX12 games really begins at the hardware level.

 
What seems to be the case at the moment is that NVidia's DX 12 performance is only marginally better than their DX 11 performance, which was actually quite decent. However, if asynchronous shaders are used in the game engine, NVidia's DX 12 performance tanks. This may or may not improve in the future for 900 series cards. 700 series cards don't claim the same feature set (excluding the 750 Ti, since it's a Maxwell-based card), and as such, I don't believe they even expose asynchronous shaders for game developers to attempt to use.
 


Async compute has been a feature of CUDA/nVidia GPUs since Fermi. https://www.pgroup.com/lit/articles/insider/v2n1a5.htm

NVIDIA GPUs are programmed as a sequence of kernels. Typically, each kernel completes execution before the next kernel begins, with an implicit barrier synchronization between kernels. Kepler has support for multiple, independent kernels to execute simultaneously, but many kernels are large enough to fill the entire machine. As mentioned, the multiprocessors execute in parallel, asynchronously.

That's the very definition of async compute.

http://www.guru3d.com/news-story/nvidia-wanted-oxide-dev-dx12-benchmark-to-disable-certain-settings.html

 
Actually, I'm not sure what your point is, Mousemonkey. I am not stating that Maxwell can't perform asynchronous compute, just that performance on that architecture currently takes a significant hit.

A more valid point to make would be: what is NVidia actually considering to be asynchronous shading? There are two queues on Maxwell, 1 graphics + a 31-deep compute queue, but the question still remains whether both queues can run at the same time, or only in serial, or with heavy penalties for context switching. It's already quite clear that the 31-deep queue works great on its own, but that's a lot of wasted potential if it can't run asynchronously alongside the graphics queue.
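For anyone wondering what "asynchronous shading" means from a programmer's point of view, this is roughly the API side of it in D3D12: you create a compute queue next to the normal direct (graphics) queue and feed it work separately. Whether the two actually overlap on the silicon is exactly the open question above. A minimal sketch, assuming the device already exists:

```cpp
// Minimal sketch: a compute queue alongside the direct (graphics) queue in D3D12.
// Whether work on the two queues actually overlaps on the GPU is a
// hardware/driver question, which is the whole debate in this thread.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // accepts graphics, compute, and copy work
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // accepts compute and copy work only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // Each queue then gets its own command lists, e.g.:
    // graphicsQueue->ExecuteCommandLists(1, &gfxList);  // shadow / g-buffer passes
    // computeQueue->ExecuteCommandLists(1, &compList);  // lighting, post-processing, physics
}
```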
 
We were asked to believe that the 970 is a 4 GB card, which it is, from a certain point of view, and we can argue semantics until we're old and gray, but I suspect the card really doesn't do asynchronous shading the way folks are being led to believe. NVidia really hasn't clarified the behavior of the card. We know it can run the 31-deep queue all day long in pure compute mode, but can it run both a graphics task and a compute task at the same time without heavy context switching penalties?

Ease of use of the asynchronous shader technology didn't seem to be a burden, or the dev mentioned earlier probably wouldn't have gone to the trouble. The problem is the performance of the NVidia cards when the feature is used. The cards are just better off at the moment when the feature is disabled. It really doesn't matter if the feature is supported, whether it's in hardware or if speculation turns out to be true and NVidia is handling the arbitration via a software scheduler, if the feature doesn't bring any sort of improvement to the table. At this point, all NVidia's Maxwell cards are doing is exposing a feature via their drivers for the sake of ticking off a bullet point.
 
Yes, NVidia's cards perform slightly better in DX 12. They only have issues when asynchronous shaders are used, and it remains to be seen how much developers are going to use that technology in the immediate future. The feature wasn't readily available on PC until DX 12, but it is available on both the XBone and PS4, and it was possible with the Mantle API if I understand correctly. By 2016, console ports could very well be pulling some fancy tricks with it. This is something that could stink up NVidia's 900 series later on, if developers get their software going before NVidia gets their feature sorted. I would suspect more attention is being paid to this for Pascal.
 
Guys, seriously, I can't understand any of you. I don't know what "asynchronous shading" and those other random terms are, I just wanna make sure that dx 12 won't make the 750 ti perform worse than if it was using dx 11.
 
The 750 Ti will perform at least as well under DX 12 as it does under DX 11. If you're satisfied with its performance, it will only get better. I definitely wouldn't worry about that. What we're talking about are advanced features that can unlock more performance. If that feature doesn't work, developers don't need to use it, and as it stands, the only developer really talking much about it has it turned off for NVidia's cards so that performance remains as good on NVidia hardware as on the competition.
 
Lol u guys make a lot of assumptions. Poor OP.

Well, in general it depends on which DX12 features are being used. If the game only takes advantage of the CPU overhead reduction, then the 750 Ti should be able to benefit from that optimization. But Nvidia's driver optimization in DX11 is also very good. The AoS developers themselves admitted they were surprised to see Nvidia's efficiency in DX11.

As for the async compute stuff, some people assume that async compute is a mandatory part of DX12, which it is not. Right now I see a lot of debate about it, especially at the Beyond3D forums. Some even say that AMD's async compute isn't really part of DX12, because MS's definition of async compute and AMD's definition are not the same. Hence they say the async compute used in AoS is probably something exclusive to GCN hardware. We will know the truth once Nvidia comes up with their solution.
 
Who is making the assumption that asynchronous compute is a DX 12 requirement? It certainly isn't a requirement at all; that's the wrong way to look at it. It's a feature that DX 12 exposes for developers to wring more performance from targeted platforms. A DX 12 compliant card doesn't have to support asynchronous shaders any more than a car needs a radio or a heater to qualify as a car, but dismissing support for it may end up costing more than a trivial amount.

There are only so many ways you can define asynchronous compute, or asynchronous shaders. In previous DX releases, graphics and compute tasks were scheduled in a serial fashion. In DX 12, that's no longer a binding limitation, unless of course you're on NVidia's hardware. NVidia's feature, as it appears now, seems to tank performance in part because it routes the asynchronous compute tasks through the CPU rather than the GPU. If those CPU cycles are needed, or there are a lot of compute tasks being done, that can be a significant issue.
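For what it's worth, the "no longer serial" part looks like this at the API level: the two queues run independently, and the application only serializes them where it explicitly asks to, with a fence. A minimal sketch, assuming the queues and command lists were created elsewhere; how much actually overlaps underneath is up to the hardware and driver, which is the whole argument here:

```cpp
// Minimal sketch of cross-queue synchronization in D3D12 (illustrative only).
// Only the explicit fence below serializes the two queues; otherwise the API
// allows their work to overlap.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void OverlapComputeWithGraphics(ID3D12Device* device,
                                ID3D12CommandQueue* graphicsQueue,
                                ID3D12CommandQueue* computeQueue,
                                ID3D12CommandList* computeWork,
                                ID3D12CommandList* dependentGraphicsWork)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off compute work; it may run alongside whatever the graphics
    // queue is already doing for this frame.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence.Get(), 1);

    // Only the pass that consumes the compute results waits on the fence;
    // everything submitted before this point is free to overlap.
    graphicsQueue->Wait(fence.Get(), 1);
    graphicsQueue->ExecuteCommandLists(1, &dependentGraphicsWork);
}
```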

You have to make assumptions at the moment, as NVidia has not addressed the issue openly, only told the developer to turn off asynchronous shaders. You also have to make an assumption if you believe NVidia will come up with a solution for Maxwell beyond what they already have. It may be an impossibility on current Maxwell hardware without performance penalties.
 
That was pretty good, you certainly know more than me. Actually, where do you want me and my pitchfork? We'll do a barbaric rally ;D

Seems DX12 is going to expose some flaws for Nvidia and give the Game of Thrones to AMD. Am I close in saying that?
 
You're welcome to pitch your tent anywhere you like, but really, it may be NVidia holding up PC gaming this generation if they don't address this. Any DX 12 game with heavy use of compute shaders is going to see gains from asynchronous shading, unless of course you just don't have the resources sitting around. The reason we haven't seen heavier reliance on this is simply because the ability to program the feature wasn't available before the XBone, PS4, Mantle, and DX 12. As it stands, AotS has a separate render pipeline from AMD's, thanks to their, *cough*, support for asynchronous shaders.
 
Sure, Mousemonkey. I'm basing my current information on what a developer has shared from their experience working on a project and working with NVidia on that project. I'm also basing my information on the results of the test program that was written specifically to determine whether AMD's or NVidia's GPU architectures are performing compute tasks asynchronously alongside graphics render tasks. When there's more information available, more tests, more games, etc., I'll include that too. Until then, would you rather we base our information on NVidia's silence on the matter?

On the other hand, if several of the first-gen DX 12 titles include async shader tech, or even if the only title released to date does, shouldn't that be taken as an indication there might be something to it? I suspect the only reason developers would turn away from the technology, which essentially gives them free performance gains, would be because NVidia's cards are unable to support it. But this could go down along the lines of PhysX, or it may cause other issues such as the need for separate rendering paths for each vendor, meaning most engines will be heavily weighted toward one brand of cards.

Like it or not, DX 12 looks to remove a lot of driver optimization tricks from the performance equation, meaning NVidia may lose their advantage for a while until they can get everything they need baked into the hardware of their GPUs. NVidia clearly has great DX 11 cards. We really can't say they have great DX 12 cards, though. What we can say, however, is that they have cards that don't hold up as well in AotS as AMD's do, and that appear not to support compute alongside graphics.

Any idea where that Ark DX 12 patch is at? Last I heard the dev was waiting on NVidia and AMD because of driver issues. I bet they are actually waiting on one company more than the other, but a quick search really didn't reveal a whole lot. Would be nice to add this title's numbers to what we're already seeing. I agree that getting a broader picture would be ideal. Until then, we go with what we have.

Time will tell.
 
Lol, Nvidia has great DX11 cards now, when in the past people said Nvidia's cards were at a disadvantage in DX11 because they weren't fully compliant with the DX11.1 spec. To be exact, Nvidia didn't hold any inherent advantage with DX11; it's just that they support all the gaming-related features of DX11. It was AMD that deemed some DX11 features "broken" and decided to go with Mantle. Were AMD's cards really at a disadvantage in DX11? If so, we should see a similar gap between the Fury X and the Titan X/980 Ti in every DX11 game, like the one shown in the AoS benchmark under DX11. Why is AMD's DX11 performance always poor in games featuring Mantle and DX12, but on par with the Nvidia equivalent in games that only have a DX11 path?

And there is still a lot of debate regarding AMD's ACEs. Some even say the feature is exclusive to AMD's GCN hardware and not really part of DX12, despite AMD claiming so. Ever since AMD started their Mantle initiative, there's no doubt they've wanted to leverage the fact that all the big consoles are using their hardware. This will be interesting to watch for sure. Will AMD be able to turn the tables on the PC side of things when Nvidia holds the majority of the market share? Nvidia's influence is nothing to scoff at. History has proven this, and their ability to adapt as well.
 
For CPUs we actually have VIA, but they don't sell regular consumer stuff. 😛

And I've already heard Zen will be delayed to Q4 2016 at the earliest due to problems at GF. For GPUs we actually have multiple players, but only Nvidia and AMD (ATI) decided to duke it out on the PC side... plus Intel on the integrated side. It would be fun to see ARM (Mali) and Imagination Technologies (PowerVR) enter the PC market. At the very least, IMG had experience in the desktop GPU market before they went with their current business model.