New Diagnostic Tools Available For DirectX 12 Developers

I remember when people were rushing to buy AMD cards because of the Mantle promotion. It takes a LOOOOOONNNG time to shift to a new API successfully. Of course it's also hard to break from DX11 so mixing new and old complicates things too.
 

alextheblue

Distinguished
To be fair, Mantle became Vulkan. But yes it takes a while to see extensive use of any new API, doubly so when the new APIs are low-level and quite different from their predecessors.
 

blppt

Distinguished
I think the most disappointing thing about DX12 has been that no real gameplay titles have shown any significant advantage in using the new API over DX11. What happened to "OMG DX11 is going to choke on high draw call games that are RIGHT AROUND THE CORNER!!"?

I have been impressed with the smooth performance of Vulkan-based Wolf 2 on my systems, though not with its general lack of true mGPU support. However, given that there is no OGL/DX11 version, it's hard to say whether Vulkan really provided much of a benefit.
 


You'd better look at Doom then, as it uses the very same engine Wolf2 is built on - but it also provides an OpenGL 4.5 backend (which compares rather well with DX11, according to developers used to both backends). The boost AMD cards get on Vulkan in Doom is nothing to sneeze at, and is comparable to what titles like Battlefield 4 (Mantle) would get over the DX11 backend.

The main problem is that, currently, most engines are built using the DX11 feature set as a target (and Nvidia cards are geared towards DX11 features), and added backends usually end up mapping DX11 features to DX12 (or Vulkan).

According to a Croteam developer (their Serious Engine was initially developed for DX9, and includes backends for DX11, OpenGL and Vulkan), mapping such an "old" engine onto one of the new APIs yields little performance improvement (lower draw-call overhead is one gain); however, developing an engine to actually make use of an API's strengths can very quickly provide huge performance improvements.

Said improvements come mainly from drastically lowering CPU overhead: async compute is one such feature, delegating task management to the GPU when the hardware can handle it (as on GCN 1.1+ AMD cards); multiple queues are another. Note that a game that can already fully load the GPU under DX11 without any CPU bottleneck won't see higher FPS from DX12/Vulkan. However, it will get a much lower, better-distributed CPU load.

Just like Wolfenstein II exhibits.
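
To give a concrete feel for the "multiple queues" point above, here is a minimal Vulkan sketch (the same idea exists in DX12) of how an engine might look for a dedicated compute queue family so compute work can overlap the graphics queue. The function name and the surrounding setup are illustrative, not taken from any particular engine:

```cpp
// Minimal sketch: locate a queue family suited for async compute in Vulkan.
// `physicalDevice` is assumed to come from earlier setup.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdint>

uint32_t FindAsyncComputeFamily(VkPhysicalDevice physicalDevice) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, families.data());

    // Prefer a family that supports compute but not graphics: submissions to it
    // can overlap with the main render queue instead of serializing behind it.
    for (uint32_t i = 0; i < count; ++i) {
        bool compute  = families[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
        bool graphics = families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
        if (compute && !graphics)
            return i;
    }
    // Fall back to any compute-capable family (work still runs, just not async).
    for (uint32_t i = 0; i < count; ++i)
        if (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT)
            return i;
    return UINT32_MAX; // no compute support at all (shouldn't happen on real GPUs)
}
```

DX12 exposes the same idea through compute command queues (D3D12_COMMAND_LIST_TYPE_COMPUTE); DX11 and OpenGL have no equivalent, which is why async compute only pays off on the newer APIs.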
 
I really hope DX12/Vulkan GAME ENGINES get a lot more optimizations, such as a HUD kept at desktop resolution over a dynamic game resolution (i.e. a 4K HUD with a ~1400p average render resolution), similar to more modern CONSOLE games.

That, and other DYNAMIC features for both the CPU and GPU that work to optimize the FRAME TIME.

Put another way, let's have the game ENGINE (and thus the game) automatically adjust to the PC hardware so our games are smooth with little to no tweaking of settings.

*I am so sick of starting a game, tweaking it, then re-tweaking it when it stutters, then finding areas so demanding that tweaking doesn't help, and so on...
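
For what it's worth, the dynamic-resolution idea above is mostly engine-side bookkeeping. Here is a rough, illustrative sketch (all names, targets and constants are made up for the example) of how an engine could scale only the 3D pass from the measured GPU frame time while keeping the HUD at desktop resolution:

```cpp
// Illustrative only: adjust the 3D render scale from the last GPU frame time,
// keep the HUD at native resolution. Constants are placeholders, not tuned values.
#include <algorithm>

struct DynamicResolution {
    float scale    = 1.0f;   // fraction of native resolution used by the 3D pass
    float minScale = 0.5f;   // never drop below half resolution
    float targetMs = 16.6f;  // frame budget, e.g. 60 fps

    void Update(float lastGpuMs) {
        float ideal = scale * (targetMs / lastGpuMs); // scale that would have hit budget
        scale += 0.15f * (ideal - scale);             // damped step to avoid oscillation
        scale  = std::clamp(scale, minScale, 1.0f);
    }
};

// Per frame (pseudocode around a hypothetical renderer):
//   dynres.Update(gpuTimerMs);
//   RenderScene(width * dynres.scale, height * dynres.scale); // scaled 3D pass
//   UpscaleToBackbuffer();
//   RenderHUD(width, height);                                 // HUD stays at native res
```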
 

Druidsmark

Commendable
Nvidia users benefit from Vulkan in Doom as well. I switched from DirectX 11 to Vulkan and the difference in performance was noticeable on my Zotac GeForce 1060 6GB. Using Vulkan I can max the game out and get a solid 60fps everywhere in the game at 1080p, while using DirectX 11 I notice a slight hit in FPS on my PC. Right now I would say Vulkan is better than DirectX; hopefully once we start to see more optimized DirectX 12 games we can make a better comparison between the two. But with so few DirectX 12 games right now it is hard to say which is really better. I do know Vulkan seems to run better when I switch to it in the games that support it over DirectX 11, for now, on my PC.
 

Druidsmark

Commendable
Sorry, clearly I got that wrong then. I do remember that switching to Vulkan in the demo worked much better for me.
 

hannibal

Distinguished
Yep, the point is that DX12 and Vulkan allow things that would ruin DX11 and earlier, so there is nothing to compare against.
Games that can run on DX11 and earlier don't use the features where DX12 or Vulkan is superior.
 


On Nvidia cards, Vulkan eases the CPU bottleneck, but if your CPU is powerful enough you'll see no improvement over OpenGL. On AMD cards, you get a 20-40% performance boost even if you weren't CPU-limited, because the hardware is better utilized.

DirectX 12 and Vulkan are pretty much the same beast: the former was Microsoft's "me too" when AMD came up with Mantle, while Vulkan is what the Khronos Group produced after AMD provided Mantle free of charge as a template for their "next-gen API" (Intel said it was interesting; Nvidia complained that it wasn't fair to them).

In all cases (Mantle, DX12 and Vulkan), these APIs aim to lower CPU overhead and driver complexity by giving developers more direct access to the graphics hardware. Engines become harder to program, but they are also able to directly manage hardware resources.
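
To illustrate what "directly manage hardware resources" means in practice, here is a minimal Vulkan sketch (error handling omitted; `device`, `memoryTypeIndex` and the function name are assumptions standing in for earlier setup) of the application itself allocating and binding the memory behind a buffer, work a DX11 or OpenGL driver would otherwise do behind the scenes:

```cpp
// Minimal sketch of explicit resource management under Vulkan.
#include <vulkan/vulkan.h>

VkBuffer CreateVertexBuffer(VkDevice device, uint32_t memoryTypeIndex, VkDeviceSize size) {
    VkBufferCreateInfo bufInfo{VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO};
    bufInfo.size  = size;
    bufInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
    VkBuffer buffer;
    vkCreateBuffer(device, &bufInfo, nullptr, &buffer);

    // The engine queries requirements and explicitly allocates/binds memory --
    // decisions a DX11/OpenGL driver would make (and re-validate) on its own.
    VkMemoryRequirements req;
    vkGetBufferMemoryRequirements(device, buffer, &req);

    VkMemoryAllocateInfo allocInfo{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
    allocInfo.allocationSize  = req.size;
    allocInfo.memoryTypeIndex = memoryTypeIndex;
    VkDeviceMemory memory;
    vkAllocateMemory(device, &allocInfo, nullptr, &memory);
    vkBindBufferMemory(device, buffer, memory, 0);
    return buffer;
}
```

The point of pushing this onto the engine is that the driver no longer has to guess at or re-validate these decisions every frame, which is where much of the older APIs' CPU overhead comes from.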
 


Not exactly - it is possible to write a game engine with multiple backends. Provided said engine performs its own resource allocation on Vulkan/DX12/Mantle and delegates it to the driver on OGL/DX11/DX9, and makes use of multiple queues when available or squeezes everything into a single queue otherwise, we could get a good idea of how much better the next-gen APIs perform compared with the current-gen ones.

However, such an engine would have few reasons to exist, as all current hardware and OSes support either DX12 or Vulkan.

As for Vulkan's mGPU support, it's coming. However, considering the many kinds of mGPU modes available and how little interest there is in supporting them (legacy modes suck, and the newer one isn't interesting to card makers because it allows mixing'n'matching GPUs from multiple manufacturers - Nvidia doesn't like it, so that leaves AMD and Intel), I'm not sure how much of a holdup it actually is.
 

alextheblue

Distinguished
In general I'd agree that traditional mGPU support is mostly dying, and given the spotty record of such implementations (in terms of frame time, compatibility, performance) I'm not sure I care.

But there are a couple of good use cases. The first is VR: It's easy to implement (at least for matched GPUs), and obviously scaling is going to be awesome. The second would be utilizing the iGPU to offload some postprocessing work and provide a little boost. I've seen demos and it's got potential, and would be more reliable than earlier attempts at this sort of thing. There are lots of systems with an otherwise dormant iGPU. But I think it would only really see widespread support if a high-profile game implemented it successfully first.
 


That's the thing: these setups are mighty interesting for the consumer, but VR isn't very widespread yet, and delegated processing allows a mix'n'match that card makers have little interest in optimizing for. AMD may do it, especially with Intel GPUs on top of their own (that's why they spearheaded the HSA initiative, and it's one of the reasons behind Crossfire-X), but every gamer claiming to be "it" says that it's Nvidia you must go to.

Considering that Nvidia is killing off SLI on anything but their premium products and is going as far as introducing artificial restrictions in their drivers, such as not allowing their cards to run 3D if a card from a different maker is found in the system, and that they are the market leader, why would a studio develop a 3D engine with evolved mGPU support if it can't be used on the majority of systems?

In short, improved mGPU will not become a thing until AMD sells more gaming 3D cards than Nvidia does. Then, game developers will need 2-3 years to develop engines making use of that technology. So, maybe in 2025...
 

blppt

Distinguished
"You'd better look at Doom then, as it uses the very same engine Wolf2 is built on - but it also provides an OpenGL 4.5 backend (which compares rather well with DX11, according to developers used to both backends). The boost AMD cards get on Vulkan in Doom is nothing to sneeze at, and is comparable to what titles like Battlefield 4 (Mantle) would get over the DX11 backend."

We have yet to see a game that was designed from the ground up to take full advantage of DX12/Vulkan, agreed (even Wolf, I believe, started life as an OGL or DX11 title). But the question is, will developers really take the time to (apparently) take on the more intensive workload of working with DX12, while also ignoring a chunk of their consumer base (Windows 7 and 8 users)? Vulkan would make infinitely more sense, given that it works on all OSes of note (except for a certain Cupertino company's offerings), but we've all seen the mighty hand of MS and its army of SDK writers convince people to take the less sensible route and develop for DirectX.
 

blppt

Distinguished
"In general I'd agree that traditional mGPU support is mostly dying, and given the spotty record of such implementations (in terms of frame time, compatibility, performance) I'm not sure I care."

You're referring to AFR, I guess---but true mGPU has been one of the more exciting possibilities of DX12/Vulkan, and if it were ever adopted, most of the major issues with past multi-card usage would disappear, e.g. AFR microstutter. Apparently, though, it is not as trivial to implement in DX12/Vulkan as I'd hoped, or developers wouldn't need to spend so much time testing it after the original launch before offering it at all. DX:MD comes to mind---it took them at least a month, from what I recall, to get DX12 mGPU into an official release.

I'm sure Nvidia/AMD are in no hurry to see DX12/Vulkan mGPU become widespread, as theoretically it could eat into sales of their high-end cards, given that two 1070s in a good non-AFR mGPU setup would likely beat a single 1080 Ti/Titan and be cheaper (in the case of the Titan, anyway). So my guess is that they're not giving devs their best effort to help implement true mGPU.
 

blppt

Distinguished
"Considering that Nvidia is killing off SLI in anything but their premium products and are going as far as introducing artificial restrictions in their drivers, such as not allowing their cards to work in 3D if another card from a different maker is found in the system, and that they are market leaders, why would a studio develop a 3D engine supporting evolved mGPU support if they won't be able to use them on the majority of systems?"

Nvidia is indeed slowly killing off SLI, but from what I understand, that does not affect DX12/Vulkan *true* mGPU, which, if it ever became widespread, would solve the major issues with AFR-based multi-card rendering (microstutter, etc.) - but apparently it's not as easy to implement in game engines as I'd hoped.
 

blppt

Distinguished
"Those games that can run dx11 and earlier does not use features where dx12 or Vulcan is superior."

Not entirely true---in DX:MD, for example, the developers have stated that max settings in DX11 and DX12 modes are visually identical (driver bugs in DX12 mode notwithstanding).
 


Features may not affect rendering; they may, however, affect speed. Case in point: async compute is unusable in OGL and DX11.
 


And as I said, there is very little incentive for game developers to spend money on something that is nerfed at the driver level (Nvidia does NOT allow this kind of mGPU with their cards, shutting their 3D cards off if another maker's chip is detected as in use when a 3D context is created). Not only is it difficult to program (AotS is the only game as of now that does "true" mGPU, and it has been in development for years), it also applies to a very limited subset of systems: AMD APU + AMD GPU.
 


That's why Wolf2 is interesting, as it uses Vulkan (and is thus compatible with Win7/8); and while the engine is indeed based on id Tech 6 (a natively OpenGL engine), id did spend a lot of time rewiring it to make use of several elements that do not exist in DX11: multiple queues (OpenGL 4.5 can actually be hacked around to support them) and async compute for shadow processing (which they managed to extract from the main render loop and send off to the GPU's schedulers).

Now, id Tech 6 isn't a "native" Vulkan engine - but it still manages to use 100% of a GPU's power and can spread its load across all CPU cores (on a 6C/12T CPU). So, while it might theoretically be possible to squeeze an extra 10-15% of speed from a complete engine rewrite, we may need a few more hardware generations before doing so actually becomes mandatory.
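
That "spread its load across all CPU cores" part is something the older APIs simply don't allow: DX11 and OpenGL funnel almost everything through one driver thread. Below is a minimal, illustrative Vulkan sketch of the usual pattern (one command pool per worker thread, secondary command buffers recorded in parallel); `RecordDrawsForChunk` and the surrounding setup are hypothetical:

```cpp
// Illustrative sketch: parallel command buffer recording across CPU threads.
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

struct WorkerContext {
    VkCommandPool   pool = VK_NULL_HANDLE;
    VkCommandBuffer cmd  = VK_NULL_HANDLE;
};

void RecordInParallel(VkDevice device, uint32_t queueFamily,
                      std::vector<WorkerContext>& workers) {
    std::vector<std::thread> threads;
    for (auto& w : workers) {
        threads.emplace_back([&w, device, queueFamily] {
            // One pool per thread: pools are not thread-safe, so no locking is needed.
            VkCommandPoolCreateInfo poolInfo{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
            poolInfo.queueFamilyIndex = queueFamily;
            vkCreateCommandPool(device, &poolInfo, nullptr, &w.pool);

            VkCommandBufferAllocateInfo allocInfo{VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
            allocInfo.commandPool        = w.pool;
            allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_SECONDARY;
            allocInfo.commandBufferCount = 1;
            vkAllocateCommandBuffers(device, &allocInfo, &w.cmd);
            // RecordDrawsForChunk(w.cmd);  // hypothetical: record this thread's draws
        });
    }
    for (auto& t : threads) t.join();
    // The main thread then gathers the secondary buffers with vkCmdExecuteCommands()
    // and submits a single primary command buffer to the queue.
}
```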
 

blppt

Distinguished
"Features may not affect rendering, however they may affect speed. Case in point: async compute is unusable in OGL and DX11"

Not for DX:MD; I believe both Nvidia and AMD still show higher framerates in DX11 mode than DX12. AMD might have a higher minimum framerate in DX12 mode, though, I can't remember.

Also, I could be mistaken, but although you cannot mix brands/models of cards for DX:MD, I do believe it does NOT use AFR for its DX12 mGPU mode, which is primarily my interest for the future of mGPU gaming.

 

alextheblue

Distinguished
Well, as far as when it will be supported, you may be correct. But you're wrong about needing Nvidia's blessing. What are they going to do, disable their graphics card in the presence of an iGPU? With DX12 at least (and probably Vulkan) you actually don't need the support or blessing of Nvidia... at all. Explicit multiadapter + unlinked GPUs. There are already working demos of this. Here let me dig up a link...

http://www.pcgamer.com/directx-12-will-be-able-to-use-your-integrated-gpu-to-improve-performance/

That's with an Nvidia card. 10% boost with some-or-another integrated Intel graphics.
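
For reference, the "unlinked GPUs" flavor of explicit multiadapter really is vendor-agnostic at the API level: the application enumerates every adapter (discrete and integrated) and creates an independent D3D12 device on each one, then decides itself what work (e.g. postprocessing) to send where. A minimal sketch of that enumeration step, with error handling omitted:

```cpp
// Sketch: create one independent D3D12 device per hardware adapter.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateAllAdapters() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // one independent device per GPU
    }
    return devices;
}
```

Moving an intermediate render target between the devices is then done explicitly by the application (e.g. via cross-adapter shared resources), which is exactly why no blessing from either vendor's driver team is required.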
 


I'm not saying it's impossible now (AotS is proof), I'm saying there won't be widespread adoption until the ecosystem is ready; and considering a major GPU provider has dug its heels in, there won't be any artificial acceleration on that trend.

Case in point: if Nvidia is ready to disable its own cards when a competitor's board is found in the system, what's to stop them from doing the same if they decide that a newer iGPU is too powerful for their taste? Like, for example, an embedded Vega iGPU on an Intel processor package...
 
SLI being "killed off" by NVidia is a bit misleading.

First of all, it's AFR (Alternate Frame Rendering) which is a subset of SLI. Things like SFR (Split Frame Rendering) are still SLI (or Crossfire) and perfectly valid options.

AMD has also announced it's dropping support for more than 2xCrossfire (AFR really), and that's due to the extremely limited number of 3+ card owners. It just makes sense.

It's also the GAME ENGINE shift toward code that doesn't support AFR very easily that is behind this, not NVidia or AMD. Heck, these companies WANT you to buy multiple graphics cards, so it's definitely not in their interest to kill off multi-GPU. (There are good reasons to run code on a SINGLE GPU, such as analyzing consecutive frames for repeating patterns, similar to how video compression works - some forms of anti-aliasing, for example.)

NO. What we have is the killing off of 3+ card setups, as well as AFR, but we SHOULD eventually see multi-GPU support in the form of Split Frame Rendering (2+ cards splitting each frame into pieces to work on separately). It's a lot of work today, though, so most of that work really needs to be built into the game engine by default before game developers even start creating a game with it.
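
As a purely illustrative sketch of the split-frame idea (nothing here comes from a real engine; RenderSliceOnGpu and the compositing step are hypothetical), the frame is simply divided into per-GPU regions and reassembled afterwards:

```cpp
// Illustrative only: divide a frame of `height` rows among `gpuCount` GPUs.
#include <cstdint>

struct Rect { int32_t x, y, width, height; };

Rect SliceForGpu(uint32_t gpuIndex, uint32_t gpuCount, int32_t width, int32_t height) {
    int32_t rows  = height / static_cast<int32_t>(gpuCount);
    int32_t start = rows * static_cast<int32_t>(gpuIndex);
    // The last GPU absorbs the remainder so the slices cover the whole frame.
    if (gpuIndex == gpuCount - 1)
        rows = height - start;
    return {0, start, width, rows};
}

// Per frame (pseudocode):
//   for each gpu i: RenderSliceOnGpu(i, SliceForGpu(i, gpuCount, w, h));
//   CompositeSlicesToBackbuffer();  // copy each slice to the presenting GPU
```

The hard part, and the reason it needs to live in the engine, is balancing the slices and sharing the scene data across GPUs, not the splitting itself.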

As for supporting AMD + NVidia together, that's not even worth discussing at this point, since game engines aren't anywhere near supporting it easily, and with video card DRIVER support still being important, so is quality control. At some point we'll start significantly reducing the role video card drivers play and shift that control towards the game engines, but we aren't quite there yet.
 