renz496 :
the thing with DX12 is that the job was supposed to be done by the game maker. that was the main pitch behind DX12: more direct control for the game developer instead of relying on IHV support. but realistically, if you've been paying attention to what's happened since DX12's introduction with Windows 10, the majority of game developers have no desire to do it themselves. recently Gears of War 4 rolled out the long-awaited multi-GPU support everyone had been waiting for. but to use multi-GPU you still need one of Nvidia's latest drivers that includes multi-GPU support for that game. and here i thought DX12 multi-GPU was totally independent from IHV drivers (like how it was done in Ashes of the Singularity, where two GTX 1060s were able to work in tandem despite Nvidia officially not supporting SLI on that card), meaning that even on old drivers multi-GPU should work as long as you patch the game to the latest update. that's why some people said DX12 will be the last nail in the coffin for multi-GPU: it pushes the responsibility from the one that wants to push the tech (the GPU maker) onto the one that tries to avoid the tech as much as possible (the game maker).
That's not exactly true - a game had to be tailored for multi-GPU in older revisions of DX, and the GPUs usually had to be identical (Nvidia's SLI) or at the very least from similar generations (AMD's CrossFire) to work at all - never mind the need for validated drivers, which were a prerequisite for it to work, or the limited kinds of load sharing you could perform: alternate-frame rendering and scanline/split-frame rendering were pretty much all you'd get. DX12 allows compositing-style rendering, where GPUs work on different objects and one of them then composites the results into the final frame.
DX12 makes it so that you can mix and match whatever hardware resources you have, provided you go and make use of them the same way you'd go and detect what CPU cores you have, how fast they are, and how many of them there actually are. Of course, that requires the game maker to detect and probably benchmark the capabilities of whatever GPU hardware it can find (it does add complexity), but this is neither an unknown (see CPU cores) nor repetitive: once the graphics engine is geared towards this kind of detection and balancing, it's DONE - no need to look further. So of course engine makers have some work ahead of them, but most of them (or at least, the good ones) actually enjoy having more capabilities: a straightforward API geared towards harnessing more resources is much easier to work with than finding workarounds and hacks to do the same.
Now of course, a hardware maker simply shutting down its GPU when another vendor's is used for rendering makes all of this moot: Nvidia didn't approve of their GPUs being used for PhysX computation while actual rendering was done on an AMD card, and wrote a shutdown routine into their drivers - yet this is exactly the kind of load balancing DX12 (and soon, Vulkan) would allow.