Question: Is it possible that multi-dGPU systems could be built soon?

Gamefreaknet

Similar to the concept of dual-GPU systems, like an Intel iGPU paired with an Nvidia or AMD card, is it possible that systems will soon be adapted to support multiple dedicated GPUs (disregarding case size)?
The idea comes from the fact that older GPUs, like the Nvidia 10-series lineup and similarly aged cards, are soon to become "obsolete", or simply not up to the task of handling the evolving graphics that new cards are being built for. Just as a dedicated GPU and an iGPU can share the load based on the system's "requests", couldn't this someday be done by pairing two or three dedicated GPUs (from the same manufacturer, whether Nvidia, AMD, or even Intel Arc)?
This would mean that instead of throwing out RX or GTX 10-series cards, you could add another GPU to lighten the load on the single GPU you might be running today. So whilst you might not have ray tracing like you would if you bought an RTX card, your system (I'm assuming), if graphically pressured, would have a lighter load and could theoretically handle more games at better performance (disregarding the water-cooling or airflow issues that may arise from a multi-dGPU setup).
Of course, for such games to actually display, and assuming HDMI is still in use, it would likely need a multi-HDMI connector for the monitor(s) being used, and the data-transfer load would likely be higher, meaning the display/HDMI cable could limit the quality the GPUs can deliver.
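Rough numbers on that bandwidth worry (just a back-of-the-envelope sketch for an uncompressed 4K 60 Hz, 10-bit signal; all figures illustrative):

```cpp
// Back-of-the-envelope check of the cable-bandwidth concern.
// Real links add blanking intervals and encoding overhead on top of this.
#include <cstdio>

int main() {
    const double width = 3840, height = 2160;  // 4K resolution
    const double refresh_hz = 60;
    const double bits_per_pixel = 30;          // 10-bit RGB

    // Raw pixel data rate, ignoring blanking and link encoding.
    double gbps = width * height * refresh_hz * bits_per_pixel / 1e9;
    printf("uncompressed 4K60 10-bit: ~%.1f Gbit/s\n", gbps);  // ~14.9

    // HDMI 2.0 carries ~14.4 Gbit/s of data (18 Gbit/s raw with 8b/10b
    // encoding), so this signal already brushes that ceiling; HDMI 2.1
    // (~42 Gbit/s effective) has plenty of headroom. For what it's worth,
    // SLI/Crossfire composited the final frame on one card, so a single
    // cable still carried the output.
    return 0;
}
```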
Is something like this already possible or in development? Or is the answer just "rely more and more on newer GPUs, while the old GPUs we throw out, which still work, aren't powerful enough on their own, so we buy new GPUs instead of combining old GPU performance", even though combining them could in theory deliver performance similar to newer, more "current" GPUs?
 
Also, the new Intel cards seem to support ray tracing. AMD has a way to go, but manufacturers used to support SLI and Crossfire and moved away from doing that, so I doubt they go back. Besides, modern cards are getting powerful enough that I don't know that there's a major need to run dual cards these days.
 

Eximo

Well, the 3090 and 3090 Ti still support NVLink SLI. AMD dropped Crossfire a while back, after Vega. The RTX 4090 does not support it (though some PCB photos show the traces were laid out on the board, just not populated/connected).

Most people expected DX12 multi-GPU to take over, but that didn't happen. Too much work/cost for developers for the 0.1% of users who would use it.

GPUs are getting so large and power hungry at the moment that it just doesn't make sense. AMD is already going multi-chip, and the rest are expected to follow suit within a few generations. That eliminates a lot of the problems of getting data between two cards.

So no software support, no hardware support. A lot of motherboards no longer support SLI or Crossfire either. Even some of the high end boards don't bother with SLI certification.
 
Various systems have been attempted: SLI and Crossfire for running essentially identical GPUs together; LucidLogix's Virtu, NVIDIA's GeForce Boost, and AMD's Hybrid CrossFireX for using an iGPU with a dGPU; and lastly DirectX 12's multi-GPU support (which was used in Ashes of the Singularity). None of them took off, and most are practically abandoned.
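For reference, here's roughly what the DX12 "explicit multi-adapter" model looks like from the application side. A minimal sketch, assuming the Windows 10 SDK, with error handling trimmed; everything after enumeration is the developer's problem, which is exactly why almost nobody shipped it:

```cpp
// Minimal sketch of the first step of DirectX 12 explicit multi-adapter:
// enumerating every GPU the OS exposes and creating a device on each.
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

int main() {
    using Microsoft::WRL::ComPtr;
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        // Each physical GPU becomes its own independent ID3D12Device;
        // D3D12 provides no automatic load balancing between them.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"GPU %u: %s\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    // From here the app must create queues, allocators, and heaps per
    // device and hand-schedule every cross-adapter copy itself.
}
```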

The biggest problem is that graphics rendering is a real-time compute problem. That is, if you need to target 60 FPS, you must generate a frame within 16.6 ms. Anything that adds latency, jitter (variance), or unpredictability starts screwing this up. So it's not a good idea to pair a stronger GPU with a weaker one, because the weaker one, no matter what you do, will add a significant amount of all three. You can't have an iGPU render whole frames alongside the dGPU, because you'll get stuttering. You can't really offload tasks from the dGPU to the iGPU either, because depending on what you give the iGPU, it'll take as long as (or longer than) it would have on the dGPU, plus there's transit time between system memory and VRAM, unless the dGPU can directly access system memory. The only real solution would be to figure out how to divvy up the work intelligently, but that requires extra CPU overhead, and it was rarely successful in the systems that existed.
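To put numbers on the budget argument (illustrative figures only, not benchmarks of any real cards):

```cpp
// Back-of-the-envelope illustration of the real-time frame budget.
// In alternate-frame rendering the GPUs take turns, so the slower one
// sets the pace for every other frame -- that's the jitter problem.
#include <cstdio>

int main() {
    const double budget_ms = 1000.0 / 60.0;  // ~16.6 ms to hit 60 FPS

    double fast_gpu_ms = 10.0;  // hypothetical dGPU frame time
    double slow_gpu_ms = 22.0;  // hypothetical older card / iGPU frame time

    printf("budget per frame: %.1f ms\n", budget_ms);
    printf("fast GPU frame:   %.1f ms (%s)\n", fast_gpu_ms,
           fast_gpu_ms <= budget_ms ? "on time" : "late");
    printf("slow GPU frame:   %.1f ms (%s)\n", slow_gpu_ms,
           slow_gpu_ms <= budget_ms ? "on time" : "late");

    // The average (16 ms) looks fine on paper, but every other frame
    // misses the deadline, and the frame-to-frame swing is what the
    // player perceives as stutter.
    double avg = (fast_gpu_ms + slow_gpu_ms) / 2.0;
    printf("average: %.1f ms, frame-to-frame swing: %.1f ms\n",
           avg, slow_gpu_ms - fast_gpu_ms);
}
```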

This is on top of getting software developers to figure out how to use it. It's already a massive pain to make sure that what you submit to a single GPU is optimized, and now you'd have to throw a multi-GPU system on top of that.

The only reason this isn't really a problem for CPUs, in terms of ARM's big.LITTLE or Intel's hybrid approach, is that most of the things we run on a CPU aren't real-time problems. We'd like sooner completion times, but we don't need things done by a deadline. In fact, a lot of modern OSes will delay work just enough to batch everything together and increase power efficiency.

The closest thing we'll get to a multi-GPU system, though, is AMD's chiplets in the RX 7000 series, but that's not really the same thing, since I'm pretty sure it'll be laid out like Threadripper and Epyc.
 
The GPU vendors have pretty much dropped support for their own implementations, and that makes sense, as newer APIs all feature direct hardware access, which breaks any attempt at driver-layer abstraction. This doesn't mean multi-GPU won't work, just that the GPU drivers won't do it for you. Software vendors need to build their own support for handling multiple GPUs; the CAD/CAM or simulation folks might do it with OpenGL, but don't expect any gaming engines to bother (see the sketch below).
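For the curious, this is roughly what "build your own support" means in a modern explicit API. A minimal Vulkan sketch, assuming a Vulkan 1.1 loader: vkEnumeratePhysicalDeviceGroups is the API-level successor to SLI/Crossfire linking, and the driver only reports which GPUs can be linked; distributing work across them is entirely the application's job.

```cpp
// Minimal sketch: discovering device groups in Vulkan 1.1.
// A linked pair (e.g. NVLink) shows up as one group with two devices;
// most consumer systems just report one group per GPU.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;  // device groups are core in 1.1
    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        count, VkPhysicalDeviceGroupProperties{
                   VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    for (const auto& g : groups) {
        // Discovery is all the driver gives you: splitting a frame or a
        // solver across the GPUs in a group is left to the application.
        printf("device group with %u GPU(s)\n", g.physicalDeviceCount);
    }
    vkDestroyInstance(instance, nullptr);
}
```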