SLI/CF GPU recognition as individual devices

haristo

Commendable
Apr 14, 2016
3
0
1,510
Hello

I am new to the world of GPUs and am looking to accelerate scientific computations in a workstation (HP Z840), so my primary interest is high double-precision FLOPS. Therefore my interest lies with NVIDIA cards of the type GeForce GTX Titan, GeForce GTX Titan Black, GeForce GTX Titan Z, Accelerator 40, or from AMD the older Radeon HD 8990 or the FirePro W8100 and FirePro W9100.

I have an important question about how SLI/CF-linked single-GPU cards, or a single dual-GPU card (e.g. the Titan Z), are generally recognised by the operating system.

a) Are SLI/CF combined cards recognised as 1 device or still as 2?
b) Are dual GPU cards recognised as 1 device or 2?

Any comments are much appreciated.
 
Solution
The cards won't ever actually function as a single device. It would be revolutionary if they could without some huge overhead cost; each card has its own memory, and one card can't simply read from the other's memory. Perhaps the new NVLink technology will help.
SLI is only for rendering, not CUDA. The CUDA driver should be able to handle as many CUDA devices as you have connected. It will need to split and send the relevant data to each card's memory for processing, and any communication between the cards has to travel over PCIe through the CPU. I'm not sure how much extra programming that requires compared to using a single card, but I do know that SLI is not the answer.
All of them are recognized as 2 devices, which is what you want. SLI/CF is only useful for gaming (unless there is some new application I haven't yet heard of). Compute applications should be able to take advantage of both chips whether they are on the same PCB or not. Make sure you have SLI/CF disabled.
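To make the "2 devices" point concrete, here is a minimal sketch using the standard CUDA runtime API. On a Titan Z (or two SLI-bridged cards with SLI disabled) the enumeration below reports two devices, and the application targets each one explicitly with `cudaSetDevice`; the kernel and data-splitting details are omitted since they depend entirely on your simulation code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // A dual-GPU card such as the Titan Z shows up here as two devices.
    std::printf("CUDA sees %d device(s)\n", count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        std::printf("device %d: %s, %.1f GiB global memory\n",
                    d, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }

    // Work is divided per device by the application itself, not by SLI:
    for (int d = 0; d < count; ++d) {
        cudaSetDevice(d);  // subsequent allocations/launches target device d
        // ... cudaMalloc / kernel launches for this card's share of the data ...
    }
    return 0;
}
```

Note that nothing here merges the cards into one device: each `cudaSetDevice` call simply switches which GPU the following allocations and kernel launches go to, which is why the application (or the library it uses) must do its own load splitting.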
 

haristo

Apr 14, 2016


Thank you very much. However simple this question may appear, the answer is not so easy to find (probably too obvious to people in the field).

However, is there a way of making them appear as 1 device, if I wanted that for some reason? And while on this topic, could you enlighten me about something else: assuming 2 NVIDIA cards that are SLI-bridged and enabled, once the CUDA driver is invoked by an application (a simulation program in my case), does the driver distribute the load between the bridged cards on its own? Because in that case, the application would no longer have any influence over the GPU allocation?
 

haristo

Apr 14, 2016


Straight to the point. Thank you very much for your advice, very helpful :)