Surround, 5 monitors, 3 in SLI

thepregnantgod

I have 5 monitors. I want to enable Nvidia Surround on 3 (SLI) while using the remaining 2 as VMware Workstations.

Is this possible?

If so, how would I do it on 2 GTX Titan Xs?
 
MERGED QUESTION
Question from thepregnantgod: "Multi-seat, ASTER, SLI"
 
In the future, use tags to broaden the applicability of a thread. Our rules require that you ask a question a single time, in a single forum section.

What you really want is to be able to run three sessions with different users, simultaneously, from a single PC. Correct?

If so, this is going to be very challenging, even on a beastly system such as yours, unless the "secondary" seats are playing browser-based games.

Have you given any thought to just building the boys their own rigs and reducing the complexity of your plans?
 
Roger, will comply going forward.

The goal is to minimize the number of systems in the home. And... VMware would confine any crap they install to their virtual system, which I can easily reset with an image.

But they do want to play games beyond browser games.

Assuming I don't run SLI for myself, shouldn't each system be able to share the Titans? And with 16GB of RAM set aside for each, plus at least 2 cores/4 threads... at 1080p I thought it'd work.

That's why I turn here. No reason to reinvent the wheel.
 
Theoretically speaking, the following may be possible.

First, as the Colonel hinted at, a GPU does not like to work on multiple 3D applications, like games, simultaneously. If you've ever tried to boot up two games at once, you've likely seen an error that one game couldn't find a DX adapter, or something similar, because the first game had already monopolized it. Even if you could get all three games running on the GPU setup, you're asking it to render three independent instances simultaneously, one of which is a triple-screen view. That's a HUGE load for any GPU, even in SLI. You're pushing more pixels than a 4K screen, and 40% of them are not linked to the other 60%, which makes it harder.
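
To put rough numbers on that claim (assuming all five displays are 1080p, as stated above):

```
Surround seat : 3 x 1920 x 1080 =  6,220,800 px
Two VM seats  : 2 x 1920 x 1080 =  4,147,200 px
Total         :                   10,368,000 px
4K reference  : 3840 x 2160     =  8,294,400 px

VM share of total: 4,147,200 / 10,368,000 = 40%
```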

The solution would be three independent graphics adapters. This would allow each one to handle its own game instance. You would need to dictate each game instance to a different adapter (and this is tricky or impossible in some games, as they automatically try to use the primary system display adapter). Next you have to contend with specifying which display the games run on, since nearly all games default to the primary system display (though the VMs may be able to solve this one, since they each would have their own primary display). You'd also need to make sure the game you're playing on the base machine can run in a windowed mode so as not to blank out the other screens being used. So this could solve the graphical problems, but there are still other concerns.
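
For illustration only: VMware Workstation itself can't hand a physical GPU to a VM, but on a hypervisor with PCIe passthrough (e.g., ESXi's DirectPath I/O), dedicating a GPU to a VM is a few .vmx entries. This is a minimal sketch; the PCI address and device ID are placeholders, not values from a real system:

```
# Sketch: ESXi DirectPath I/O entries dedicating one GPU to this VM.
# "0000:03:00.0" and "0x17c2" are placeholders; 0x10de is Nvidia's vendor ID.
pciPassthru0.present  = "TRUE"
pciPassthru0.id       = "0000:03:00.0"
pciPassthru0.deviceId = "0x17c2"
pciPassthru0.vendorId = "0x10de"
```

Even with that in place, GeForce drivers have historically refused to initialize inside a VM (the well-known Code 43 error), which is part of why only Quadro and GRID cards come up as supported later in this thread.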

You'll need three keyboards and three mice plugged into the computer, and the requisite software to split the input. To my knowledge, Windows doesn't offer this functionality natively; you'd need third-party software (perhaps the VMs can be configured to recognize only one keyboard/mouse pair each; I've never tried doing that, but you'd still need the base computer to ignore the inputs from the other devices, except to pass them on to the VMs).
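
VMware Workstation does let you bind specific USB devices to a VM with autoconnect rules in the .vmx file. A minimal sketch, with placeholder vendor/product IDs (find your real ones in Windows Device Manager):

```
# Sketch: auto-attach one keyboard and one mouse to this VM only.
# The vid/pid pairs below are placeholders, not real device IDs.
usb.present             = "TRUE"
usb.generic.autoconnect = "FALSE"
usb.autoConnect.device0 = "vid:0x046d pid:0x1111"
usb.autoConnect.device1 = "vid:0x046d pid:0x2222"
```

Once a USB device is attached to a VM, the host releases it, so the base computer ignoring those inputs comes for free.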

The 6900K is a beefy chip, but you're trying to run three game instances, three Windows instances, and all the virtualization and other background tasks associated with that. That's a hefty load, even for an 8C/16T chip. Now, yes, you could allocate resources so that each VM has 8GB RAM and 2C/4T for itself, leaving you with 16GB RAM and 4C/8T for the base Windows instance. On paper it should work, but I can't vouch for real-world performance. My guess is the virtualization will take its toll on the CPU, but more so on the GPUs. I think the input lag on the VM games would be noticeable.
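
As a sketch of that split in .vmx terms (per VM; the guest just sees vCPUs, so "2C/4T" becomes four vCPUs that the host schedules onto its threads):

```
# Sketch: cap this VM at 8GB RAM and four vCPUs, presented as two 2-core sockets.
memsize              = "8192"
numvcpus             = "4"
cpuid.coresPerSocket = "2"
```

Two VMs configured this way leave the stated 16GB and 4C/8T for the base Windows instance, but since vCPUs are scheduled rather than pinned to host cores, contention under load is still likely.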

So, if you can meet all these criteria, yes, I think it would be theoretically possible. However, it is a much more complex, difficult, and expensive solution than just getting multiple computers. If you want to use the VMs to control installed software with checkpoints and save states, it's nearly as easy to just get some imaging software and reimage the sons' computers.
 
RedJaron, thanks. Researching while you guys are answering, it seems that though my chip supports VT-d (Intel's directed I/O for device passthrough), the GPUs do not support passthrough. So it is as you say: no 3D games on the other two stations.

From my understanding, though, VMware should allow me to put two additional keyboards and mice on the system and map them to their separate stations.

I guess I'm missing how all those other folks set up systems to run off their main rig.
 
Solution
If you read the whole article, you'd find some answers to your questions. Mainly, you need a GPU that supports virtualization and PCIe passthrough, which not all do. At the time of that writing (two years ago), the only Nvidia cards that supported vGPU were Quadro and GRID cards. If that hasn't changed, then you have to go with an AMD card or a workstation card (the latter of which usually don't have gaming driver optimizations). Also, that's not "running off their main rig" as you say; that's setting up the entire computer as a VM server. You'd be running off a VM just as your sons would. Finally, that system was running off a Xeon E5, a 12C/24T, $2,400 chip.
 
I agree with the comments that suggest going with multiple systems and backing them up to storage that you can control.

Question: do you really need 100TB of storage space as mentioned in your initial post? If you really mean that, could you get by with only 50TB?

I use an Adaptec 8805 with 8TB SATA Hitachi helium drives in RAID 5, which I picked up for only $440 each, and it is very simple to set up and maintain (with the caveat that you should have everything backed up elsewhere, as RAID is fault tolerance, not backup). You need to run the Adaptec card in an x8 or x16 PCIe slot, and you need a lot of drive bays. I use a separate Lian Li full tower case with a simple i3 and an inexpensive board to keep all the x16 slots free in my main machine. All told it will run you a bit over $5k for parts, so it's not for those on a budget.
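
Back-of-the-envelope numbers for that build, assuming all eight of the 8805's ports are populated with the 8TB drives mentioned:

```
RAID 5 usable space : (8 - 1) x 8TB = 56TB   (one drive's worth goes to parity)
Drive cost          : 8 x $440      = $3,520 (card, case, CPU, and board extra)
```

That comfortably covers the 50TB suggested above, though as noted, RAID 5 only tolerates a single drive failure.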
 
Regarding the 100TB of storage: I have about 40TB of data kept redundant via DrivePool across 30 drives. I tried RAID 5 years ago, but with a double drive failure, I was screwed. So this way, my 40TB of data is stored twice (about 80TB consumed) across those 30 drives.