Can I run video cards on a 4x PCIe slot to increase CAD or video editing performance? Also, ideas for what to do with an old server?

Jan 27, 2019
So I have a server I just got (2x CPU, 3 GHz, 4c/4t, 12 MB cache each, with 64 GB ECC buffered RAM) and I'm wondering what I can do with it.

It has six PCIe 4x slots.
Can I install graphics cards (Quadros etc.) into it and use them to speed up video editing or CAD work? (I do both.)
Will 4x be too slow? Or would it just be better to use a vanilla computer with TWO 16x video cards installed?

NOTES: As for power, I think it can handle it. The CPUs are rated at 150 watts each, but it has a 1,000 watt PSU (and I can put in a second redundant PSU). The onboard video is an ATI ES1000 with 32 MB.

While at first I was the most excited geeky girl on earth, now that I think about it I'm not 100% sure what I want to do with it. So any other ideas or thoughts would be appreciated. I do electronic design, programming, YouTube videos, CAD work (3D printing/CNC milling), and I'm a gamer too!

Thank you all in advance for any ideas!
May 2, 2019
First, you have to recognize that a standard install, especially of a server OS, on such a system will lead to full-throttle operation, and the monthly electricity bill may be painful. Meaning: make sure you run the machine as energy-efficiently as possible BIOS-, OS-, and application-wise. These behemoths often support only a limited ACPI feature set, if any. Configured for power efficiency they will eat electricity comparable to a gaming PC, with the benefit of, for example, a second redundant power supply, fully buffered ECC DIMMs, more slots than a regular PC or workstation, often 2 or 4 and up to 8 CPU sockets, and a cooling system optimized for put-the-pedal-to-the-metal, round-the-clock operation. Just as a reminder of what you actually got there ...
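For the OS-side part of that tuning, here is a minimal sketch for a Linux install, assuming the `cpupower` utility (shipped in most distros' kernel-tools package) is available:

```shell
# Show the active CPU frequency governor per core (assumes cpupower is installed):
cpupower frequency-info --policy

# Switch every core to the power-saving governor:
sudo cpupower frequency-set -g powersave

# The same via sysfs, if cpupower is not available:
echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```

BIOS-side options (C-states, fan profiles) vary per vendor, so check the board manual for those.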

On to the main question:

YES, you can. & NO, you don't.

YES, you can drive any PCI Express graphics card that is of the same or a lower revision than the PCIe bridge that is given. For example, if you have a PCIe 2.0 bridge, you can also run PCIe 1.1 and 1.0a cards (1.0a being the first mass-produced version).

If the card you want to fit in supports running at a lower revision (e.g. it is a PCIe 3 card but also supports PCIe 2), well yeah, it should be clear: it works.
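As for "will 4x be too slow": that is mostly a bandwidth question, and the per-lane throughput depends on the PCIe generation. A minimal sketch, using the commonly cited usable per-lane rates after encoding overhead (the function name is my own):

```python
# Rough usable bandwidth per lane, in MB/s, after encoding overhead:
# PCIe 1.x: 2.5 GT/s with 8b/10b coding   -> 250 MB/s
# PCIe 2.0: 5.0 GT/s with 8b/10b coding   -> 500 MB/s
# PCIe 3.0: 8.0 GT/s with 128b/130b coding -> ~985 MB/s
PER_LANE_MB_S = {"1.x": 250, "2.0": 500, "3.0": 985}

def link_bandwidth_mb_s(gen: str, lanes: int) -> int:
    """Approximate one-direction bandwidth of a PCIe link."""
    return PER_LANE_MB_S[gen] * lanes

# A 4x slot on a PCIe 2.0 server board:
print(link_bandwidth_mb_s("2.0", 4))   # 2000 MB/s
# versus a 16x slot in a desktop:
print(link_bandwidth_mb_s("2.0", 16))  # 8000 MB/s
```

So a 16x slot moves four times what a 4x slot of the same generation can; whether that matters depends on how often your working set has to cross the bus, and compute work that keeps its data on the card suffers less from the narrower link than the raw numbers suggest.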


NO, you do not install a standard (16x PCIe 2 | 3) graphics card into a server, simply because servers usually do not provide 16x slots; if yours does, it is a workstation, or a combination of both, which is very rare.

Nonetheless you have 3 options to actually make the card fit.

Option 1:
... Least messy (in theory): using riser | adapter combos.

For example, search for 'riser set for mining' and you will find cost-effective riser sets, mostly 16x-to-4x | 8x-to-4x or even 16x-to-1x | 8x-to-1x | 4x-to-1x, which essentially means you get a splitter from one 16x | 8x | 4x slot down to 2 x 8x | 4 x 4x | 2 x 4x | 16 x 1x. 16 x 1x would be the perfect choice to run 16 ASIC cards for mining on a 16x PCIe slot, or 16 x 10 Gb LAN/FC to feed or synchronize with a cluster, e.g. for virtualization.

Problem with Option 1:

You effectively decouple the cards from the slots, meaning you cannot install them in place; you have to find or create a mounting adapter that holds the card(s) in a fixed position, so they don't go do-wah-diddy-diddy-dumb-diddy-pooooof ... finishing with a BBBbbbzzzzz ... signaling the final bye-bye of the whole machine. Yes, it happens. Most often when you have a bunch of them, drink some liquor, and make it your mission to find ANY WAY to get the max out of those monsters. In that situation (just for testing, of course ...) there may come a point in time and space where you are not really up to the game focus-wise, due to the spiritual hydration technique in use, and you find yourself abruptly focusing on the dazzling sound of electricity, that funky sweet smell ... and the next day you wonder what fracking idiot hot-glues a passively cooled Quadro K to the case ...

IRL joking aside, it can be a daunting task to finalize the mission via option 1, aka the riser-plus-adapter puzzle. If you have a 3D printer, which I assume from what you wrote, you may be up to the task.

Option 2:
... Cut the card.

Yes, seriously. Take a Dremel | micro drill and cut the card where your slot ends. To make sure you cut at the right point, also consult the PCIe specifics, in this case the pin layout, e.g. on the Wikipedia page for PCI Express.


Most likely you have to bridge a specific pin pair. To know which, look for the PRSNT pins: Pin A1 on the front | A side (PRSNT1#) gets bridged to the PRSNT2# pin of the last lane you want to force into operation, e.g. Pin B17 on the back | B side, which is the second-to-last pin of a 1x slot.
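For orientation, the PRSNT2# position moves with the slot width. A small lookup sketch, with the pin numbers as tabulated in the PCIe card electromechanical pinout (double-check against your actual slot before soldering anything):

```python
# PRSNT2# pin (B side) per electrical slot width, per the PCIe CEM pinout.
# PRSNT1# is always pin A1; bridge A1 to the PRSNT2# of the width
# you want the slot to report as present.
PRSNT2_PIN = {1: "B17", 4: "B31", 8: "B48", 16: "B81"}

def prsnt2_for(lanes: int) -> str:
    """Which B-side pin to bridge to A1 for a given electrical width."""
    return PRSNT2_PIN[lanes]

print(prsnt2_for(1))  # B17, the second-to-last pin of a 1x slot
```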

You can use 1.25 mm up to 2.25 mm solid copper wire to bridge the pins; you have to try which fits nicely into the bridging holes of your specific bus slot. I go with 1.75 mm to 2 mm in most cases, to ensure it sits tight, has proper isolation, and is wide enough to run even a max-limit current sucker stably.

Speaking of max-limit current suckers, alias those GPU monsters that run steadily at the limit of the specification without demanding a separate power connector: be aware that the fewer lanes you have, the lower the power limits are. So don't try to run a 95 W card in a 25 W PCIe 1x slot. I mean, it may work, it could work. If you are adventurous? ... fine! But just be aware, and it might help your decision-making to study beforehand what actually happens if you do.
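Those budgets can be summed up in a few lines. A sketch using the ballpark slot limits from the PCIe electromechanical spec (check your board's manual, since server boards sometimes allow less than the spec maximum):

```python
# Rough maximum slot power draw (W) from the PCIe CEM spec --
# ballpark values; verify against your board's manual.
SLOT_LIMIT_W = {1: 25, 4: 25, 8: 25, 16: 75}  # x1 full-height: up to 25 W
AUX_6PIN_W = 75    # each 6-pin PEG connector adds up to 75 W
AUX_8PIN_W = 150   # each 8-pin PEG connector adds up to 150 W

def card_fits(card_w: int, lanes: int, six_pin: int = 0, eight_pin: int = 0) -> bool:
    """True if the card's rated draw stays within the slot + aux-connector budget."""
    budget = SLOT_LIMIT_W[lanes] + six_pin * AUX_6PIN_W + eight_pin * AUX_8PIN_W
    return card_w <= budget

print(card_fits(95, 1))                 # False: the 95 W example above in a 25 W x1 slot
print(card_fits(51, 16))                # True: a 51 W card is fine in a 75 W 16x slot
print(card_fits(250, 16, eight_pin=1))  # False: 250 W exceeds 75 + 150 W
```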

Problem with Option 2:

None. At least if you do precisely what is needed and are sure of the limits you have. You also have to trust yourself if you have never handled such stuff. Count the pins six times if necessary. Use a magnifier to ensure your bridge sits T I G H T. Whatever you need to do to be fully satisfied that nope, nothing will go wrong: just do it.

Option 3:
... Cut a notch into the slot.

Actually, like option 2, but instead of shortening the card physically, you enable insertion of a bigger card into a smaller slot. This will work if you actually have the space needed behind your 1x | 4x | 8x slot. It surely gives you the opportunity to quickly test a broad range of standard graphics cards for whether and how they can be persuaded to work nicely.

For cutting you should install a dummy card or equivalent, so you cannot cut the pins accidentally. Aside from that you just need a portion of patience, a steady hand, and senses focused on the task.

You surely have to bridge the PRSNT pins nonetheless.


So what will it be? That is all there is to it:

- Check the power requirement of the card and compare it to the given revision and limits.

- Make sure you have a PCIe-version-combination that is supported, or do the experiment nonetheless, as long as you are sure about the consumption and provision of power.

- Decide whether to play the costly and finicky game of the riser-and-adapter puzzle, or to cut either the card or, vice versa, the slot.


Hope it helps. Thanks for reading.


P.S.: Yes, I think it is viable to run such a machine, as I regularly do: I run a Docker hub via RancherOS, a 3-node Proxmox cluster, diverse Hyper-V and Windows Server and desktop OSes with Client Hyper-V enabled, Sculpt OS, Oracle VM Server, diverse customized web server setups, and whatever else is needed at the time, normally as virtual machines on a Xen Linux setup with nested virtualization enabled and passthrough of cut Quadro K2000s on 8x and 4x slots and non-cut Quadro K2000s on notched 4x slots.

Actually, I am working on a setup for an older machine without UEFI and without EPT aka SLAT (Second Level Address Translation), to test whether Windows Server 2012 and R2, both in desktop mode, can be persuaded to run Client Hyper-V with nested virtualization; so to say, to test the prospects of Windows Server as a concurrent base setup. Before that I used the machine to fiddle with ZFS RAID setups on Linux with KVM and on FreeBSD with bhyve, and also on OpenIndiana, SmartOS, and other illumos-based setups; to run vSphere | ESXi; and to try desktop OS setups with 2nd-level hypervisors and the mostly unknown wunderkinds of the virtualization arena, which most have never heard of and likely never will.



The Quadros are used for mining if nothing has to be rendered, meaning if I am not using the machine as a workstation to get into 3D and game design again, mostly to become friends with Blender 2.80 and Unreal Engine 4, since these are the ones that will shape the near future, at least in my opinion. Aside from that, I run full-fledged Visual Studio with all assets possible, mainly out of interest, but I am also digging into UWP and Windows on ARM64, now that some fresh people have made it possible to run it, in its main aspects full-fledged, on the Lumia 950 XL as a dual-boot alongside the soon-to-be-shut-down Windows 10 Mobile, which is the most stupid thing Microsoft could have done and did since, well ... since what they did to the future of Windows, when they instead released the totally crippled version of that future as Windows Vista. I am no fanboy. I tolerate fanboys independent of gender or wishful thinking, but I hate Apple from the deepest depths of my heart, because it has been the personified evil, the imperial troop, since day one ... okay, day two ... since 'A New Hope'; without the latter I think I would simply name that industrialized thievery Satan. So yeah, just for the sake of getting insights into what Apple has actually stolen again, I build and test hackintoshes from time to time and disassemble, reverse-engineer, and refactor their closed-source contingent, only to prove to myself over and over that they are still doing the same s$"("§/$))="/$§(&%"$§("$"§$"5

... Sorry, I drifted away ...

Well, that's what I use those kinds of machines for. Mostly. What has run through your mind in the meantime regarding the purposes and possibilities such a Hulk can provide?