Motherboards with Socket 2011(-3) and 40-lane CPUs: bandwidth limitations?

Perig

Jul 7, 2016
Hi,

First time posting here, but I've been an avid reader for many years. Thanks for all the solutions I've found here...

Anyway, I am building a professional system that requires the following:

  • GPU at x16, such as an M4000 or the new Nvidia GTX 1080
  • Pro SDI playback PCIe card (requires PCIe 2.0 x8, but 3.0 is good for future upgrades, and all these motherboards are 3.0 anyway)
  • 5+ GB/s SSD RAID 0 using at least 2 Samsung 950 Pros (each requires PCIe 3.0 x4) for raw video reads

So in total I would need at least 4 PCIe slots and 32 PCIe lanes from the CPU, or from the CPU + chipset combined.
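As a quick sanity check, the lane budget can be added up in a few lines (the device names and lane counts are the ones from my list above, not from any spec sheet):

```python
# Rough PCIe lane budget for the build: a GPU at x16, the SDI card
# at x8, and two NVMe SSDs at x4 each.
devices = {
    "GPU": 16,
    "SDI playback card": 8,
    "Samsung 950 Pro #1": 4,
    "Samsung 950 Pro #2": 4,
}

total = sum(devices.values())
print(f"Lanes needed: {total}")  # prints "Lanes needed: 32"
```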

I currently have a Z170 Extreme7+ with an i7-6700K, which has 16 lanes on the CPU and 20 lanes on the Z170 chipset bus.

My limitation is in the fact that, to get 5 GB/s from the SSD RAID 0, I have to put one drive in an on-board M.2 PCIe slot and the other on a PCIe carrier card in a slot on the CPU bus, which takes away x4 lanes (if I use two of the on-board M.2 slots, I run into the Z170 chipset bus limit of about 3.2 GB/s). This means the CPU's PCIe lanes fall back to an 8/4/4 configuration at best, which is now too slow for my SDI card.
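To put numbers on the chipset bottleneck: everything hanging off the Z170 chipset shares a single DMI 3.0 uplink, which is electrically equivalent to a PCIe 3.0 x4 link. A back-of-the-envelope sketch (the ~2.5 GB/s per-drive figure is the 950 Pro's rated sequential read; the per-lane rate is the standard PCIe 3.0 number):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane.
lane_gbs = 8.0 * (128 / 130) / 8    # GB/s per lane, one direction

dmi_ceiling = 4 * lane_gbs          # DMI 3.0 ~ PCIe 3.0 x4 ~ 3.94 GB/s theoretical
ssd_demand = 2 * 2.5                # two 950 Pros reading at ~2.5 GB/s each

print(f"DMI ceiling: {dmi_ceiling:.2f} GB/s, RAID 0 demand: {ssd_demand:.1f} GB/s")
# The demand exceeds even the theoretical DMI ceiling, so two drives
# behind the chipset can never deliver the full 5 GB/s.
```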

I found several 40-lane CPUs and matching motherboards, such as the Intel Xeon E5-1650 v4 or Intel Core i7-6850K on the processor side, and the X99 OC Formula/3.1, for example, on the motherboard side.

Question 1: Any recommendation on the best motherboard for this application?

Question 2: Are there limitations on the CPU bus PCIe bandwidth like there are on the Z170 chipset? Could I run into bandwidth limits in, say, a maximum configuration of 1 GPU at x16 plus 4 Samsung 950 Pros (2.5 GB/s each), for a total of the GPU's bandwidth + 10 GB/s of SSD traffic?
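For what it's worth, the CPU's PCIe lanes are point-to-point, so (unlike the chipset's shared DMI uplink) each device only has to fit within its own link. A rough per-device check for that max configuration (the per-lane rate is the standard PCIe 3.0 figure of roughly 0.985 GB/s; the 2.5 GB/s demand is the 950 Pro's rated read speed):

```python
# On CPU lanes each device has a dedicated link, so check each device
# against its own link capacity rather than a shared ceiling.
LANE_GBS = 8.0 * (128 / 130) / 8      # ~0.985 GB/s per PCIe 3.0 lane

config = [
    ("Samsung 950 Pro #1", 4, 2.5),   # (name, lanes, demand in GB/s)
    ("Samsung 950 Pro #2", 4, 2.5),
    ("Samsung 950 Pro #3", 4, 2.5),
    ("Samsung 950 Pro #4", 4, 2.5),
]

for name, lanes, demand in config:
    link = lanes * LANE_GBS
    fits = "fits" if demand <= link else "exceeds"
    print(f"{name}: needs {demand} GB/s, link gives {link:.2f} GB/s -> {fits}")
# Each x4 link offers ~3.94 GB/s, comfortably above 2.5 GB/s, so the
# 10 GB/s aggregate never funnels through a single shared bottleneck.
```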
 
On the X99 platform, the lanes come primarily from your CPU. The motherboard only distributes those lanes to the slots/devices for optimal operation, so what you need is the CPU with the highest lane count at your disposal; the board cannot generate lanes for you unless it adds a PLX chip for more lanes (but more latency as well). The 40-lane chips have higher core counts, more cache, and higher clock speeds, and offer the highest lane count available to any consumer.

Further reading.

- 5 x PCI Express 3.0 x16 Slots (PCIE1/PCIE2/PCIE3/PCIE4/PCIE5: single at x16 (PCIE1); dual at x16 (PCIE1) / x16 (PCIE4); triple at x8 (PCIE1) / x8 (PCIE2) / x16 (PCIE4); quad at x8 (PCIE1) / x8 (PCIE2) / x8 (PCIE4) / x8 (PCIE5))*
- 1 x Half-size Mini-PCI Express Slot
- Supports AMD Quad CrossFireX™, 4-Way CrossFireX™, 3-Way CrossFireX™ and CrossFireX™**
- Supports NVIDIA® Quad SLI™, 4-Way SLI™, 3-Way SLI™ and SLI™****

* If you install CPU with 28 lanes, PCIE1/PCIE2/PCIE3/PCIE4/PCIE5 will run at x16/x0/x4/x8/x0 or x8/x8/x4/x8/x0, and PCIE5 will be disabled.

**To support 3-Way CrossFireX™ and 3-Way SLI™ when using CPU with 28 lanes, please install VGA cards to PCIE1/PCIE2/PCIE4 (x8/x8/x8).

*** If Ultra M.2 PCI Express module is installed, PCIE3 slot will be disabled.

**** If you install CPU with 28 lanes, 4-Way CrossFireX™ and 4-Way SLI™ are not supported. To support 4-Way CrossFireX™ and 4-Way SLI™, please install the CPU with 40 lanes.
^pulled from the specifications page for your chosen motherboard.

Don't go for RAID 0 arrays. A production machine with one will suffer the worst form of downtime should the array get corrupted (which often happens to folks on Windows 10). RAID 0 may boost your productivity in terms of speed, but if and when it goes down, the time needed to repair the array, and of course the cost of losing all your data, cancels out the gains compared to one dedicated SSD as the boot and apps drive with no array in place (if this system is to be seen as an income generator), and the single drive comes with less headache and worry.
 
I do video work and appreciate your issues.
The CPU's lane count (e.g. 40) is the limit of what the CPU can handle. Make sure your motherboard can route that many lanes to get the maximum benefit from the CPU.

The number of physical PCIe slots is a separate, physical limit, so make sure it supports the PCIe devices you want to insert. It seems like a 40-lane chip is a no-brainer for you.

I currently run a 3 disk set up for production work:
1) raid0 2x256GB SSDs for OS/Apps and have not had a problem.
2) raid0 2x1.5TB HDD for project files
3) 1 x 128GB SSD dedicated render disk (I would use 2 in raid 0 if my rig would support it.)
I archive/offload renders and project files at each project stage to external HDD and DVD
I also run BitTorrent Sync on my internal network to an old laptop. I simply open the laptop (waking it up) and it automatically backs up my project files. The program is free, fast, and trouble-free, and it won't run if the other machine is off, so it won't interfere when you are working or rendering.
I only need/use one video card (GTX980)

In 7 years, I've lost the OS RAID 0 once, back when I was using HDDs in RAID 0. It took 1 day to repair. Sure, it was a pain, but it is absolutely worth it; it's so much faster.

I plan on building a new rig that takes advantage of faster cpu, more cores, higher clock, more lanes, and PCIe storage:
1) Intel 750 PCIe storage for OS/Apps
2) HDD for project files
3) Intel 750 PCIe or Raid0 2-3x128GB SSD dedicated Render (750 likely)
4) A 40-lane CPU like the 6950X or similar

I will often pre-render image sequences from one app and then use them to render a final composite, so my workflow is faster if I don't have to move the pre-renders to a read disk before the final render. I'm thinking I'll just use an Intel 750 as my render disk and accept that I might reach an I/O limit when reading those pre-renders, but I think it will still be much faster than what I currently do.

Hope this helps.