[SOLVED] Do multiple PCIe x1 slots' bandwidths stack?

yt.degamendeman

Reputable
Apr 26, 2018
I currently have an Asus H87M-E mobo in my (maybe somewhat ghetto) home server, which has 3 unused PCIe Gen 2.0 x1 slots underneath the (occupied) Gen 3.0 x16 slot. Recently I was able to score a SWEET deal on a bunch of misordered new SATA NAS-grade HDDs from my job, of which I would love to mount a good number in my home server. I was thinking of buying three PCIe x1 to 4x SATA cards for a total of 12 drives from those slots, since a Gen 2.0 x1 slot has a bandwidth of 500 MB/s, which works out to about 125 MB/s (roughly 1 Gbps) per drive.
Now the question: does the bandwidth of those x1 slots stack up like that, or do they sit on some shared bus like the old PCI slots? If all 12 drives are going to feed off a single shared 500 MB/s bus, then that's obviously going to be a bad time. Does anyone know this, or know a way to find out, before I go ahead and order the parts? (A quick back-of-the-envelope comparison of the two scenarios follows the specs.)
Thanks a lot in advance!
Specs (homeserver):
CPU: i7-4790K
MOBO: Asus H87M-E
RAM: 4x8 GB DDR3-1600
NIC: 10 Gbps NIC in the Gen 3.0 x16 slot
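To make the concern concrete, here's a minimal back-of-the-envelope sketch of the two scenarios. The per-slot bandwidth and drive counts are from the post above; the ~150 MB/s sustained figure for a NAS HDD is an assumption:

```python
# Back-of-the-envelope: per-drive bandwidth under the two topologies in question.

SLOT_BW_MB_S = 500       # usable bandwidth of one PCIe Gen 2.0 x1 slot
SLOTS = 3                # x1 slots to populate
DRIVES_PER_SLOT = 4      # one 4-port SATA card per slot
HDD_SEQ_MB_S = 150       # assumed sustained sequential speed of a NAS HDD

total_drives = SLOTS * DRIVES_PER_SLOT

# Scenario A: each slot has its own dedicated lane (bandwidth "stacks").
per_drive_dedicated = SLOT_BW_MB_S / DRIVES_PER_SLOT   # 125 MB/s

# Scenario B: all three slots share one 500 MB/s bus (old-PCI style worst case).
per_drive_shared = SLOT_BW_MB_S / total_drives         # ~42 MB/s

print(f"{total_drives} drives total, assumed HDD speed {HDD_SEQ_MB_S} MB/s")
print(f"Dedicated lanes: {per_drive_dedicated:.0f} MB/s per drive")
print(f"Shared bus:      {per_drive_shared:.0f} MB/s per drive")
```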
 
Solution
No matter how you configure the system, short of a RAID array of some type, any single SATA drive is only going to see a maximum of somewhere around 500 MB/s regardless of whether it's an HDD or an SSD, as long as it is a SATA 3.0 device. It doesn't matter what you connect it to; the device itself is the limiting factor, except where RAID is involved, and that introduces its own can of worms.
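For reference, here's where that ceiling comes from. The 6 Gb/s line rate and 8b/10b encoding are standard SATA 3.0 figures; this is just the arithmetic:

```python
# Where the ~500 MB/s SATA 3.0 per-device ceiling comes from.

LINE_RATE_GBPS = 6.0          # SATA 3.0 raw line rate: 6 Gb/s
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding: 8 data bits per 10 line bits

payload_mb_s = LINE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY
print(f"SATA 3.0 payload ceiling: {payload_mb_s:.0f} MB/s")  # 600 MB/s
# Protocol overhead eats a bit more, so real devices top out around 500-560 MB/s.
```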
 
The best way to understand that is to look at a block diagram of the H87 chipset: https://www.gamersnexus.net/guides/1132-intel-haswell-chipset-comparison (third graphic on that page).
You can see that those PCIe lanes are provided by the chipset. They are independent up to the chipset, but they then have to share the DMI 2.0 interface to the CPU, which has about 2 GB/s of throughput.
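So the stacking question really becomes a DMI question. A rough aggregate sketch, using the ~2 GB/s per-direction DMI 2.0 figure and again assuming ~150 MB/s sustained per NAS HDD:

```python
# Do 12 HDDs behind the chipset fit through the shared DMI 2.0 link?

DMI2_MB_S = 2000     # DMI 2.0 throughput to the CPU, ~2 GB/s per direction
HDD_SEQ_MB_S = 150   # assumed sustained speed per NAS HDD
DRIVES = 12

aggregate = DRIVES * HDD_SEQ_MB_S  # 1800 MB/s
print(f"Aggregate HDD demand: {aggregate} MB/s vs DMI 2.0: {DMI2_MB_S} MB/s")
# All 12 drives reading flat-out would nearly saturate DMI, and the chipset's
# own SATA/USB/onboard-NIC traffic shares the same link. In practice it's fine,
# since all 12 drives rarely stream at full speed simultaneously.
```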
 
PCI Express uses a point-to-point topology, with separate serial links (lanes) connecting every device to the root complex (host). The lanes going to a device (or slot) are dedicated to that slot. That said, some motherboards use PCIe switches to move lanes from one slot to another, either on demand through BIOS settings or in favor of a preferred slot.

Motherboard chipsets and CPUs provide a limited number of lanes, often fewer than the number of PCIe slots on the board might suggest. So you'll also have to check your motherboard's specifications to see whether populating certain slots disables others. But when a slot is occupied, it has its full bandwidth potential all the time, limited only by the capabilities of the device and the host and by the number of lanes wired to that slot.
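If you want to verify what each card actually negotiated once it's installed, on a Linux box you can read the link attributes the kernel exposes in sysfs. The sysfs paths are standard; the script itself is just an illustrative sketch:

```python
#!/usr/bin/env python3
# List negotiated PCIe link speed/width for every device, straight from sysfs.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # not every PCI function exposes link attributes
    print(f"{dev.name}: x{width} @ {speed} (max x{max_width} @ {max_speed})")
```

(`sudo lspci -vv` shows the same information under the LnkCap/LnkSta lines.)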
 