
[SOLVED] Understanding PCI-E Lane assignment

Herr B

May 29, 2020
There are probably some questions on this topic out there already, but I've been searching for hours now and still haven't fully figured out how it all works.

Background: I am building a server-like PC with a lot of expansion slots, so the PCIe lane assignment is important.

What have I found out?
  • Normal Ryzen processors have the following lanes:
    • 16 for the GPU / PCIe slots
    • 4, typically for an NVMe drive
    • 4 for chipset communication
  • The chipset acts as a switch/router and provides additional lanes. In the case of the X570 chipset, a total of 16 lanes run behind the 4-lane CPU uplink. This makes a total of 36 available lanes.
  • The chipset lanes share their bandwidth between the M.2 slots, USB, and other devices.
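The lane tally above can be written out as a small sketch. The numbers are the ones stated in this thread for Ryzen 3000 on X570, not taken from a datasheet, so treat this as an illustration:

```python
# Rough tally of the PCIe lane budget described above
# (Ryzen 3000 "Matisse" CPU + X570 chipset).

cpu_lanes = {
    "gpu_slots": 16,       # shared by the big x16 slots
    "nvme": 4,             # dedicated to the first M.2 slot
    "chipset_uplink": 4,   # x4 link between CPU and X570
}

chipset_downstream = 16  # lanes the X570 fans out behind its x4 uplink

# The uplink itself carries no end device, so it is not counted as usable.
usable = cpu_lanes["gpu_slots"] + cpu_lanes["nvme"] + chipset_downstream
print(usable)  # 36
```

Note that the 16 chipset lanes all funnel through the x4 uplink, which is why they share bandwidth.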
System Information
Now comes the part where I am not sure. For reference and better clarity, here is a picture of the motherboard I looked at (3× PCIe x16 + 3× PCIe x4 + 2 NVMe SSD slots):
[attached image: motherboard layout]

Assumptions of the system:
  • Processor is a Ryzen 3600
  • 2 NVMe drives are inserted into the dedicated NVMe slots (one running on the 4 PCIe lanes to the CPU, one running over the SATA interface)
Assumptions, questions and caveats
  • The manual states: one x16 GPU may be used, or two x8 GPUs, or three x4 GPUs. From this statement, I conclude that all 3 big PCIEX16 slots share the 16 lanes to the CPU.
    Given I insert a 4x NVMe PCIe x16 adapter card into slot PCIEX16_1, the two lower PCIe slots do not have any lanes left and would be unusable.
  • I assume the small PCIEX4 slots are being fed from the chipset. The second NVMe already takes 4 lanes, SATA + USB maybe 4 as well, so we are already at the point where 8 of those 16 possible lanes are being used. We have 8 lanes left for 3 slots... BUT: the AMD reference sheet lists 44 lanes total / 36 lanes usable. Does that mean my assumed 8 lanes are taken from the 44 total lanes, so that I can still use a total of 36 leftover lanes?
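The chipset-side allocation assumed in that bullet can be sketched as follows. The device names and lane counts are the guesses from the post above, not board specs:

```python
# Sketch of the assumed chipset-lane allocation (poster's estimates).

chipset_lanes = 16
used = {
    "second_nvme": 4,
    "sata_usb": 4,   # rough estimate from the post
}

left = chipset_lanes - sum(used.values())
print(left)  # 8 lanes left for the small slots

# Separate caveat: every chipset device shares the x4 uplink to the CPU,
# so 16 downstream lanes never deliver more than x4 of combined bandwidth.
```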
Goal and presumed issue with my system:
With those two assumptions above, I would assume that:
  • I mount an x16 card in slot PCIEX16_1 in 4x4x4x4 bifurcation mode and populate it with 4 SSDs
  • I insert 2 SSDs into the motherboard's NVMe slots
  • I populate at least 4 HDDs via SATA
-> I will have a mere two PCIEX4 slots available for expansion. I might have the possibility to run 1 GPU via a PCIe x4 riser + 1 SATA expansion card for 10 more SATA drives.

Is this correct?
What would be a possibility to extend that PCIe capability on a budget?
 
To help avoid further assumptions, it's best if you can state what the server's duties will be. That might narrow down parts choices and minimize waste of resources (funds and time). You can't get more lanes on said PCIe x16 slots unless you move to a platform like Threadripper/TRX40. As for what you have, perhaps cut down on using SSDs in the PCIe slots and put them on the SATA ports; yes, you lose performance, but you get more lanes. As for your allocation of lanes, you might want to mention or link the PCIe SSD adapter you're looking at and the GPU you want to pair with the platform, since the Ryzen 5 3600 does not have an iGPU.

In a nutshell, 16 lanes are fed from the CPU; the other lanes are fed from the chipset. That's why I asked what the entire server will be asked to do and what its full specs will be, since lanes might be allocated to devices outside of just your PCIe slots.
 
To help avoid further assumptions, it's best if you can state what the server's duties will be. That might narrow down parts choices and minimize waste of resources (funds and time). You can't get more lanes on said PCIe x16 slots unless you move to a platform like Threadripper/TRX40. As for what you have, perhaps cut down on using SSDs in the PCIe slots and put them on the SATA ports; yes, you lose performance, but you get more lanes. As for your allocation of lanes, you might want to mention or link the PCIe SSD adapter you're looking at and the GPU you want to pair with the platform, since the Ryzen 5 3600 does not have an iGPU.

In a nutshell, 16 lanes are fed from the CPU; the other lanes are fed from the chipset. That's why I asked what the entire server will be asked to do and what its full specs will be, since lanes might be allocated to devices outside of just your PCIe slots.
First of all, thank you for the insight. It already contained a lot of information.

I know that I will not have any issue with available PCIe lanes once I put that 4x adapter aside. I would like to use it for a big NVMe x16 RAID-0 for maximum sustained throughput.
So, if I understand you correctly, my assumption is right: if I put an x16 card in the topmost x16 slot (running in x16 mode), my two other x16 slots are dead entirely.

It is not so easy to write down the specific usage/parts, as I am trying to gain a general understanding for building my (quirky) systems in the future.
I'm leaning more and more towards data science/big data, but I try to get maximum bang for my limited buck. This usually excludes server parts.
I know that what I'm trying to do is far from what these systems are intended for and normally resides in Epyc/Threadripper territory. It is more or less for educational purposes and for tinkering around.

Basically what I'm trying to do:
Figure out the maximum capability of a "normal" cpu/mainboard combination.
-> Measured in usable x4 PCIe slots with one x16 card installed. Without it, this should not be an issue as far as I know.

What should the system achieve:
  • Massive File storage
  • Video & 3d rendering (optional)
  • 2-3 GPU cluster for Machine Learning
  • Huge Temporary storage with a very high sustained throughput for parallel list reading
Hardware I'm going to use:
Why this specific hardware? They're components I already have at home.
 
Well, if you need to expand PCIe lanes, then you need a PLX host card, which does what the chipset does: bridge PCIe lanes.
But they aren't cheap.
As far as I know, a PLX host card will still use the 16 lanes for full bandwidth, no? That would be the same result as the 4x splitter card using the motherboard's bifurcation.

Additionally, such a card easily costs as much as a Threadripper itself.
 
assumptions of the system:
  • 2 NVMe drives are inserted into the dedicated NVMe slots (one running on the 4 PCIe lanes to the CPU, one running over the SATA interface)
NVMe drives run over PCIe, not SATA.

Assumptions, questions and caveats
  • The manual states: one x16 GPU may be used, or two x8 GPUs, or three x4 GPUs. From this statement, I conclude that all 3 big PCIEX16 slots share the 16 lanes to the CPU.
Not exactly.
The first two PCIe x16 slots share the 16 CPU lanes. They operate either in x16/x0 or x8/x8 mode.
The third x16 slot is chipset-connected and operates in x4 mode. It's independent of the first two PCIe x16 slots.

  • Given I insert a 4x NVMe PCIe x16 adapter card into slot PCIEX16_1, the two lower PCIe slots do not have any lanes left and would be unusable.
The 2nd x16 slot cannot be used (or they both switch into x8/x8 operation).
The 3rd x16 slot (x4 operation) can still be used.
  • I assume the small PCIEX4 slots are being fed from the chipset.
The small slots are PCIe x1 (not x4). And yes, they are chipset-connected.

Goal and presumed issue with my system:
With those two assumptions above, I would assume that:
  • I mount an x16 card in slot PCIEX16_1 in 4x4x4x4 bifurcation mode and populate it with 4 SSDs
  • I insert 2 SSDs into the motherboard's NVMe slots
  • I populate at least 4 HDDs via SATA
-> I will have a mere two PCIEX4 slots available for expansion. I might have the possibility to run 1 GPU via a PCIe x4 riser + 1 SATA expansion card for 10 more SATA drives.
Is this correct?
You'll have the 3rd PCIe x16 slot (x4 operation) and 3 PCIe x1 slots available.

NOTE. All of this is based on Asus Prime X570-Pro specification. On a different x570 board PCIE lanes can be organized differently.
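The slot layout this reply describes can be summarized in a minimal model. The slot names, modes, and the "4x NVMe card in the first slot" scenario are taken from the thread; it applies to the Asus Prime X570-Pro only:

```python
# Minimal model of the slot layout described in the reply above
# (Asus Prime X570-Pro, 4x NVMe adapter card in the first x16 slot).

slots = {
    "PCIEX16_1": {"source": "cpu",     "mode": "x16", "occupied": True},   # NVMe adapter
    "PCIEX16_2": {"source": "cpu",     "mode": "x0",  "occupied": False},  # no CPU lanes left
    "PCIEX16_3": {"source": "chipset", "mode": "x4",  "occupied": False},
    "PCIEX1_1":  {"source": "chipset", "mode": "x1",  "occupied": False},
    "PCIEX1_2":  {"source": "chipset", "mode": "x1",  "occupied": False},
    "PCIEX1_3":  {"source": "chipset", "mode": "x1",  "occupied": False},
}

# Slots that still accept a card: not dead (x0) and not already in use.
free = [name for name, s in slots.items()
        if s["mode"] != "x0" and not s["occupied"]]
print(free)  # ['PCIEX16_3', 'PCIEX1_1', 'PCIEX1_2', 'PCIEX1_3']
```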
 
NOTE. All of this is based on Asus Prime X570-Pro specification. On a different x570 board PCIE lanes can be organized differently.
Thank you for that great reply. Would you have a reference to the specification? I'm reading the owner's manual from the website as well as the BIOS manual and googling, but have not found such specs.

Small slots are PCIE x1 (not x4).
-> very nice :)

So reading this, I may have the NVMe card installed in the first slot + 1 NVMe drive (CPU lanes maxed out)
+ 1 NVMe on the board via the chipset, and I'm still able to run the 3 PCIe x1 slots + the bottom PCIe x16 in at least x1 mode.
 
So reading this, I may have the NVMe card installed in the first slot + 1 NVMe drive (CPU lanes maxed out)
+ 1 NVMe on the board via the chipset, and I'm still able to run the 3 PCIe x1 slots + the bottom PCIe x16 in at least x1 mode.
Huh?
The Asus Prime X570-Pro has 2 onboard M.2 slots. Two M.2 drives go in there.
Your PCIe M.2 4x4 adapter goes into the first PCIE_x16_1 slot. 4 additional NVMe M.2 drives go in there.
The PCIE_x16_2 slot cannot be used.

You have left:
3x PCIE x1 slots.
PCIE_x16_3 slot (x4 operation mode) at bottom of the board.
 
Solution