
[SOLVED] What to Look for to Have Enough PCIe Lanes for My Case? (for preliminary research)

Sp3ctre18

I'm starting to consider building a new PC partly to get past some current issues and bring it more up-to-date for the SSD world we're in now.

However, with my Skylake-era PC I found that PCIe lanes became a concern I'd never had before. In previous PCs I'd have fun adding expansion cards for anything from extra USB or SATA ports to a sound card or TV tuner, but this PC (Gigabyte Z170 Gaming 7 mobo) can barely take anything more - in card count, at least; I know that in terms of lane usage I've already added a lot.

Context:
Using both M.2 slots disables the 3rd x16 slot; using the 2nd x16 slot drops both it and the 1st to x8; and the other slots are mere x1s, meaning my SATA expansion card, which is JUST slightly too big (x2??), has to go in an x16 slot, so I'm out of options without a riser! Oh yeah, and I needed that SATA card because the 1st(?) M.2 disables some of my SATA ports. UGH.

Have CPUs and mobos developed enough to have enough lanes that this frustrating lane-juggling concern goes away? Maybe I don't get the tech (or it's badly worded), but the x16-to-x8 reduction JUST for having something in another slot sounds silly to me. Why not reduce only by the number of lanes actually being used?

Anyway, here's what I'm hoping for the future:
  • Zero or minimal lane juggling, i.e., ideally, no disabling of SATA or PCIe ports because of M.2 drives, and no x16 getting turned into x8 just because ANOTHER slot has an x1 or x2 card in it. Basically, if I see a port, I expect to be able to use it. Simple. No math games, please, lol.
  • Adequate lanes for: a GPU, a 2nd GPU just for mining (i.e., it could run on an x1), 2+ M.2 drives (is 3-4 realistic? Heard of an $800 mobo with 5, wow), about 4 SATA drives, and, for future expansion, maybe a sound card and whatever random needs or new tech come up.
So what sort of minimums am I looking at here?
Like, minimum lane count, brand and series of CPU, price, and related info about Mobos (I have no idea how chipsets and their lanes to the CPU vary, or if I only have to look at the CPU specs).

Thanks!
 
The problem is that you're overestimating how precisely the physical wiring can be divided up.

And for most things, the PCIe link width doesn't really matter. We're still at the point where a slot dropping to x8 barely has an effect even on the highest-end GPUs.

Adding lanes increases the physical complexity. It means larger sockets with more traces on motherboards and more complicated chipsets which means significantly higher costs for something that the vast majority of consumers don't need and certainly don't want to pay for.

What you're asking is the equivalent of wondering why every car can't go from 0-60 in five seconds. Every automaker could choose to do this if they wanted. But there's a significant, real cost to it, and it's something that most people aren't willing to pay for, else they'd buy the sports car in the first place.
 
For that you would need to step up to a workstation platform.
There aren't any consumer/home PC platforms with that many PCIe lanes available; they are limited to what the CPU provides.
Consumer Ryzen CPUs have 24 PCIe lanes:
4 go to the chipset, 16 to the graphics slot, and 4 to CPU-connected NVMe on most motherboards.
To get more PCIe lanes you would need to step up to a Threadripper workstation board, which can have up to 128 PCIe lanes depending on the CPU model.
Intel hasn't had HEDT CPUs for a couple of generations now, so a Xeon/server-grade platform would be needed on the Intel side of the equation.
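As a rough illustration (a minimal sketch using the lane split quoted above; the device list is an assumption, not any particular board), you can tally how quickly a consumer CPU's lane budget disappears:

```python
# Rough tally of CPU PCIe lane usage against a 24-lane consumer Ryzen-style
# budget (16 for the graphics slot, 4 for NVMe, 4 for the chipset link, as
# described above). Device names here are illustrative assumptions.

CPU_LANES = 24
CHIPSET_UPLINK = 4  # permanently reserved for the chipset link

devices = {
    "GPU in the x16 slot": 16,
    "Primary NVMe M.2": 4,
}

used = CHIPSET_UPLINK + sum(devices.values())
print(f"CPU lanes used: {used} of {CPU_LANES}, {CPU_LANES - used} left over")
# Everything else (extra M.2 slots, SATA, USB, sound, a second x1 GPU) has to
# hang off the chipset and share its uplink back to the CPU.
```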
 
What I wondered was why I used to fully load my car and reach 60mph in 5 seconds while this newer one can't. Thought it was a space (lanes) issue, but it seems it's more about the weight of the stuff I'm putting in.

Good to know about the GPU lanes. I don't run the latest and I'd be fine staying with my Vega 56.


So I guess it's NVMe that came around and is still eating into our PCIe lanes, huh? I can mount 4-6 SATA drives on what I think is an x2 PCIe card, but 1-2 NVMe drives will each want 4 whole lanes?

EDIT: And since my board was disabling stuff with any M.2 in use, I guess this problem won't exist with newer CPUs and boards since M.2s will now have dedicated lanes? How many M.2 slots usually get assigned to those 4 lanes?

And yeah, workstation CPUs look way too up there. I may be open to CPUs of a few hundred $, but not the several hundred+. It's cool how many lanes they can support though!
 
Most high-performance NVMe drives need 4 PCIe lanes for best performance. They will run on 2 lanes, but speed will be about half that of 4 lanes.
For most people the real-world difference between a SATA SSD and NVMe is negligible:
a second or two less boot time and a few seconds less loading games.
Consumer software just does not push them with a high enough queue depth to make a difference.
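For a rough sense of the halving, here's a back-of-the-envelope calculation (the per-lane figures are the usual approximate usable rates for each PCIe generation; real-world overhead varies):

```python
# Approximate usable one-direction bandwidth per PCIe lane, in GB/s.
# (Roughly: Gen3 ~0.985, Gen4 ~1.969, Gen5 ~3.938.)
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# An NVMe drive dropped from x4 to x2 loses half its ceiling:
print(f"Gen4 x4: ~{link_bandwidth(4, 4):.1f} GB/s")
print(f"Gen4 x2: ~{link_bandwidth(4, 2):.1f} GB/s")
```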
 
and related info about Mobos (I have no idea how chipsets and their lanes to the CPU vary
To add to all of this, there are two types of PCI-E lanes.

[Image: Intel Z170 chipset block diagram (5TtvIwm.png)]

One good way to see this is to examine your motherboard's chipset block diagram to see which PCI-E lanes are connected to which devices.
The layout is not set in stone and changes from chipset to chipset. The diagram above is for the old Z170 chipset, back in 2015.

A.) CPU PCI-E Lanes as discussed by everyone above me
B.) Chipset lanes

CPU PCI-E lanes are effectively faster than chipset lanes: a device on CPU lanes talks to the CPU directly, while everything on chipset lanes shares the chipset's single uplink to the CPU (and picks up a bit of extra latency on the way).

Looking at this sample diagram:
-We can see that this is an Intel system with a Z-series chipset.
-The CPU provides the x16 PCI-E lanes for your GPU and also connects directly to the DDR4 RAM and up to 3 independent displays.
-The chipset lanes connect to the USB 3.0 ports, your Ethernet connection, your 6x SATA ports, Intel Rapid Storage Technology, Intel Smart Sound Technology, the Intel ME 11 firmware/BIOS support, etc.

An M.2 NVMe SSD will use chipset lanes (i.e., it hangs off the chipset, the old "southbridge") in this case.
*Note this is not a set-in-stone design; it depends on the manufacturer, and there are other cases where the NVMe SSD will use CPU PCI-E lanes instead of chipset lanes.

Chipset = northbridge + southbridge (historically)

The main difference is that the northbridge was the chip that connected directly to the CPU, while the southbridge was the chip that did not. On modern platforms the northbridge's duties (memory and the main PCI-E lanes) have moved into the CPU itself, so the chipset you see on the board is essentially the old southbridge.

Take a look at this second diagram, this time for an AMD system with the X570 chipset.

[Image: AMD X570 chipset block diagram (PKd9yzr.png)]

(You may have to enlarge the image if it isn't clear.)

We can see that the NVMe M.2 SSD is connected to the CPU via CPU PCI-E lanes, and that it uses 4 of them to do so.
AMD gives you the option to "pick one":

Either use:
-Only one NVMe M.2 SSD, so it fully utilizes those 4x CPU PCI-E lanes. This is the typical setup to go for, and most users with an NVMe M.2 drive will want to fully utilize its speed.

-Or one NVMe M.2 SSD on just 2 CPU PCI-E lanes, with the other 2 lanes going to a couple of SATA ports that connect directly to the CPU. These are probably special SATA ports on the motherboard; the manual will indicate which ones. Notice that in this option the NVMe SSD gets only 2x CPU PCI-E lanes instead of the original 4, so its maximum throughput is cut roughly in half (see the sketch below).
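Here is a tiny sketch of that choice, using the lane counts described above (the exact SATA port mapping is board-specific and assumed here purely for illustration):

```python
# The two ways the CPU's four storage-dedicated lanes can be split on an
# X570-style board, as described above. This just shows the lane arithmetic;
# the exact port mapping depends on the motherboard.

options = {
    "A (full-speed NVMe)": {"NVMe M.2": 4},
    "B (NVMe at half width + CPU-attached SATA)": {"NVMe M.2": 2, "SATA ports": 2},
}

for label, allocation in options.items():
    detail = ", ".join(f"{dev} x{lanes}" for dev, lanes in allocation.items())
    print(f"Option {label}: {detail} (uses {sum(allocation.values())} of the 4 lanes)")
```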

We can also see that 4x USB 3.2 Gen 2 ports and the DDR4-3200 RAM are wired directly to the CPU rather than through the chipset (over their own dedicated interfaces, not PCI-E lanes).

The rest of the components are connected via the chipset.
An easy adage that I use is: "If a device is not connected to the CPU, it must be connected to the chipset".
For example, we don't see where Ethernet connects in this case, but it is most likely hanging off the chipset, because Ethernet doesn't need enough bandwidth to warrant a CPU PCI-E lane.
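If it helps to see the adage and the shared uplink in one place, here's a tiny sketch (the device list and lane widths are made-up assumptions, not taken from any particular board):

```python
# Toy sketch of the adage above: anything not attached to the CPU is attached
# to the chipset, and every chipset device shares the chipset's one uplink to
# the CPU. Device names and lane widths are illustrative assumptions.

UPLINK_LANES = 4  # e.g. the x4 link between the chipset and the CPU

devices = {
    # name: (attached_to, lanes_used)
    "GPU":               ("cpu", 16),
    "NVMe M.2 #1":       ("cpu", 4),
    "NVMe M.2 #2":       ("chipset", 4),
    "SATA ports":        ("chipset", 2),
    "USB / LAN / audio": ("chipset", 3),
}

cpu_devices = [name for name, (where, _) in devices.items() if where == "cpu"]
chipset_lanes = sum(lanes for _, (where, lanes) in devices.items() if where == "chipset")

print("Direct to CPU:", ", ".join(cpu_devices))
print(f"Chipset devices use {chipset_lanes} lanes downstream, but all of that "
      f"traffic funnels through one x{UPLINK_LANES} uplink to the CPU.")
```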
 
Solution
Have CPUs and mobos developed enough to have enough lanes that this frustrating lane-juggling concern goes away? Maybe I don't get the tech (or it's badly worded), but the x16-to-x8 reduction JUST for having something in another slot sounds silly to me. Why not reduce only by the number of lanes actually being used?
The way most serial communication systems work is that incoming bits fill a shift register, and then all the bits are picked up at once at a certain point. You can think of it like a horse race where the horses come out one by one to fill the gate, but are then all released at once.

This becomes a problem when the data flow isn't a power of two wide. For instance, say you have a 16-lane card and a 1-lane card and you split the lanes 15 and 1. Now the 15-lane card has a problem: it can't transfer two bytes at once (assuming 8 lanes lets you transfer a byte at once). So it can do one of two things:
  • Wait another transfer for that last bit to come in, wasting the other 14 lanes it had
  • Continue on anyway, but the hardware has to internally handle "misaligned" bits, i.e., lane 0 used to carry bit 0 of each transfer, but now it carries bit 15, then bit 14, 13, etc.
So you're left with a situation where you either have idling resources or you have to make your transceivers more complicated to handle misaligned data. Of course, you could always have each lane fill a buffer and pull out a byte every 8 transfers, but transfer sizes also like to be powers of 2, so you'd end up with something similar.
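A toy sketch of why power-of-two widths keep things simple: stripe a bit stream across N lanes and look at where the byte boundaries fall on each transfer. With 16 or 8 lanes they always line up; with 15 they drift every cycle. (Purely illustrative; real PCIe striping works on bytes/symbols, not single bits.)

```python
# Toy illustration of the alignment problem: one bit per lane per transfer.
# Real PCIe striping is byte/symbol oriented, so this only sketches the idea.

def byte_boundary_offsets(lanes: int, transfers: int = 8) -> list[int]:
    """For each transfer, how many bits past the last byte boundary it starts
    (0 = the transfer begins exactly on a byte boundary)."""
    offsets = []
    bits_sent = 0
    for _ in range(transfers):
        offsets.append(bits_sent % 8)
        bits_sent += lanes
    return offsets

for lanes in (16, 8, 15):
    print(f"{lanes:>2} lanes: byte-boundary offset per transfer -> {byte_boundary_offsets(lanes)}")
# With 16 or 8 lanes the offset is always 0; with 15 it drifts (0, 7, 6, 5, ...),
# so the receiver must track misaligned bytes or waste lanes waiting.
```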

  • Zero or minimal lane juggling, i.e., ideally, no disabling of SATA or PCIe ports because of M.2 drives, and no x16 getting turned into x8 just because ANOTHER slot has an x1 or x2 card in it. Basically, if I see a port, I expect to be able to use it. Simple. No math games, please, lol.
  • Adequate lanes for: a GPU, a 2nd GPU just for mining (i.e., it could run on an x1), 2+ M.2 drives (is 3-4 realistic? Heard of an $800 mobo with 5, wow), about 4 SATA drives, and, for future expansion, maybe a sound card and whatever random needs or new tech come up.
The thing is, like it or not, you're an outlier. Most people don't put in anything more than 2-3 storage drives and a single graphics card. Trying to cater to the so-called "power user" who must fill every port and slot on mainstream boards is wasteful, because the mainstream doesn't fill up every single thing. The HEDT space is different and there you can usually achieve what you want, but you're also paying out the wazoo for it.