Discussion: AMD PCIe 4.0 on a mainstream motherboard vs HEDT with more PCIe 3.0 lanes

  • Thread starter Deleted member 2783327

Deleted member 2783327

Guest
I've always stuck to HEDT systems because we run a combination of add-in cards that use extra PCIe lanes, such as RAID, Thunderbolt, 10G NICs, SATA, and USB 3.1/3.2 Gen 2 cards. All PCs here have CPUs with at least 28 lanes. Two have 44 lanes, and my new MSI Creator X299 supports 48.

The theory was put forward in another sub-forum that PCIe 4.0 bandwidth may allow me to use mainstream CPUs like AMD's 3950X.

I should mention it has been a goal to migrate my systems to AMD. However, AMD CPUs and motherboards here are just as expensive as, if not more expensive than, the Intel counterparts. Of course we have the pros and cons of team red vs team blue, like overclocking headroom, thermals, stock and boost speeds and so on.

My infrastructure is essentially all 10G (XS512EM + 2 x MS510TX switches), supporting all but one PC downstairs, which is still on gigabit. But to achieve this, the PCs needed 10G NICs. The additional cost for 10G on a motherboard seems to be as much as $400, and 10G won't be mainstream for several years at least, so the need for those add-in cards will persist.

But is it possible that using PCIe 4.0 motherboards could eliminate the need for HEDT motherboards and CPUs because of the extra bandwidth per lane, even though all the M.2 drives (e.g. those in expander cards) and other devices would still have to work with only 16 PCIe 4.0 lanes?

It doesn't seem to me to be possible, but I have very little knowledge on how this works, hence the discussion thread.

The 3960X + motherboard will set me back $3200, and the 3970X will set me back $4200. Intel, on the other hand, will set me back $2400 for the 10980XE.

Power consumption, according to reviews I've seen in many places, will increase my running costs. With 10 PCs that will be converted over time, this could translate into thousands of extra dollars per year in electricity (currently 50 cents per kWh). Then there is potentially additional thermal overhead that needs to be dealt with. So going AMD will likely be much more expensive than staying with Intel (as disappointing as that is).

So, I'd like to start a discussion on PCIe 4.0 on mainstream vs PCIe 3.0 on HEDT systems, and how (if possible) to run multiple add-in cards on a system and still get x16 for the graphics card. Your thoughts?

<Changed thread from question to discussion - G-Unit1111>
 
Last edited by a moderator:

Starcruiser

Honorable
A Ryzen third-generation processor on an X570 motherboard will have up to 36 usable PCIe 4.0 lanes, after accounting for 8 being reserved for various things.
This is more than enough for the vast majority of people, and you'll see plenty of extras on motherboards, like extra M.2 slots, just to try to use more lanes.
Even if you managed to get 2 Radeon 5700 XTs running in x16 4.0 mode, you'd still have 4 lanes left for an M.2 drive.

On the HEDT side of things, the Ryzen Threadrippers on the sTRX4 socket (with the TRX40 chipset) have up to 72 usable PCIe 4.0 lanes after accounting for 16 reserved. That is clearly aimed at uses like crypto mining; nothing else I can think of will come close to using that many, except maybe a specialized server.

More to your discussion point, most devices can't use PCIe 4.0 yet. Honestly, we're just getting to the point where 3.0 x8 is being saturated by high-end GPUs, so it will be some time before more mainstream devices use it. Even once more devices can use it, it still won't make much sense to move everything to version 4. A single PCIe 3.0 lane can carry about 1 GB/s of bandwidth, so a 10 Gb expansion card only needs less than 2 lanes' worth of data rate, even after overhead.
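To put rough numbers on that, a minimal sketch (assuming ~985 MB/s per PCIe 3.0 lane after 128b/130b encoding and a 10 Gb/s line rate for the NIC; figures are approximate):

```python
# Back-of-the-envelope check: how many PCIe 3.0 lanes does a 10 GbE NIC need?
PCIE3_LANE_MBPS = 985        # ~8 GT/s with 128b/130b encoding, roughly 985 MB/s per lane
TEN_GBE_MBPS = 10_000 / 8    # 10 Gb/s line rate, roughly 1250 MB/s before protocol overhead

lanes_needed = TEN_GBE_MBPS / PCIE3_LANE_MBPS
print(f"10 GbE needs about {lanes_needed:.2f} PCIe 3.0 lanes of raw bandwidth")
# -> roughly 1.3 lanes, which is why an x2 or x4 NIC is never lane-starved
```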
 
The theory was put forward in another sub forum that PCIe X4 bandwidth may allow me to use mainstream CPUs like AMDs 3950X
First, it's not PCIe x4 but version 4.0, and second, don't count on it for much longer; the v5 and v6 standards have already been set and are waiting for next-gen chips, and it will likely go straight to v6.0. Also, the number of PCIe lanes is not limited to what the CPU provides. PCIe 4.0 is not yet properly utilized by graphics cards, but NVMe drives can use it right now, with a tendency to get even faster.
 

TJ Hooker

Titan
Ambassador
A Ryzen third-generation processor on an X570 motherboard will have up to 36 usable PCIe 4.0 lanes, after accounting for 8 being reserved for various things.
This is more than enough for the vast majority of people, and you'll see plenty of extras on motherboards, like extra M.2 slots, just to try to use more lanes.
Even if you managed to get 2 Radeon 5700 XTs running in x16 4.0 mode, you'd still have 4 lanes left for an M.2 drive.
16 of the lanes available on an X570 platform are from the chipset, so all of those lanes have to share the x4 bandwidth available between the CPU and chipset. And I believe that any individual slot from the chipset can only be up to x4. So you cannot run x16/x16 with an X570 board.

The max concurrent bandwidth available on an X570 board is equivalent to 24 lanes: x16 to the primary PCIe slot (or divided among several slots, x8/x8 or x8/x4/x4), x4 for an M.2 slot, and x4 for all other PCIe devices combined (through the chipset).

However, most people don't actually need full bandwidth to all their devices at the same time, so having a bunch of peripherals sharing a x4 connection may not be an issue.
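As a rough sketch of that accounting (assuming ~2 GB/s per PCIe 4.0 lane; figures approximate and ignoring protocol overhead):

```python
# Maximum concurrent bandwidth on an X570 board, per the breakdown above.
GBPS_PER_PCIE4_LANE = 2  # GB/s per PCIe 4.0 lane, approximate

x570_links = {
    "primary PCIe slot(s): x16, or x8/x8, or x8/x4/x4": 16,
    "CPU-attached M.2 slot: x4": 4,
    "chipset uplink shared by everything else: x4": 4,
}

total_lanes = sum(x570_links.values())
print(f"Concurrent lanes: {total_lanes} -> ~{total_lanes * GBPS_PER_PCIE4_LANE} GB/s total")
for link, lanes in x570_links.items():
    print(f"  {link}: ~{lanes * GBPS_PER_PCIE4_LANE} GB/s")
```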
 

Deleted member 2783327

Guest
Whether v4, v5 or v6, the point was: with a RAID card, a 10G NIC, and a 2-port USB 3.2 Gen 2 card using all of the available PCIe slots on the motherboard, will the bandwidth available to the graphics card still be greater than or equal to what it gets on a v3 motherboard? Some of these cards use x2, some x4 and some x8.

Then we have another PC which uses a 10G NIC, a four-port M.2 expander card and a 2-port USB 3.2 Gen 2 card. Same question. Yet another has a 10G NIC, a Thunderbolt card and a 4-port SATA expansion card. Same question.

So how do you calculate the total bandwidth requirement, measure it against the available bandwidth on an X570 motherboard, and compare that against a HEDT (Intel or AMD) motherboard?

I keep running into the same brick wall: "graphics cards are the only reason why anyone would need extra lanes". There are no, and never will be, any systems running dual graphics cards here, but contrary to popular opinion, there is a vast array of devices that plug into PCIe slots. Again: RAID, Thunderbolt, USB, SATA and M.2 expansion, just to name a few.

The total lane usage/requirement for the lowest of the PCs is 28 lanes, hence the HEDT standard. The most is 40. I'm hamstrung by the lack of slots on motherboards, typically no more than 4. I have one PC where I could use an extra slot.

I'm trying to understand how this all works. Is it possible to have all these add-in cards using all these lanes on a mainstream platform (X570, as it's currently the only platform supporting PCIe v4), and still have the bandwidth to run them all at least as fast as they run on the HEDT PCIe v3 platforms?

At the same time? It would seem counterproductive to have a graphics card running at x16, only for it to drop to x8 as soon as I run a 30-minute backup across my USB 3.2 Gen 2 card out to my external 12TB Silverstone enclosure. That's just an example.

And yes, we are not talking about "most people". We are talking about very specific use cases. The information is useful to me as I run 10 such systems, and I know a couple of others in the same situation. Surely, understanding this and having the information available in a popular place such as Tom's would be of value to others in the same boat.
 

TJ Hooker

Titan
Ambassador
The first issue is whether or not your RAID/10G Ethernet/graphics cards are available as PCIe 4.0 cards, and whether it makes sense financially to buy the 4.0 models. If you're not willing and/or able to buy 4.0 cards, then the fact that X570 offers PCIe 4.0 doesn't help you much.

As far as calculating max theoretical bandwidth goes, look at how many lanes each CPU has. Your X299 CPUs have 28 or 44 lanes, plus 4 to the chipset. They're all PCIe 3.0, which is ~1 GB/s per lane. X570 is 24 lanes at ~2 GB/s per lane for PCIe 4.0.
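As a minimal sketch of that comparison, using the lane counts and per-lane figures above (approximate):

```python
# Max theoretical platform bandwidth: PCIe 3.0 ~1 GB/s per lane, PCIe 4.0 ~2 GB/s per lane.
platforms = {
    # name: (usable lanes, GB/s per lane)
    "X299, 28-lane CPU (+4 to chipset)": (28 + 4, 1),
    "X299, 44-lane CPU (+4 to chipset)": (44 + 4, 1),
    "X570, Ryzen 3000 (16 + 4 + 4 to chipset)": (24, 2),
}

for name, (lanes, gb_per_lane) in platforms.items():
    print(f"{name}: {lanes} lanes x {gb_per_lane} GB/s = ~{lanes * gb_per_lane} GB/s")
```

On raw numbers the X570 platform lands in the same ballpark as a 44-lane X299 system, but as discussed above, a big chunk of it is funnelled through the chipset's shared x4 uplink.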

At the same time? It would seem counterproductive to have a graphics card running at x16, only for it to drop to x8 as soon as I run a 30-minute backup across my USB 3.2 Gen 2 card out to my external 12TB Silverstone enclosure. That's just an example.
What I said about using things at the same time only applies to devices connected through the chipset, i.e. not your graphics card. Let's say you have a PCIe 3.0 x4 SSD and your 2-port USB 3.2 10 Gbps card connected through the chipset. In theory those two cards could use up to 52 Gbps of bandwidth (32 for the NVMe card, 20 for the USB card), whereas the chipset link would only provide up to 32 Gbps for both of them.

But how likely is it that both of those cards are actually going to need anywhere close to that? Remember that speed is also limited by the actual device using the PCIe link. An SSD is unlikely to achieve anywhere near 32 Gbps in any real-world scenario (and if it does, it will usually only be for a short time, as SSDs typically can't sustain those kinds of speeds). And how often are you going to be maxing out both 10 Gbps ports on your USB card? If your 12TB enclosure consists of HDDs, your speed there is going to be limited by disk I/O rather than interface speed.

In that example, the impact of having to share bandwidth might have little to no effect.
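A small sketch of that contention math (assuming the ~32 Gbps chipset link from the example above; real devices rarely sustain their interface maximums):

```python
# Worst-case demand vs. chipset uplink capacity for the example above.
uplink_gbps = 32  # roughly an x4 PCIe 3.0-class link

device_peak_gbps = {
    "PCIe 3.0 x4 NVMe SSD": 32,
    "2-port USB 3.2 10 Gbps card": 20,
}

total_demand = sum(device_peak_gbps.values())
print(f"Theoretical peak demand: {total_demand} Gbps vs uplink: {uplink_gbps} Gbps")
print("Oversubscribed on paper" if total_demand > uplink_gbps else "Fits within the uplink")
# In practice, sustained SSD throughput and real USB transfers sit well below
# these peaks, so the shared uplink is often not a bottleneck.
```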

And yes, we are not talking about "most people". We are talking about very specific use cases. The information is useful to me as I run 10 such systems, and I know a couple of others in the same situation. Surely, understanding this and having the information available in a popular place such as Tom's would be of value to others in the same boat.
Ok, but you didn't actually tell us what your "very specific use case" is. We know you use a bunch of add-in cards, but not how you use them.
 
Last edited:

TJ Hooker

Titan
Ambassador
From a quick bit of searching it looks like USB 10Gbps cards are typically PCIe 3.0 x4, as are 10 Gbps NICs. Does that sound right?

Which RAID card do you have, or how many lanes does it use? Is your 4 port M.2 expander for NVMe or SATA drives? If it's the former, I'd imagine it's a x16 card. Do you actually use all 4 ports on the card?
 

Deleted member 2783327

Guest
We're getting away from the original point of the discussion. PCIe v4 is backwards compatible. I'm not talking about upgrading all my devices to PCIe v4. I'm talking about using all of the existing PCIe v3 cards on a PCIe v4 motherboard that has fewer than 28 lanes.

The question seems to be: how are PCIe lanes allocated? If I plug in an x4 device, are those 4 lanes allocated to that device, regardless of whether there is traffic through the device or not? So if I plug in devices whose lane "specifications" are x4 or x8, are those lanes no longer available to the rest of the system?

If I therefore have an x16 graphics card, an x4 USB 3.2 Gen 2 card, an x4 10G NIC and an x8 RAID card, I've used 32 lanes. How will that fare on an X570 motherboard?

Or are lanes only allocated at the time of actual use, with the number of lanes used depending on the volume of traffic going through the device at that time? So if I have an x4 USB 3.2 Gen 2 card and I'm transferring data at 230 MB/s sustained (which is what I actually get writing to an external drive via these cards), then how many lanes are taken away from the CPU's total? All 4, or only as many as needed to provide the 230 MB/s (about 1.84 Gbps)?

So, if I have all of these cards plugged into a CPU that supports 20 or 24 lanes, and the total lane specification of all of those cards is 28 or greater, am I hamstrung?

The M.2 expander cards are x4 for the 2-port variety and x8 for the 4-port variety. The 10G NICs are x4, the USB 3.2 Gen 2 cards are x4, and the SATA cards are x2 or x4 depending on how many ports they have.

I really don't know how else to phrase this.
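(For reference: in general, a card's link width is negotiated once when the link trains, based on the narrower of the card and the slot, rather than lanes being handed out dynamically according to traffic. A quick tally of the example card mix above against a typical X570 layout, as a sketch:)

```python
# Tally one example card mix (from the posts above) against the electrical
# lanes a typical X570 board exposes: x16 from the CPU for graphics, x4 CPU
# M.2, and chipset slots that are at most x4 each behind a shared x4 uplink.
cards = {
    "graphics card": 16,
    "USB 3.2 Gen 2 card": 4,
    "10G NIC": 4,
    "RAID card": 8,
}

total_requested = sum(cards.values())
print(f"Total lanes the cards ask for: {total_requested}")  # 32

# On a typical X570 board the x8 RAID card would sit in a chipset slot and
# train at x4 (or at x8 in a second CPU slot on boards that split the GPU
# slot x8/x8). The cards all still work -- PCIe negotiates a narrower link --
# but their combined peak bandwidth is capped by the shared x4 chipset uplink
# rather than by the lane counts printed on the boxes.
```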
 

Deleted member 2783327

Guest
Actually this kind of puts things in perspective.

Ryzen 3950X (16+4+4 PCIe 4.0 lanes) $1300 + X570 motherboard (MSI) $1200 = $2500.
Intel i9-10980XE (48 PCIe 3.0 lanes) $1700 + X299 MSI Creator motherboard $900 = $2600 (10 SATA, onboard 10G LAN).

Threadripper 3960X (64 PCIe 4.0 lanes) $2200 + MSI Creator TRX40 $900 = $3100 (6 SATA, 3x M.2, onboard 10G LAN).
Threadripper 3970X (64 PCIe 4.0 lanes) $3200 + MSI Creator TRX40 $900 = $4100 (6 SATA, 3x M.2, onboard 10G LAN).

All run 2x M.2 off the CPU and 1x M.2 off the chipset. 4 of the 10 PCs have 3x M.2 NVMe drives (I don't use SATA SSDs).

The 3950X has a single-digit performance lead over the 10980XE at $100 less, but I lose 4 SATA ports (more if I use 3x M.2), and I would still have to have a 10G NIC. The Intel gives me 4 more SATA ports and 10G onboard, so I can remove that add-in card.

I don't know whether I could have put all the add-in cards on an X570; that still remains to be answered. But in terms of cost per PCIe lane, the choice is clear.
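For what it's worth, a quick cost-per-lane tally of the combos above (prices and lane counts as listed; this deliberately ignores the per-lane bandwidth difference between PCIe 3.0 and 4.0):

```python
# Cost per PCIe lane for the CPU + motherboard combos listed above.
combos = {
    "Ryzen 3950X + X570":        (2500, 24),  # 16 + 4 + 4, PCIe 4.0
    "i9-10980XE + X299 Creator": (2600, 48),  # PCIe 3.0
    "TR 3960X + TRX40 Creator":  (3100, 64),  # PCIe 4.0
    "TR 3970X + TRX40 Creator":  (4100, 64),  # PCIe 4.0
}

for name, (price, lanes) in combos.items():
    print(f"{name}: ${price} / {lanes} lanes = ~${price / lanes:.0f} per lane")
```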