PCIe 6.0 Spec Release Date on Track, Version 0.5 Announced

Feb 6, 2020
Given that AMD and Intel have both mentioned plans to move to PCIe 5.0 from 2021 onward, I wonder whether they will stay with PCIe 5.0 or jump to 6.0 a year or two later.

While it makes sense to future-proof hardware, I wonder if it would be more practical to make their CPUs "PCIe 6.0 ready" but, on motherboards, split that bandwidth across more PCIe 5.0 or even PCIe 4.0 lanes. That could bring back the age of up to 7 x16-sized slots on XL-ATX motherboards, or more staggered 4 x16-sized-slot boards (like the TRX40 Aorus Extreme, which requires 8-9 slots), with NVMe drives between the slots.

For one, I recall that PCIe 4.0 is already expensive due to the signal-repeater chips needed for full slot coverage, which is what limited full PCIe 4.0 support to X570 and TRX40 motherboards, while the B550A and B550 will supposedly run only the first M.2 and the primary PCIe x16 slot at 4.0.

For another, while AMD has lit a fire under makers of PCIe add-on cards to innovate for PCIe 4.0, it will still be a long time before any of them can really use all of that bandwidth, so downgrading the CPU's PCIe 5.0 or 6.0 to 4.0 while adding more slots seems like a reasonable middle ground.
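Since each PCIe generation roughly doubles per-lane bandwidth, that tradeoff is easy to quantify. Here is a minimal sketch in Python using the commonly quoted approximate per-lane figures (encoding overhead ignored; the table and function name are illustrative, not from any real API):

```python
# Approximate usable bandwidth per PCIe lane, in GB/s
# (encoding overhead ignored; real figures are slightly lower).
PER_LANE_GBPS = {"3.0": 1.0, "4.0": 2.0, "5.0": 4.0, "6.0": 8.0}

def slot_bandwidth(gen: str, lanes: int) -> float:
    """Total one-direction bandwidth of a slot, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# A single PCIe 6.0 x16 link carries roughly 128 GB/s, which could
# instead feed twice as many 5.0 lanes or four times as many 4.0 lanes.
budget = slot_bandwidth("6.0", 16)
print(budget)                                  # 128.0
print(budget / slot_bandwidth("5.0", 16))      # 2.0 (two 5.0 x16 slots)
print(budget / slot_bandwidth("4.0", 16))      # 4.0 (four 4.0 x16 slots)
```

This is the arithmetic behind the "split the bandwidth across more slower lanes" idea: the total stays the same, only the fan-out changes.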
 
Reactions: bit_user

bit_user

I wonder if it's more convenient to just have their CPUs "PCIe 6.0 ready" but on motherboards, split the bandwidth across more PCIe 5.0 or even PCIe 4.0 lanes.
If they continue having a separate I/O die, then desktop CPUs won't just inherit PCIe 5.0+ "for free".

As you rightly point out, faster interconnect speeds are a costly proposition. I don't see it coming to desktops, though it'd be interesting if they upgraded the CPU <-> chipset link to PCIe 5.0 or PCIe 6.0. As you point out, that could enable more/wider/faster chipset-connected slots.
 

srimasis

So is it possible to have proper SLI or Crossfire over PCIe 6.0 without any bridge, since GDDR6 bandwidth is 72 GB/s and PCIe 6.0 bandwidth is 128 GB/s? I hope to see proper dual-GPU compatibility, with two 8 GB GPUs acting as one 16 GB GPU.
 

InvalidError

So is it possible to have proper SLI or Crossfire over PCIe 6.0 without any bridge, since GDDR6 bandwidth is 72 GB/s and PCIe 6.0 bandwidth is 128 GB/s?
72 GB/s for GDDR6? For an RX 5700 XT that's 16 GT/s x 256 bits / 8 = 512 GB/s.
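The arithmetic behind that correction is just transfer rate times bus width, divided by eight bits per byte. A quick illustrative check (the helper function is hypothetical, assuming the 256-bit bus mentioned above):

```python
def gddr_bandwidth(gt_per_s: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers/s x bits per transfer / 8."""
    return gt_per_s * bus_bits / 8

print(gddr_bandwidth(16, 256))  # 512.0 GB/s at 16 GT/s
print(gddr_bandwidth(14, 256))  # 448.0 GB/s at 14 GT/s
```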

Even if mainstream CPUs got PCIe 6.0, it would still be split x8/x8, so you lose half the bandwidth there. Then you also lose performance to the GPU-to-host-to-GPU round trips and the other overheads inherent to accessing remote memory pools.

If you want to make multiple GPUs behave like a single larger one, you'll need even more bandwidth and lower latency than that to account for all of the other internal processes that require bandwidth too but never have to leave the die in single-GPU setups.
 
Reactions: bit_user

bit_user

72 GB/s for GDDR6? For an RX 5700 XT that's 16 GT/s x 256 bits / 8 = 512 GB/s.
FWIW, I guess AMD is using lower-clocked GDDR6 than your 16 GT/s figure assumes, because the rated spec of the RX 5700 XT is 448 GB/s. That detail aside, I agree with your basic point, which seems to be that one needs to look at the bandwidth of the whole GPU.

If you want to make multiple GPUs behave like a single larger one, you'll need even more bandwidth and lower latency than that to account for all of the other internal processes that require bandwidth too but never have to leave the die in single-GPU setups.
Yeah, I think latency is a killer. That is, if you want to use multiple GPUs to reach really high frame rates.

However, if you're trying to use multiple older or lower-end GPUs to make a game barely playable, then I think multi-GPU setups are potentially a lot more interesting.
 
If they continue having a separate I/O die, then desktop CPUs won't just inherit PCIe 5.0+ "for free".

As you rightly point out, faster interconnect speeds are a costly proposition. I don't see it coming to desktops, though it'd be interesting if they upgraded the CPU <-> chipset link to PCIe 5.0 or PCIe 6.0. As you point out, that could enable more/wider/faster chipset-connected slots.
Using AMD as an example, I meant something like a theoretical "6000" or "7000" series Ryzen/Threadripper CPU having PCIe 6.0 on-die, but with motherboards outside of specialized enterprise/commercial boards downgrading that to PCIe 5.0 or even 4.0 bandwidth to allow for more slots, while still providing some future-proofing via later motherboards (easier for AMD, which shares a socket across 3-4 chipset generations; questionable for Intel).

Especially since, by the time PCIe 6.0 is actually adopted, 4.0 and 5.0 signal repeaters will likely have come down in cost thanks to aggressive adoption by AMD and Intel down to the consumer level, or advances in materials will allow boards that need fewer (or no) repeaters for 4.0 or 5.0 traces, making them cheaper than what 6.0 requires. Further, no company yet has any consumer-level way of actually saturating PCIe 4.0, much less 5.0, and likely won't for at least another 5-10 years.

On the middle/mainstream end of motherboards (B-series for AMD), we could maybe even see the on-die PCIe 6.0 downgraded further, to mostly PCIe 3.0 for all slots except the first M.2 and the GPU slot: say a PCIe 4.0 M.2 and x16 slot, with the remainder full-length x16 or x8 PCIe 3.0 slots. Low-end motherboards (AMD A-series) might see the 6.0 downgraded to all 3.0, but that would still be useful for people porting over older systems that rely on older PCIe 2.0 and 3.0 cards (specialized audio and video cards, old networking cards, etc.), while providing full x16 across several slots.
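Using rough per-lane figures of 1/2/4/8 GB/s for PCIe 3.0 through 6.0, one way to sanity-check such a layout is to total the downstream slot bandwidth against the CPU's PCIe 6.0 x16 budget. This is purely a back-of-the-envelope sketch; the layout and numbers are illustrative assumptions, not a real board:

```python
# Approximate per-lane bandwidth by PCIe generation, in GB/s.
PER_LANE = {"3.0": 1.0, "4.0": 2.0, "5.0": 4.0, "6.0": 8.0}

budget = PER_LANE["6.0"] * 16  # CPU's PCIe 6.0 x16 budget: 128 GB/s

# Hypothetical mainstream-board layout described above.
layout = [("4.0", 16, "GPU slot"),
          ("4.0", 4,  "first M.2"),
          ("3.0", 16, "full-length slot"),
          ("3.0", 16, "full-length slot"),
          ("3.0", 16, "full-length slot")]

used = sum(PER_LANE[gen] * lanes for gen, lanes, _ in layout)
print(used, budget - used)  # 88.0 GB/s used, 40.0 GB/s to spare
```

So even a fairly generous slower-lane fan-out leaves headroom in the budget, which is the appeal of downgrading a fast on-die link into more, wider, slower slots.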
 

bit_user

Especially since by the time PCIe 6.0 becomes an actual adopted thing, 4.0 and 5.0 signal repeaters will likely have come down in costs some due to aggressive adoption by AMD and Intel down to the consumer level
5.0 needs retimers. As I'm not an electronics engineer, I can't comment on the potential for cost reductions, but they'll never cost as little as not having any.

One of the articles below puts the cost of PCIe 4.0 x16 retimers at $15 to $25 (parts cost - not final selling price). Presumably, PCIe 5.0 retimers would cost even more, given the need to run at double the frequency. Of course, adding any components has its own costs in terms of assembly and possibly board layer count.

or advances in material design would be advanced enough allowing for boards not needing as many (or not needing any) repeaters for 4.0 or 5.0 traces
You mean like room temperature superconductors? That would indeed be a game changer.

Short of that, I don't really see how material design can have an impact. With so many decades of R&D having already gone into PCB technology & design techniques, I find it hard to believe there are still refinements big enough to avoid the need for retimers, or even to bring board costs & complexity down to the level we currently have with PCIe 4.0 boards.

For more on the added costs & technical challenges posed by PCIe 5.0, it's best to have a look at the electronic design trade publications. These articles are aimed at the design engineers tasked with using these technologies in real products.

I will say the discussion of cables is interesting. If there would be a sufficient need for the bandwidth, that could be the way the industry goes.

On the middle/mainstream end of motherboards (B-series on AMD), we could maybe even see an on-die PCIe 6.0 downgraded further to mostly PCIe 3.0, for all slots except the 1st M.2 and GPU slot. So maybe PCIe 4.0 M.2 and x16 slot, and the remainder x16 or x8 full-length PCIe 3.0 slots.
The only possibility I'm seriously entertaining is PCIe 5.0 between the CPU and chipset, and only because the chipset is soldered and can be located near the CPU. Perhaps 6.0 could eventually follow, if there's a sufficient need to justify the additional complexity and power consumption.

It's hard to see a use case, though. NVMe SSDs already have faster sequential speeds than desktop users actually need, so we'd have to see stronger market demand for multi-GPU setups than I think we've seen lately.
 
Reactions: TechLurker

InvalidError

Just as a 386DX was "for server rooms only"...this too shall eventually trickle down to the consumer space.
Eventually.
Due to the signal-integrity challenges of pushing 50+ Gb/s over consumer-friendly PCB materials, my bet is that the consumer space will go with optical plug-in interconnects first. If PCIe 6.0 makes it to consumer boards, I doubt it'll go beyond chips soldered to the motherboard like the chipset.
 
Reactions: bit_user

bit_user

That would be quite the feat, given that it would be an order of magnitude smaller than a hydrogen atom, which has a radius between 0.025 and 0.086 nm.
For sure. I was simply trying to illustrate that every trend breaks down.

Eventually.


The trick to extrapolating effectively is not to do it blindly, but to use it in a context where you know whether the underlying principles behind the trend will still hold.

...but, saying it that way is boring and doesn't grab people's attention. I probably should've gone with clock speeds, instead. I'll remember that for next time.

Blind extrapolation seems to be used most heavily in business. <insert rant about MBAs>

 
Reactions: velocityg4
