News PCIe standards group releases draft specification for PCIe 7.0 — full release expected in 2025

I think what you're missing is that downstream adoption times are relatively steady. It takes around 3 years from a version's release for each stage of the downstream pipeline to adopt the technology.
No, the standards haven't been released at a very steady pace, nor is the adoption pipeline as long or consistent as you say.

PCIe 3.0 was finalized in 2010 and first implemented in Intel server CPUs in 2011.
PCIe 4.0 was finalized in 2017 and first implemented in IBM POWER systems in 2018. AMD followed in 2019 (Intel's implementation was hampered by delays affecting its Ice Lake server CPUs).
PCIe 5.0 was finalized in 2019 and first implemented by Intel in 2021's Alder Lake.
PCIe 6.0 was finalized in 2022 and has yet to be implemented by any CPUs currently on the market.

They can't NOT announce when the specification is ready,
The PCI-SIG is an industry consortium and their pace is somewhat driven by industry needs and priorities. I can't say exactly why PCIe 4.0 took so long, but the rapid pace of progress since then has definitely been driven by urgent demands from modern datacenter networking and AI, at least according to the messaging in their press releases.

For your benefit, I'd recommend tuning out the specification announcements and waiting for the first motherboard announcement. Then you'll never have this 'waiting' problem.
Of course, the fact that we're reading these articles suggests most of us like seeing what's coming, well before it actually arrives. With PCIe 7.0, it's conceivable we might never see it in a consumer platform. By the time these systems or devices ever need so much bandwidth, PCIe might already have been superseded by something else.
 
A simple example: Zen 4 HX - it's already more than 2 years old (yet the 7945HX is still the fastest mobile x86 processor in the world). It has 28 PCIe 5.0 lanes (24 usable). The video card is most often connected via 4.0 x8, at best x16 - i.e. the equivalent of just 8 5.0 lanes. M.2 5.0 x4 slots are pointless in laptops - the controllers draw too much power and the NAND chips overheat in the dismal airflow around laptop M.2 slots (an a priori flaw in how laptop cases and motherboards are laid out), and there's no room for effective, large heatsinks (or else the noise goes up).
I'm not sure why you're focusing on laptops at all, since desktops are clearly the consumer platform with the most advanced PCIe implementations. FWIW, Ryzen 9950X can achieve 77.7 GB/s of real memory (read) bandwidth, using DDR5-6000 memory.
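For reference, that measurement is against a theoretical peak you can compute directly. A quick sketch of my own, assuming a standard 2x64-bit DDR5 configuration:

```python
# Theoretical peak bandwidth of dual-channel DDR5-6000:
# transfers/s x bytes per transfer x channels.
mt_per_s = 6000e6        # DDR5-6000 does 6e9 transfers per second
bytes_per_xfer = 8       # each 64-bit channel moves 8 bytes per transfer
channels = 2
peak = mt_per_s * bytes_per_xfer * channels / 1e9
print(peak)              # 96.0 GB/s theoretical
print(77.7 / peak)       # ~0.81 -- the measured reads hit ~81% of peak
```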

In fact, Zen 4 HX uses less than half of the 24 available lanes in 5.0 mode. Besides, 28 5.0 lanes add up to more than 100 GB/s, if all are running at once.
First, it can only run 24 lanes in PCIe 5.0 mode. The nominal, unidirectional data rate of that would indeed be ~96 GB/s. Add in the x4 chipset link running at 4.0 speed and you get 104 GB/s. However, this is a fictitious use case that basically never happens in the real world. I'll agree that PCIe 5.0 doesn't make a whole lot of sense in a consumer platform, right now.

As for lane count vs. memory bandwidth, I once had an AMD 890FX board with 42 PCIe 2.0 lanes! That's ~21 GB/s of aggregate (unidir) bandwidth, which is beyond the real memory bandwidth of that platform and certainly beyond the HyperTransport 3.0 link, which I think is limited to less than 17.6 GB/s (unidir). Just to put another data point out there.
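For anyone who wants to check these numbers, here's a quick back-of-the-envelope sketch (my own, using the published nominal per-generation rates and ignoring protocol overhead beyond the line encoding):

```python
# Nominal unidirectional PCIe bandwidth, per generation.
# (GT/s per lane, line-encoding efficiency) -- published figures.
GENS = {
    2.0: (5.0, 8 / 10),      # 8b/10b encoding
    3.0: (8.0, 128 / 130),   # 128b/130b encoding
    4.0: (16.0, 128 / 130),
    5.0: (32.0, 128 / 130),
}

def lane_gbps(gen: float) -> float:
    """Nominal unidirectional GB/s for one lane of a given generation."""
    gt_per_s, efficiency = GENS[gen]
    return gt_per_s * efficiency / 8  # 8 bits per byte

# 24 CPU lanes at 5.0 plus the x4 chipset link at 4.0 speed (the case above):
total = 24 * lane_gbps(5.0) + 4 * lane_gbps(4.0)
print(f"{total:.0f} GB/s")  # ~102 GB/s (~104 if you round a 5.0 lane to 4 GB/s)

# 42 lanes of PCIe 2.0 (the old 890FX board):
print(f"{42 * lane_gbps(2.0):.0f} GB/s")  # 21 GB/s
```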

And what do we see? Zen 5 Halo is the first consumer x86 part to get a 256-bit memory controller - which really should have appeared in Zen 4 HX 2 years ago. That would have made it easy to service all 28 PCIe 5.0 lanes, with headroom left for the OS and software - those need to run too...
That's a laptop chip, though. It's really not designed to undertake heavy I/O loads. You should instead focus on Threadripper or Epyc.

To service 7.0, you need RAM with a bandwidth of 1TB/s, and only HBM3 controllers with a 1024-bit bus provide this.
Even at 28 lanes, it's still just 448 GB/s (unidir), but just because PCIe is a switched fabric doesn't mean systems are (or even should be) designed to support the max theoretical aggregate throughput. Especially in consumer devices, I/O tends to be very bursty and only one device is usually bursting at a time.
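PCIe 6.0 and 7.0 switch to PAM4 signaling, and the published GT/s figure already accounts for the 2 bits per symbol, so a rough per-lane estimate (ignoring flit/FEC overhead - my simplification) is just GT/s divided by 8:

```python
def pam4_lane_gbps(gt_per_s: float) -> float:
    """Approximate unidirectional GB/s per PAM4-era lane (flit/FEC ignored)."""
    return gt_per_s / 8

lanes = 28
per_lane = pam4_lane_gbps(128)      # PCIe 7.0: 128 GT/s -> 16 GB/s per lane
print(lanes * per_lane)             # 448.0 GB/s, one direction
print(lanes * per_lane * 2)         # 896.0 GB/s both directions -- roughly
                                    # the ~1 TB/s HBM3 figure quoted above
```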

5.0 SSDs are not needed by anyone in the consumer segment at all, even in the gaming segment
I generally agree that they're overkill, offering only marginal loading-time benefits in some of the slowest-loading games. However, new controllers should help with power consumption.

Here's a review of a drive using the Silicon Motion SM2508 controller.

Peak power is still rather high, at 9.13 W, but the average (50 GB folder copy test) is just 4.98 W, which I think puts it towards the upper end of the range of PCIe 4.0 drives.

For video cards, a wider bus only matters if system RAM is roughly (empirically) 3 times faster than that link. That is, if you want to upload data to dGPU VRAM at 128 GB/s, i.e. PCIe 6.0 x16 (a 5090 has 1700+ GB/s of VRAM bandwidth), you'd need at least a ~350 GB/s host memory bus for general tasks, so that everything runs smoothly and efficiently in parallel with the dGPU.
I'd love to see some data supporting this assertion. I think it's pretty far off the mark.
 
AFAIK nothing even saturates a 5.0 link
Need to distinguish between client vs. server. In client PCs, the only PCIe 5.0 we've seen are SSDs and some of the faster ones can peak near the practical bandwidth limits of PCIe 5.0, for what that's worth.

With video cards, the case is far weaker. This article looks at the effect of PCIe speed on game performance, using an RTX 4090 and a Raptor Lake i9. At best, PCIe 4.0 x16 can give you a few extra percent. With PCIe 5.0, we'll be even further down the curve of diminishing returns.


If games still had good multi-GPU support, then PCIe 5.0 would be useful for letting you run two graphics cards at PCIe 5.0 x8 each.

I guess the other gaming-related argument for faster PCIe speeds is when you're using settings that overstress your graphics card's video memory, such that assets can't be kept in GPU memory for long and continually have to be re-fetched from host memory. However, even though faster PCIe will benefit such cases, it probably still won't help enough to make them very playable.

Seems like a pointless advancement that's just costlier to implement and makes stuff more expensive for the end user
These faster PCIe speeds are only aimed at servers (for the foreseeable future, at least). You can safely ignore it.
 
Since the '80s, people have been saying "We don't need that" as computers advanced. Things like: "Any monitor greater than 14" is too big", "We don't need that much memory", "We don't need more than 16 colors", "We don't need anything more than a beeper speaker for sound output", "A 286 is more than anyone will need", "AGP ports are more than enough" (insert every other card slot standard since then).
It's different when you've already got capacity you're not close to utilizing. PCIe 4.0 would be enough for today's client PCs - even PCIe 5.0 is slightly overkill. There's plenty of room left to grow, before we'd even need to think about advancing client PCs to the next step on the ladder.

Just because we currently don't have the hardware/software to support a new standard does not mean we should reject that new standard. I want standards groups to keep ahead of what is currently supported.
A standard costs money to develop and implement. It's also rooted in the technology that's available at the time. If they tried to achieve the same goal at a later point in time, their approach might've differed, because they'd have a different arsenal of technology available.

So, it's a little bit silly to say that we should just develop faster standards for its own sake, not that I think that's remotely what's going on. The need for faster PCIe and CXL exists, just not in client platforms.

But the marketing numbers must go up. People will buy 5.0 capable hardware, just because it says 5.0.
That's sort of my theory on why Intel put PCIe 5.0 in LGA1700. They got embarrassed by AMD beating them to PCIe 4.0 by about a year.

The other reason might've been simply to have a PCIe 5.0 development platform. Either way, they got ahead of what the market really needed and a lot of people ended up wasting money on higher motherboard prices for a capability they might never need.

In terms of cost, 6.0/7.0 x4 links might actually let them simplify lower-end GPUs. Right now x8 is mid-range, but x4 will likely become mid-range too, especially for mobile GPUs.
In terms of power, it's usually more efficient to run a wider link at a lower speed. PCIe 6.0 might be a win for power efficiency, since it gets the extra speed from PAM4 instead of a clockspeed bump. However, that also means it requires a better signal-to-noise ratio, and that probably means more expensive PCBs.
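To make the wider-vs-faster point concrete, here's a small sketch (rounded nominal figures, my own) of which generation/width combinations land on the same bandwidth:

```python
# Rounded nominal unidirectional GB/s per lane, by generation.
PER_LANE_GBPS = {3.0: 1.0, 4.0: 2.0, 5.0: 4.0, 6.0: 8.0}

def link_gbps(gen: float, width: int) -> float:
    """Nominal unidirectional bandwidth of a link."""
    return PER_LANE_GBPS[gen] * width

# A 4.0 x16 link and a 5.0 x8 link move the same ~32 GB/s:
print(link_gbps(4.0, 16), link_gbps(5.0, 8))    # 32.0 32.0
# ...and a 6.0 x4 link matches both, which is what could make
# x4 low-end and mobile GPUs viable:
print(link_gbps(6.0, 4))                        # 32.0
```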
 
You know what would solve a lot of problems with desktop mobos' lack of PCIe lanes and the ever-increasing pin counts needed to add more? A PLX switch.
You know who's bought out and cornered the market for those, and priced them sky-high? Avago/Broadcom.
Like @thestryker said, Intel & AMD could simply boost the speed of the link to their chipset, which already contains a PCIe switch. Intel is in pretty good shape at PCIe 4.0 x8, but AMD is actually hurting a little from keeping this link at PCIe 4.0 x4, not to mention the whole daisy-chaining thing they do on higher-end AM5 boards.

If you put a PCIe 4.0 x4 SSD in a chipset-connected M.2 slot of an AMD motherboard, there's a modest (but real) performance deficit vs. using a CPU-direct link. I'd rather AMD had just beefed up its chipset link in AM5 than given us so many CPU-direct, PCIe 5.0-capable lanes.
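As a toy illustration of why the chipset-attached slot falls short, here's a sketch of the uplink oversubscription. The downstream device list and its numbers are hypothetical - just typical chipset-attached hardware:

```python
# Toy oversubscription check for AMD's PCIe 4.0 x4 chipset uplink.
# All figures are nominal; the downstream device list is made up.
PER_LANE_GBPS = {4.0: 2.0}

uplink = 4 * PER_LANE_GBPS[4.0]                   # ~8 GB/s each direction

downstream = {
    "M.2 SSD (4.0 x4)":   4 * PER_LANE_GBPS[4.0], # ~8 GB/s on its own
    "10G NIC":            1.25,
    "USB / SATA / misc":  2.0,
}

demand = sum(downstream.values())
print(f"uplink {uplink:.1f} GB/s vs peak downstream demand {demand:.2f} GB/s")
# The SSD alone can saturate the uplink, so any concurrent chipset traffic
# (plus the uplink's own protocol overhead) eats into its throughput.
```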

 
PCIe and general motherboard interconnects are outdated.
We need a new system of multiple big bus routes, many buses, many lanes, all defined in the BIOS.

As a very basic example - and I'm sure somebody more technical can flesh it out:

Say you're designing a server motherboard with 3 buses and 128 lanes:

Bus 1 .. 16 lanes
1. 1x lane for simple gpu
2. 2x lane for multiple USB2's, Kbd/Mse, UPS connection and the like
3. 4x for 2x 1G LAN NICS + 1G Remote NIC
4. 6x lanes for other stuff
5. 3x for motherboard interconnect

Bus 2 .. 64 lanes
1. 48x lanes for CPU-2-Memory
2. 16x for motherboard interconnect

Bus 3 .. 48 lanes
1. 32x lanes for multiple storage controllers
2. 8x for other stuff
3. 16x for motherboard interconnect
Other than memory, I think you pretty much described what PCIe does.
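Just to make the quoted layout concrete, here it is as data with a lane-budget sanity check (labels paraphrased from the list above; note that Bus 3's sub-allocations actually sum to 56 against its stated 48 lanes):

```python
# The hypothetical bus layout from the post above, as data.
BUSES = {
    "Bus 1": (16, {"simple GPU": 1, "USB2 / kbd / mouse / UPS": 2,
                   "2x 1G LAN + remote NIC": 4, "other stuff": 6,
                   "board interconnect": 3}),
    "Bus 2": (64, {"CPU-to-memory": 48, "board interconnect": 16}),
    "Bus 3": (48, {"storage controllers": 32, "other stuff": 8,
                   "board interconnect": 16}),
}

for name, (budget, allocs) in BUSES.items():
    used = sum(allocs.values())
    print(f"{name}: {used}/{budget} lanes ({'OK' if used <= budget else 'OVER'})")
# Bus 1: 16/16 OK, Bus 2: 64/64 OK, Bus 3: 56/48 OVER
```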
 
I did a double take when I read the title.

AFAIK PCI-e 5 only recently had affordable NVMe drives put up for sale (last year's offerings seemed too pricey for anyone but the enterprise sector), and modern GPUs on PCI-e 5 still don't even come close to using the bandwidth that it offers.

6 currently exists and has done since 2022... but not for consumers in any way at all, so to say that 7 will be ready in 2025 seems like redundant news for the vast majority of people if we won't even get devices that can use it for another 3 years. And even then, it seems like device manufacturers are struggling to make devices that can fully utilise the available speed - even the upcoming Nvidia 50-series is still on 5 rather than 6.

I'm probably wrong on this, but it feels like PCI-SIG is outpacing manufacturers so fast that the "every three years" plan is too quick for anyone to catch up. Even taking into consideration that they will design future products around the extra bandwidth, I get the impression that PCI-SIG will be "releasing" PCI-e 12 by the time we finally get devices that can take advantage of 7, and the gap is just going to keep getting larger.

I'm waiting for the day they announce they are ready to release PCI-e 50 by the time we get devices that support 9.
Agreed. Only thing: pretty sure there aren't any GPUs currently using PCIe 5.0. The first will be the 50-series coming out at the end of the month? So yeah, for them to say that PCIe 7 is in the wild is completely useless...