News: PCIe standards group releases draft specification for PCIe 7.0 — full release expected in 2025

PCIe and general motherboard interconnects are outdated.
We need a new system of multiple big bus routes: many buses, many lanes, all defined in the BIOS.

As a very basic example (I'm sure somebody more technical can flesh it out):

Say you're a server motherboard that has been designed with 3x buses and 128 lanes:

Bus 1 .. 16 lanes
1. 1x lane for a simple GPU
2. 2x lanes for multiple USB 2.0 ports, keyboard/mouse, a UPS connection and the like
3. 4x for 2x 1G LAN NICs + a 1G remote-management NIC
4. 6x lanes for other stuff
5. 3x for motherboard interconnect

Bus 2 .. 64 lanes
1. 48x lanes for CPU-2-Memory
2. 16x for motherboard interconnect

Bus 3 .. 48 lanes
1. 32x lanes for multiple storage controllers
2. 8x for other stuff
3. 16x for motherboard interconnect
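A quick way to sanity-check a lane budget like the one above is to sum each bus's sub-allocations against its advertised lane count. This minimal sketch (names and numbers taken straight from the post; the helper is hypothetical) flags that Bus 3's items actually sum to 56 lanes, not 48:

```python
def check_bus(name, advertised_lanes, allocations):
    """Sum a bus's sub-allocations and compare against its advertised lane count."""
    used = sum(allocations.values())
    return name, advertised_lanes, used, used == advertised_lanes

buses = [
    ("Bus 1", 16, {"gpu": 1, "usb/kbd/mse/ups": 2, "nics": 4, "other": 6, "interconnect": 3}),
    ("Bus 2", 64, {"cpu-to-memory": 48, "interconnect": 16}),
    ("Bus 3", 48, {"storage": 32, "other": 8, "interconnect": 16}),
]

for name, lanes, alloc in buses:
    _, _, used, ok = check_bus(name, lanes, alloc)
    print(f"{name}: {used}/{lanes} lanes allocated {'OK' if ok else 'MISMATCH'}")
```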
 
I did a double take when I read the title.

AFAIK PCI-e 5 only recently got affordable NVMe drives on sale (last year's offerings seemed too pricey for anyone but the enterprise sector), and modern GPUs on PCI-e 5 still don't come close to using the bandwidth it offers.

6 has existed since 2022, but not for consumers in any way at all, so saying that 7 will be ready in 2025 seems like redundant news for the vast majority of people if we won't even get devices that can use it for another 3 years... and even then, device manufacturers seem to be struggling to make devices that can fully utilise the current speed; even the upcoming Nvidia 50-series is still on 5 rather than 6.

I'm probably wrong on this, but it feels like PCI-SIG are outpacing manufacturers so fast that the "every three years" plan is too quick for anyone to catch up. Even taking into consideration that they will design future products around the extra bandwidth, I get the impression that they will be "releasing" PCI-e 12 by the time we finally get devices that can take advantage of 7, and the gap is just going to keep getting larger.

I'm waiting for the day they announce they are ready to release PCI-e 50 by the time we get devices that support 9.
 
Remember that PCIe is not just "that thing I use for GPUs and sometimes storage drives".
PCIe is the underlying link for multiple other systems. This includes not only peripheral connections like UCIe, CXL, but also internal package and chip interconnects (e.g. AMD's Infinity Fabric, Intel's DMI, and the links between chipset cores and peripheral interfaces like USB and SATA controllers). All of these benefit from advancements of the PCIe bus, even if it never filters out to consumer components.
 
Instead of pushing more data faster, they could devise a method to do it smarter. Each PCIe version increases prices, and the benefit for the regular user is very little.

I think what you're looking for here is known as 'why don't they use magic instead'. Of course they are always looking for options to improve the design; that's why the move to PAM4.

They don't have to worry about regular users, either. If the newer designs offer no benefit, motherboard manufacturers are completely free to use older and cheaper designs, and if the price difference matters to users vs the modest increment in capability, those older and cheaper designs will sell.
 
I did a double take when I read the title.

AFAIK PCI-e 5 only recently got affordable NVMe drives on sale (last year's offerings seemed too pricey for anyone but the enterprise sector), and modern GPUs on PCI-e 5 still don't come close to using the bandwidth it offers.

6 has existed since 2022, but not for consumers in any way at all, so saying that 7 will be ready in 2025 seems like redundant news for the vast majority of people if we won't even get devices that can use it for another 3 years... and even then, device manufacturers seem to be struggling to make devices that can fully utilise the current speed; even the upcoming Nvidia 50-series is still on 5 rather than 6.

I'm probably wrong on this, but it feels like PCI-SIG are outpacing manufacturers so fast that the "every three years" plan is too quick for anyone to catch up. Even taking into consideration that they will design future products around the extra bandwidth, I get the impression that they will be "releasing" PCI-e 12 by the time we finally get devices that can take advantage of 7, and the gap is just going to keep getting larger.

I'm waiting for the day they announce they are ready to release PCI-e 50 by the time we get devices that support 9.

I think what you're missing is that downstream adoption times are relatively steady. It takes around 3 years from the version release to where each stage of the downstream pipeline has adopted the technology. They can't NOT announce when the specification is ready, and they can't change how long it takes to build chips from the day that the specification is complete.

For your benefit, I'd recommend tuning out the specification announcement and waiting for the first motherboard announcement. Then you'll never have this 'waiting' problem.
 
PCIe and general motherboard interconnects are outdated.
We need a new system of multiple big bus routes: many buses, many lanes, all defined in the BIOS.

As a very basic example (I'm sure somebody more technical can flesh it out):

Say you're a server motherboard that has been designed with 3x buses and 128 lanes:

Bus 1 .. 16 lanes
1. 1x lane for a simple GPU
2. 2x lanes for multiple USB 2.0 ports, keyboard/mouse, a UPS connection and the like
3. 4x for 2x 1G LAN NICs + a 1G remote-management NIC
4. 6x lanes for other stuff
5. 3x for motherboard interconnect

Bus 2 .. 64 lanes
1. 48x lanes for CPU-2-Memory
2. 16x for motherboard interconnect

Bus 3 .. 48 lanes
1. 32x lanes for multiple storage controllers
2. 8x for other stuff
3. 16x for motherboard interconnect
This is what PCIe already is. You can partition it this way if you want. The fact that nobody picks your '3 bus' design in particular is just an artifact of what they find to be the most efficient allocation.

Or to put it another way: since you can already do exactly this with PCIe, the fact that no one has means the cost/benefit of doing so doesn't look appealing. Basically, you are asking for more lanes than consumers currently get, and the manufacturers think you don't need them. If you want more lanes, you can get a workstation/server part and you would be getting pretty much exactly what you're asking for; it just costs more. Why aren't you (and everyone else) buying such parts?
 
and modern GPUs on PCI-e 5 still don't come close to using the bandwidth it offers.
Until now there have been no consumer chips for the 5.0 bus. Where did you get this from? Can you provide a link to such a dGPU chip in 2024?


If you want to understand why PCI-e 5.0, and especially 6.0+, are pointless in consumer x86 chips (but not in server parts with HBM3+ memory and a 1024-bit memory bus), read the discussion, and especially my comments, in this news topic.

In short:
A simple example: Zen4 HX is already more than 2 years old (though the 7945HX is still the fastest mobile processor in the x86 camp). It has 28 5.0 lanes (24 of them free). The video card is most often connected via 4.0 x8, at best x16, i.e. the equivalent of only 8 lanes of 5.0. M.2 5.0 x4 slots are pointless in laptops: controller power consumption is too high, NAND chips overheat given the poor airflow around laptop M.2 slots (a priori incorrect architectural planning of laptop cases and motherboards), and there is no room for effective large heatsinks (or the noise goes up).

In fact, Zen4 HX uses less than half of the 24 available lanes in 5.0 mode. Moreover, 28 lanes of 5.0 amount to more than 100 GB/s if all run at once (the limiting case), while the real memory bandwidth of this series is 60-65 GB/s: two times less than a full load on 28 lanes of 5.0 requires. Now imagine what happens with 28 lanes of 6.0, or worse, 7.0 (even setting aside the poor process node and controller heating).

To service such requests, you need a memory bus with a real bandwidth several times higher.
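The "more than 100 GB/s" figure above checks out under simple assumptions (32 GT/s per lane for PCIe 5.0, 128b/130b encoding, protocol overhead ignored); a minimal sketch:

```python
def pcie5_lane_bytes_per_s(gt_per_s=32e9, payload_bits=128, line_bits=130):
    """Approximate usable bytes/s per PCIe 5.0 lane, per direction."""
    return gt_per_s * payload_bits / line_bits / 8

per_lane = pcie5_lane_bytes_per_s()   # ~3.94 GB/s per lane
total_28 = 28 * per_lane              # ~110 GB/s across all 28 lanes
dram = 62.5e9                         # midpoint of the 60-65 GB/s DRAM figure cited above

print(f"per lane: {per_lane / 1e9:.2f} GB/s")
print(f"28 lanes: {total_28 / 1e9:.1f} GB/s vs ~{dram / 1e9:.1f} GB/s DRAM "
      f"({total_28 / dram:.1f}x oversubscribed)")
```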

And what do we see? Zen5 Halo, for the first time in consumer x86, gets a 256-bit controller, which in reality should have been in Zen4 HX 2 years ago; that would have made it easy to service all 28 lanes of 5.0 with headroom left for the OS and software, which also need to work...

To service 7.0, you need RAM with a bandwidth of 1TB/s, and only HBM3 controllers with a 1024-bit bus provide this.

Forget about 6.0 devices in PCs/laptops until RAM can easily pump 300 GB/s+ in the mass market, at least in the "gaming" segment; that is the level of a 512+ bit memory bus at current memory-module frequencies.

5.0 is only just coming into its own with the 256-bit Zen5 Halo controller. The 128-bit Arrow Lake HX is simply not suitable for this; its throughput is much lower than that of the AMD chip.

--
5.0 SSDs are not needed by anyone in the consumer segment at all, even in gaming: they are too hot and still too slow, especially in 4K IOPS (the gap versus RAM in the Apple M4 Pro when exchanging random 4K blocks is more than 100x: 120 MB/s for the SSD vs 12,000 MB/s for RAM).

Until total SSD controller plus NAND consumption drops below 5-6 W, which would allow them to be installed in laptops with no heatsinks or very small ones, they do not matter.

In desktops, they are only interesting to those doing non-linear editing of RAW-quality video and similar workloads. In games, 5.0 SSDs are practically useless.

For video cards, a thicker bus matters only if system RAM is roughly (empirically) 3 times faster than that channel. That is, if you want to upload data to dGPU VRAM at 128 GB/s, i.e. PCIe 6.0 x16 (the 5090 has 1700+ GB/s of VRAM bandwidth), you will need at least a 350 GB/s memory bus so that general tasks run smoothly in parallel with the dGPU. For 7.0 (256 GB/s at x16), you will need x86 memory-controller performance closer to 1 TB/s so that everything runs smoothly in parallel.
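That 3x rule is the poster's empirical heuristic, not a spec requirement, but its arithmetic is easy to sketch (the x16 figures below are approximate unidirectional bandwidths per generation):

```python
# Approximate x16 unidirectional bandwidth per PCIe generation, in GB/s.
X16_GBPS = {4: 31.5, 5: 63.0, 6: 128.0, 7: 256.0}

def required_ram_gbps(gen, ratio=3.0):
    """System RAM bandwidth needed to feed a x16 link, per the ~3x heuristic above."""
    return X16_GBPS[gen] * ratio

for gen in (5, 6, 7):
    print(f"PCIe {gen}.0 x16 -> ~{required_ram_gbps(gen):.0f} GB/s of RAM bandwidth")
```

The heuristic lands near the post's own numbers: ~384 GB/s for 6.0 (vs. the ~350 cited) and ~768 GB/s for 7.0 (vs. the ~1 TB/s cited).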

This will not happen in the next 10 years in x86.
 
AFAIK nothing even saturates 5.0, let alone 6.0... and until they figure out ways to counter the thermals, what's the point of improving it even more??

Seems like a pointless advancement that's just costlier to implement and makes stuff more expensive for the end user.
 
In servers it pays off. In the mass retail segment, of course, it does not, and won't until the coming "silicon dead end" is overcome: viewed on an exponential scale over the past 60 years, the curve of performance growth per 1 W of consumption is rapidly flattening into a plain. We all want it overcome, but that is already a task for big, fundamental science and the coordinated work of many people. And the performance of the brain and its capacity for insight are finite as information volumes grow and the internal associations needed for any new "insight" become more complex (people increasingly wander among the figurative "three pines" without seeing the forest behind them). Such tasks are therefore increasingly "parallelized" across large teams, whose efficiency of joint work also falls quickly, despite the groundwork laid for automating it. That is why such desperate effort is being invested in neural networks, i.e. the attempt to create an AI that would surpass these growing biochemical limitations of the brain and of biological civilization as a whole, and deliver new fundamental solutions and discoveries, giving us a breakthrough to a new "energy" level of possibilities, including individual ones.

Quantum computers and photonics are among such topics, but so far these are mostly laboratory works and extremely narrow practical tasks in semi-laboratory solutions: for example, attempts to break today's stable asymmetric encryption (to freely read the correspondence of opponents and competitors) using quantum methods, or certain tasks of modeling chaotic processes.

And household electronics are unlikely to offer anything amazing in the next 10-15 years; it is simply becoming impossible "in the pocket", given the emerging dead end in performance growth per 1 W of consumption. It is no longer possible to keep raising consumption, neither in PCs nor in laptops, and especially not in wearable gadgets; we have already reached the absurdity of coolers appearing in smartphones. Performance may increase by, say, 10x (or even 20x) in 10-15 years, but that is nothing if you look at the growth of performance per 1 W since the 1960s in exponential form. And truly useful, powerful neural networks at the "in your pocket" level (local, i.e. independent of anyone) will of course not happen. Tectonic, large-scale shifts in production technology must occur again, as they did from the 1960s to today. That is already beyond the 21st century, and in the best-case scenario for humanity. Maybe today's babies will live to see that new tectonic shift, as the Internet and smartphones were for us...
 
Since the 80's people have been saying "We don't need that" when computers were advancing. Things like: "Any monitor greater than 14" is too big", "We don't need that much memory", "We don't need more than 16 colors", "We don't need anything more than a beeper speaker for sound output", "A 286 is more than anyone will need", "AGP ports are more than enough, (insert every other card slot standard since then)".

Just because we currently don't have the hardware/software to support a new standard does not mean we should reject that new standard. I want standards groups to keep ahead of what is currently supported.
 
Since the 80's people have been saying "We don't need that" when computers were advancing. Things like: "Any monitor greater than 14" is too big", "We don't need that much memory", "We don't need more than 16 colors", "We don't need anything more than a beeper speaker for sound output", "A 286 is more than anyone will need", "AGP ports are more than enough, (insert every other card slot standard since then)".

Just because we currently don't have the hardware/software to support a new standard does not mean we should reject that new standard. I want standards groups to keep ahead of what is currently supported.
I have not heard a single competent person in the field say such nonsense, even in the 80s or 90s. Only businessmen who made money on this could say such things, not those who needed all this cheaply.
 
But the marketing numbers must go up. People will buy 5.0 capable hardware, just because it says 5.0. The same will be true of 6.0 and 7.0.

In terms of cost, 6.0/7.0 x4 links might actually let them simplify lower-end GPUs. Right now x8 is mid-range, but x4 will likely become so, especially for mobile GPUs.
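A rough sketch of why x4 could become the new mid-range width (per-lane figures are approximate and ignore 6.0/7.0 FLIT/FEC overhead):

```python
# Approximate unidirectional GB/s per lane, by PCIe generation.
LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938, 6: 7.877, 7: 15.754}

def link_gbps(gen, lanes):
    """Total one-way bandwidth of a link with the given generation and width."""
    return LANE_GBPS[gen] * lanes

print(f"4.0 x16: {link_gbps(4, 16):.1f} GB/s (today's common full-width GPU link)")
print(f"6.0 x4:  {link_gbps(6, 4):.1f} GB/s (the same bandwidth in a quarter of the lanes)")
print(f"7.0 x4:  {link_gbps(7, 4):.1f} GB/s")
```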
 
Instead of pushing more data faster, they could devise a method to do it smarter. Each PCIe version increases prices, and the benefit for the regular user is very little.

AFAIK nothing even saturates 5.0, let alone 6.0... and until they figure out ways to counter the thermals, what's the point of improving it even more??

Seems like a pointless advancement that's just costlier to implement and makes stuff more expensive for the end user.

This isn't for you and me. Not for a long time. This is for servers. PCIe 6 is already starting to see use there. I don't expect PCIe 6 for consumers for a long time, let alone PCIe 7.
 
Instead of pushing more data faster, they could devise a method to do it smarter. Each PCIe version increases prices, and the benefit for the regular user is very little.
The rapid pace of PCIe development since 4.0 has been entirely driven by the datacenter. The highest-bandwidth device most client users ever have is a GPU, and with the 4090 we're just now seeing a use for more bandwidth than 16 lanes of PCIe 3.0 can deliver.
Remember that PCIe is not just "that thing I use for GPUs and sometimes storage drives".
PCIe is the underlying link for multiple other systems. This includes not only peripheral connections like UCIe, CXL, but also internal package and chip interconnects (e.g. AMD's Infinity Fabric, Intel's DMI, and the links between chipset cores and peripheral interfaces like USB and SATA controllers). All of these benefit from advancements of the PCIe bus, even if it never filters out to consumer components.
There's only a benefit if the higher-speed interconnect is used. Intel and AMD are still using PCIe 4.0 for their chipset links, with Intel having 8 lanes of bandwidth and AMD 4. Seeing as modern chipsets are basically fancy PCIe switches, moving up to 5.0 would enable quite a bit, so long as the bandwidth wasn't all wasted on providing PCIe 5.0 M.2 slots.
 
With the bandwidth that even current PCIe 5.0 can deliver, the biggest issue I see on the client side is being able to use it.

Better bifurcation support would be a good first step, and Intel's Z890 finally allows an 8/4/4 split of the 16 CPU PCIe slot lanes (AMD has had 4/4/4/4 splits for years). This could be split further down to x2, but that has only been seen at the enterprise level so far. The reason someone at the client level might want a split like that is storage: it would allow 4x M.2 drives to get 2 lanes of PCIe 5.0 each on a relatively inexpensive add-in board without sacrificing the primary slot.

Upgrading the chipset interconnect from PCIe 4.0 to 5.0 would be the other big improvement, as that allows for better expandability with fewer tradeoffs.
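Some quick numbers behind both points, using approximate per-lane figures (PCIe 4.0 ≈ 1.97 GB/s, 5.0 ≈ 3.94 GB/s per direction after 128b/130b encoding):

```python
GEN4_LANE, GEN5_LANE = 1.969, 3.938  # approx. GB/s per lane, per direction

intel_uplink = 8 * GEN4_LANE         # Intel's 4.0 x8 chipset link: ~15.8 GB/s
amd_uplink = 4 * GEN4_LANE           # AMD's 4.0 x4 chipset link:   ~7.9 GB/s
per_drive_gen5_x2 = 2 * GEN5_LANE    # one M.2 drive on a Gen5 x2 link: ~7.9 GB/s,
                                     # i.e. as much as a whole Gen4 x4 drive gets today

print(f"Intel 4.0 x8 uplink: {intel_uplink:.1f} GB/s")
print(f"AMD 4.0 x4 uplink:   {amd_uplink:.1f} GB/s")
print(f"Gen5 x2 per drive:   {per_drive_gen5_x2:.1f} GB/s")
```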
 
This will be good for servers, I guess. It might also be useful on high-end motherboards for connecting chipsets to the CPU and whatnot. But average PC users at home will, I imagine, mostly be on PCIe 3.0 and 4.0 for quite a while. My primary desktop still uses PCIe 3.0 because the system runs fast enough for me as it is. My GPU and NVMe drives are all 4.0 for that magical day when I finally buy a new motherboard, but will I actually notice the difference between a system built around PCIe 3.0 or 4.0? Nope; they are already plenty fast as it is.
 
AFAIK nothing even saturates 5.0, let alone 6.0... and until they figure out ways to counter the thermals, what's the point of improving it even more??

Seems like a pointless advancement that's just costlier to implement and makes stuff more expensive for the end user.
What makes you think spotty (grown-up) kids and their 'gaming rigs' are even relevant to PCI-SIG? Consumers are a blip. PCIe 6 & 7 are for server/datacenter/HPC devices, CXL interconnects, composable systems, ultra-high-bandwidth accelerators, etc., not bloody games.
 
Clearly, these developments are aimed at enterprise; most consumers would be perfectly well served by PCIe 3.

Incidentally, I once ran a (IIRC) GTX 620 mated to a vanilla PCI (no "e") bus interface on a P4, replacing an old AGP Radeon card. Despite the slower bus, the Nvidia card was still more performant (though it was mildly amusing to watch Google Maps, of all things, choke the bus).
 
I have not heard a single competent person in the subject who would say such nonsense even in the 80s or 90s. Only businessmen who made money on this could say such things, but not those who needed all this cheaply.
I haven't heard a single competent person say that about the need for PCIe 7 either. PCIe 7 is a next step, and there is more than enough demand for next steps. They could jump 10x instead of 2x and people would still be crying for more.
 
I am not sure what take to have on this... either the PCI-E group is way ahead of the times (two and a half generations, to be exact) or all the hardware manufacturers are way, way behind schedule. :homer:

PCI-E 5 has only just started to become somewhat common and still needs another year or two to be the minimum standard for every system, and they are already at version 7? ¯\_(ツ)_/¯
 
You know what would solve a lot of problems with desktop mobos' lack of PCIe lanes and the ever-increasing pin counts needed to add more? A PLX switch.
You know who bought out and cornered that market and priced it sky high? Avago/Broadcom.
 
The article said:
PCIe 7.0 aims to double the limits set by PCIe 6.0 to 128GT/s raw bit rate, which translates to a bi-directional transfer speed of 512GB/s on a 16-lane or x16 configuration. It will also use Pulse Amplitude Modulation with 4 levels (PAM4) signaling introduced in PCIe 6.0, which allows it to encode two bits of data per clock cycle, effectively doubling the data rate versus the signaling tech used in PCIe 4.0 and PCIe 5.0.
There's a lot in this paragraph to discuss. First, I wish we could return to talking about uni-directional speeds. The reason this makes more sense is that most use cases are asymmetrical and thus bottlenecked by how fast you send data in just one direction. All you need to know about data moving in the other direction is merely that the interconnect is full-duplex, so you don't have to worry about it interfering with dataflow in the main direction of concern. I think Nvidia is one of the main perpetrators of getting us talking about aggregate, full-duplex speeds, as a cheap trick for getting bigger numbers in their press releases.
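For reference, the arithmetic implied by the quoted paragraph (raw bit rate, before FLIT/FEC overhead):

```python
def x16_raw_gbps(gt_per_s):
    """Return (unidirectional, bidirectional) raw GB/s for a x16 link."""
    uni = gt_per_s * 16 / 8   # GT/s per lane * 16 lanes, 8 bits per byte
    return uni, uni * 2

uni, bi = x16_raw_gbps(128)   # PCIe 7.0: 128 GT/s per lane
print(f"PCIe 7.0 x16: {uni:.0f} GB/s each way, {bi:.0f} GB/s aggregate")
```

The 512 GB/s headline number is the aggregate; the figure that actually bounds most asymmetric workloads is the 256 GB/s one-way rate.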

The other major point not addressed (but implied) is that because PCIe 7.0 also uses (only) PAM4 signalling, they had to return to clock-doubling for increasing bandwidth. When PCIe 6.0 was announced, the embracing of PAM4 seemed to indicate that PCIe's clock scaling might've reached its end. I guess they found a little more gas in the tank, but I still wouldn't bet on it continuing.
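To make the PAM4 point concrete, here is a toy encoder mapping two bits per symbol onto four levels, using the conventional Gray coding (the spec's precoding and FEC are omitted, and the level values are illustrative):

```python
# Gray-coded mapping: adjacent voltage levels differ by only one bit,
# so a single-level slicer error corrupts at most one bit.
PAM4_LEVELS = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}
LEVEL_BITS = {v: k for k, v in PAM4_LEVELS.items()}

def pam4_encode(bits):
    """Pack a bit string into PAM4 symbols, two bits per symbol."""
    return [PAM4_LEVELS[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2)]

def pam4_decode(symbols):
    return "".join(f"{LEVEL_BITS[s]:02b}" for s in symbols)

msg = "0110110010"
symbols = pam4_encode(msg)
print(f"{len(msg)} bits -> {len(symbols)} symbols")  # half as many symbols as NRZ would need
assert pam4_decode(symbols) == msg
```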

There is a SIG looking at optical interconnect technologies, at which point we might finally see a break in PCIe's legendary backwards compatibility. Once you move to optical signalling, some of the features in the PCIe protocol might no longer make much sense.

Finally, I did think it interesting that GDDR7 opted to use PAM3, coming in the wake of the proprietary GDDR6X scheme and its use of PAM4. I wonder to what extent this was an explicit rejection of PAM4, or just that PAM3 was enough to get the job done and cheaper to implement.
 