News PCI Express 7.0 Draft Spec Released: 512 GB/s x16 Slot in 2027


Deleted member 2731765

Guest
So based on the paper specs, even an x1 PCIe 7.0 lane will be as fast as PCIe 4.0 x16 (32 GB/s), so most storage and other devices could be smaller and hog fewer resources.

Or, we could have four NVMe SSD slots, each using just one PCIe lane, and all of them would be twice as fast as a PCIe 5.0 x4 NVMe slot. :smiley:
 

InvalidError

Titan
Moderator
So based on the paper specs, even an x1 PCIe 7.0 lane will be as fast as PCIe 4.0 x16 (32 GB/s), so most storage and other devices could be smaller and hog fewer resources.
I doubt that stuff is going to make it to the consumer space, at least not in AIB format. 128Gbps per pin is going to take some deep voodoo to get to work through board-to-board/chip sockets. It may get into the prosumer space and possibly get used as the off-the-shelf link between chiplets/tiles/whatever on multi-chip packages in the consumer space.

The TDP on the first few generations of these things is going to be brutal, if the first-gen 3.0-to-4.0 and 4.0-to-5.0 steps as they currently stand are anything to go by.
 
Last edited:
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
So based on the paper specs, even an x1 PCIe 7.0 lane will be as fast as PCIe 4.0 x16 (32 GB/s),
No, it's just an 8:1 ratio (2^3). If you look closely, you can see that each generation roughly doubles the previous one.

I think what's confusing you is that they previously (?) specified unidirectional bandwidth and now they're summing both directions. I don't know why the change, unless it's to compete with Nvidia's messaging around NVLink, for instance.

It's rubbish, though, because many I/O applications are asymmetric in their bandwidth. For instance, games are overwhelmingly biased toward sending data to the graphics card.

Look closely at the top of this chart:

[attached chart]


My reference point has long been: PCIe 3.0 x1 = ~1 GB/s. Then, just double or halve your way from that.
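
If you want to put rough numbers on that rule of thumb, a quick back-of-the-envelope sketch (approximate per-direction figures, ignoring encoding overhead) looks like this:

```python
# Rule of thumb from above: PCIe 3.0 x1 ~ 1 GB/s per direction, then double
# per generation and multiply by lane count. Approximate figures only; real
# links lose a little to encoding/protocol overhead.

def pcie_gbps(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return 1.0 * 2 ** (gen - 3) * lanes

for gen in (3, 4, 5, 6, 7):
    print(f"PCIe {gen}.0: x1 ~ {pcie_gbps(gen, 1):3.0f} GB/s, "
          f"x4 ~ {pcie_gbps(gen, 4):3.0f} GB/s, "
          f"x16 ~ {pcie_gbps(gen, 16):3.0f} GB/s")

# PCIe 7.0 x16 lands at ~256 GB/s per direction, i.e. the headline 512 GB/s
# once both directions are summed.
```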
 
Last edited:
  • Like
Reactions: TJ Hooker

bit_user

Titan
Ambassador
While we are sure that PCIe 7.0 will eventually end up in client PCs ...
I wouldn't be.

We saw backlash at the price increases of new motherboards, when Intel and AMD introduced PCIe 5.0 support. Given that PCIe 5.0 is still overkill for desktop PCs, I can't imagine Intel or AMD would want to go through another round of such complaints, for a feature with virtually no practical benefits to end users.

BTW, this is just a rehashing of what they already published an entire year ago:


The press release they put out today was about "PCIe Technology TAM Expected to Reach $10 Billion by 2027", and only mentions PCIe 7.0 in passing.

There's no actual news about PCIe 7.0, today. You've been click-baited.
 
Last edited:

InvalidError

Titan
Moderator
Given that PCIe 5.0 is still overkill for desktop PCs, I can't imagine Intel or AMD would want to go through another round of such complaints, for a feature with virtually no practical benefits to end users.
There may be a benefit for the top-3%, though I doubt anything after 5.0 will go very far into the mainstream beyond the CPU-chipset link where the extra bandwidth could be necessary to feed downstream 5.0 devices with minimal uplink contention.

I'm just impressed that they've managed to push a ~25-year-old physical spec from 1.3GHz to 60+GHz and get that to work reliably enough to be practical. That deep into microwave territory, signal integrity through the connector would likely benefit from a new interface with 2-3X the data pin pitch density.
 

bit_user

Titan
Ambassador
There may be a benefit for the top-3%,
There are still no GPUs with PCIe 5.0 (not that I care - they don't need it).

Yes, we finally got some PCIe 5.0 SSDs in recent months, but they're hot, have enormous heatsinks, and I've yet to be convinced they deliver user-perceivable benefits over fast PCIe 4.0 drives.

I doubt anything after 5.0 will go very far into the mainstream beyond the CPU-chipset link where the extra bandwidth could be necessary to feed downstream 5.0 devices with minimal uplink contention.
The irony of ironies is that one place where PCIe 5.0 could've actually delivered real value isn't something either Intel or AMD used it for!

I'm just impressed that they've managed to push a ~25-year-old physical spec from 1.3GHz to 60+GHz and get that to work reliably enough to be practical.
Let's save the fanfare until PCIe 7.0 is finalized.

Regardless, I think we're pretty close to the crossover point for optical, as a system interconnect medium*.

* For servers.
 

InvalidError

Titan
Moderator
tbh, for normal users this seems like it's just there to increase the price of motherboards.

even the best GPU/SSD can't saturate 5.0
You don't need to "saturate" it. All you need to have an IO bottleneck with macroscopically observable symptoms is a couple of things attempting to do large IO at the same time, causing latency spikes from queue length spikes. You may only need 1% of the bandwidth on average but you still feel the lag when everything happens everywhere all at once every few seconds. The more spare bandwidth you have, the less likely that worst-case scenario is to happen and the less severe it gets.
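
As a toy illustration of that point (entirely made-up numbers, roughly x4-link speeds, just to show the shape of the effect): a link that's almost idle on average can still queue up a noticeable stall when a few large transfers land together, and every doubling of headroom roughly halves the worst stall.

```python
# Toy queueing model: average demand is a tiny fraction of the link, but a
# few devices doing large IO at the same moment still pile up a queue, and
# the stall shrinks as link headroom grows. All numbers here are made up.

def worst_stall_ms(link_gbps: float, sim_seconds: int = 60) -> float:
    queue_gb = 0.0
    worst_ms = 0.0
    for ms in range(sim_seconds * 1000):        # 1 ms time steps
        queue_gb += 0.05 / 1000                 # ~0.05 GB/s of background IO
        if ms % 10_000 == 0:                    # every 10 s: a combined 2 GB burst
            queue_gb += 2.0
        queue_gb = max(0.0, queue_gb - link_gbps / 1000)
        worst_ms = max(worst_ms, queue_gb / link_gbps * 1000)
    return worst_ms

# Average demand is ~0.25 GB/s; even the slowest link below is <5% utilized.
for gbps in (8, 16, 32, 64):                    # roughly x4 links, PCIe 4.0..7.0
    print(f"{gbps:>2} GB/s link: worst added latency ~ {worst_stall_ms(gbps):.0f} ms")
```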

There are still no GPUs with PCIe 5.0 (not that I care - they don't need it).
To which I'll respond with the same thing as always: low-end GPUs stand to benefit the most from fast PCIe to give them faster access to system memory and offset their limited local memory pool. Ironically, GPU manufacturers won't give budget buyers that either.
 

deesider

Distinguished
Regardless, I think we're pretty close to the crossover point for optical, as a system interconnect medium*.

* For servers.
I look forward to seeing optical lanes integrated into motherboards. There must be a point where it becomes cheaper than maintaining the integrity of so many tightly packed copper lanes.
 
  • Like
Reactions: bit_user
To which I'll respond with the same thing as always: low-end GPUs stand to benefit the most from fast PCIe to give them faster access to system memory and offset their limited local memory pool. Ironically, GPU manufacturers won't give budget buyers that either.
But people in the market for lower-end GPUs don't usually have motherboards with bleeding edge I/O.

Like the RX 5500 4GB card. How many people who wanted that card when it came out had a PCIe 4.0 board where running out of VRAM wouldn't hurt performance as much?
 
  • Like
Reactions: bit_user

InvalidError

Titan
Moderator
But people in the market for lower-end GPUs don't usually have motherboards with bleeding edge I/O.
What is "bleeding-edge" IO today will be standard even on lower-end platforms and devices some number of years down the line. The biggest kicker for people on such older platforms is that all recent entry-level GPUs have only an x8 interface, making low-VRAM boards (ex.: 4GB RX5500) almost unworkable on them.
 
What is "bleeding-edge" IO today will be standard even on lower-end platforms and devices some number of years down the line. The biggest kicker for people on such older platforms is that all recent entry-level GPUs have only an x8 interface, making low-VRAM boards (ex.: 4GB RX5500) almost unworkable on them.
They're x8 only, and they require that bleeding-edge IO to not suffer that performance hit. Lower-end hardware should stick with lower-end requirements.

But sure, if we're talking about motherboards, I don't really care about adopting bleeding edge IO.
 

bit_user

Titan
Ambassador
You don't need to "saturate" it. All you need to have an IO bottleneck with macroscopically observable symptoms ...
I think what @hotaru251 meant was that the peak speeds of PCIe 5.0 SSDs can't get close to the max of PCIe 5.0 x4 speeds. If they can't get near it at peak speeds, then it's not going to do you a lot of good to go above PCIe 4.0.

is a couple of things attempting to do large IO at the same time, causing latency spikes from queue length spikes. You may only need 1% of the bandwidth on average but you still feel the lag when everything happens everywhere all at once every few seconds. The more spare bandwidth you have, the less likely that worst-case scenario is to happen and the less severe it gets.
You seem to have the wrong idea about where the bottlenecks are. If you're worried about tail latencies, you're still barking up the wrong tree. Some datacenter drives are all about that. Here's a PCIe 4.0 drive with consistently low tail latencies under conditions that would make most consumer PCIe 5.0 drives weep.


A couple months ago, someone in one of these threads was claiming they tried to use a Samsung 970 Pro for a database workload, and that it would occasionally exhibit hiccups where it became extremely slow for several seconds, IIRC. That's why you pay the extra $$$ for proper datacenter hardware designed to maintain sub-millisecond 99.99th percentile latency.
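
For what it's worth, the gap between average latency and 99.99th-percentile latency is easy to see even on synthetic numbers (invented here purely for illustration): the mean and p99 can still look respectable while p99.99 exposes the multi-second hiccups.

```python
import random

# Synthetic latencies: mostly ~0.1 ms, but roughly one IO in five thousand
# hits a multi-second stall. Numbers are invented purely for illustration.
random.seed(0)
latencies_ms = [random.uniform(0.05, 0.2) for _ in range(100_000)]
for i in range(0, len(latencies_ms), 5_000):
    latencies_ms[i] = 3_000.0                   # the occasional hiccup

latencies_ms.sort()

def percentile(p: float) -> float:
    """Nearest-rank percentile of the sorted sample."""
    idx = min(len(latencies_ms) - 1, int(p / 100 * len(latencies_ms)))
    return latencies_ms[idx]

print(f"mean   : {sum(latencies_ms) / len(latencies_ms):8.3f} ms")
print(f"p99    : {percentile(99):8.3f} ms")     # still looks okay
print(f"p99.99 : {percentile(99.99):8.3f} ms")  # the hiccups live here
```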


To which I'll respond with the same thing as always: low-end GPUs stand to benefit the most from fast PCIe to give them faster access to system memory and offset their limited local memory pool. Ironically, GPU manufacturers won't give budget buyers that either.
You're looking at a bathtub curve, with this sort of thing. To a point, faster PCIe will help. However, if the card is swapping in stuff above a certain rate, what's going to happen is it'll chew up both system memory bandwidth and the card's own internal memory bandwidth, which low-end cards don't have in too much abundance.

If AMD cared, they could've given the RX 6500XT a x8 PCIe interface and that probably would've been plenty, at PCIe 4.0 speeds. The cases where its performance really tanked were people running with high-res textures on PCIe 3.0 motherboards.
 

bit_user

Titan
Ambassador
What is "bleeding-edge" IO today will be standard even on lower-end platforms and devices some number of years down the line.
Even PCIe 4.0 motherboards require retimers, from what I've read. That means the lowest-end motherboards might never have it. Just look at the newly-released Alder Lake-N - the highest-spec variant has only PCIe 3.0:


The biggest kicker for people on such older platforms is that all recent entry-level GPUs have only an x8 interface, making low-VRAM boards (ex.: 4GB RX5500) almost unworkable on them.
I think you've fallen victim to sensationalist youtubers or something, because even a 4 GB card is supposedly fine @ PCIe 3.0, if you keep texture details down so it doesn't run out of VRAM.
 

InvalidError

Titan
Moderator
I think you've fallen victim to sensationalist youtubers or something, because even a 4 GB card is supposedly fine @ PCIe 3.0, if you keep texture details down so it doesn't run out of VRAM.
I'm still using a GTX1050 with 2GB of RAM... pretty sure the 3.0x16 is the only thing keeping it remotely usable for games today since practically everything will use more than 2GB even at lowest settings.
 

Deleted member 2838871

Guest
We saw backlash at the price increases of new motherboards, when Intel and AMD introduced PCIe 5.0 support. Given that PCIe 5.0 is still overkill for desktop PCs, I can't imagine Intel or AMD would want to go through another round of such complaints, for a feature with virtually no practical benefits to end users.

Yep... things are getting pretty ridiculous now with hardware. Reminds me of cell phones a few years back when I quit upgrading every year. New phones have nothing to offer except an upgraded camera that will take a photo that doesn't look any better than the photo taken with your old phone. :ROFLMAO:

Can I tell a difference in speed between my 3.0 SSDs and 4.0 SSDs in everyday use? Nope. I can if I'm copying some massive 50GB file but I never do that.

No need for PCIe 5.0 in any way shape or form in my SSDs or my GPU.
 
  • Like
Reactions: bit_user

Deleted member 2838871

Guest
For now.

Between main PC, living room PC, backup PC, etc., I keep my PCs for ~20 years. Stuff I may not need today usually comes in handy 5-10 years down the line.

Oh no doubt. It will be interesting to see how long I keep this one as is. Can't say I'll keep a PC 20 years... or even 10... my personal record is 4 1/2 years... 7700K, 1080 Ti, 32GB DDR4 with 4TB in SSDs.

I can see myself possibly upgrading to an 8000 series CPU (or maybe 9000 series) which should be an easy AM5 swap... beyond that I don't see much else changing. 4090 should last a long long time.