News PCI-SIG announces PCIe 8.0 spec with twice the bandwidth — 1TB/s of peak bandwidth, 256 GT/s per lane, and a possible new connector
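For context on where the headline figures come from: 256 GT/s per lane across a full x16 link works out to roughly 512 GB/s each way, or about 1 TB/s counting both directions, before FLIT/FEC and protocol overhead. A quick back-of-the-envelope sketch (figures rounded, not an official PCI-SIG calculation):

```python
# Rough sketch of the headline math, ignoring FLIT/FEC and protocol overhead.
gt_per_lane = 256          # GT/s per lane (PCIe 8.0, double PCIe 7.0's 128 GT/s)
lanes = 16                 # full-size x16 slot
gb_per_s_per_lane = gt_per_lane / 8         # 1 bit per transfer -> 32 GB/s per lane

per_direction = gb_per_s_per_lane * lanes   # 512 GB/s in each direction
bidirectional = per_direction * 2           # ~1024 GB/s, i.e. the "1 TB/s peak" figure

print(f"{per_direction:.0f} GB/s per direction, {bidirectional:.0f} GB/s bidirectional")
```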

Seeing it widely adopted around 2040 or maybe later?
My guess is that desktops will probably implement CXL 3.0 / PCIe 6.0 around 2028-29, in order to utilize CXL.mem modules for expansion, since on-package LPDDR6X will be mainstream by then. However, I'm not predicting x16 slots will be PCIe 6.0 - just M.2 slots or similar, and maybe just two x4 slots, for starters.

After that, I think consumer platforms will switch over to whatever optical standard comes next. Mainly for power reasons, and because by then, implementing PCIe 7.0 electrically could sit at the cost-curve crossover point with maturing optical technologies.
 
I posted the Google AI results with no research.
The 1-4% was from https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-vs-30-x16-scaling-benchmarks
And this just highlights what a bad idea it is to do that! Because, what they actually said was:
"1-4% is the difference we observed between PCIe Gen5 vs PCIe Gen 3."​

So, it was not a differential between 5.0 vs. 4.0, but rather between 5.0 and 3.0!

Furthermore, they tested only 4 different games, with one in both RT & Raster. TPU tested 25 different games, with 9 of them also run in RT mode. It's quite obvious that the TPU dataset is far superior to GN's testing.

Finally, Gamers Nexus falls short on analysis. Citing a range of 1-4% makes it sound like you're as likely to see a hit of 4% as you are 1%. However, when we crunch their numbers, what we actually get is:

4k Results

Comparison    Mean       1% low     0.1% low
5.0 vs 4.0    100.31%    102.01%    99.98%
4.0 vs 3.0    101.63%    102.15%    103.85%
5.0 vs 3.0    101.94%    104.20%    103.83%

So, at 4k, what their meager 5 data points showed was a Mean FPS benefit of 0.3% between PCIe 5.0 and 4.0, and 1.9% between PCIe 5.0 and 3.0.
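For anyone who wants to reproduce that kind of figure, the "crunching" is just averaging each game's Gen-to-Gen FPS ratio and expressing it as a percentage. A minimal sketch with hypothetical placeholder FPS values (not GN's actual numbers):

```python
# Hypothetical placeholder data -- substitute the per-game FPS from the review.
from statistics import mean

fps_gen5 = {"game_a": 121.0, "game_b": 96.0, "game_c": 143.0}
fps_gen4 = {"game_a": 120.5, "game_b": 95.8, "game_c": 142.6}

# Per-game ratio of Gen5 FPS to Gen4 FPS, then the arithmetic mean across games.
ratios = [fps_gen5[g] / fps_gen4[g] for g in fps_gen5]
print(f"Mean 5.0 vs 4.0: {mean(ratios) * 100:.2f}%")   # ~100.3% would mean a ~0.3% benefit
```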

Now, I went ahead and tabulated their data for 1440p and 1080p, but because table copy-paste in the current forum software is semi-broken and requires manual fix-ups, I'll just summarize the Mean FPS deltas:
  • at 1440p, their 3 data points showed a 0.5% benefit between PCIe 5.0 and 4.0, and 1.9% between PCIe 5.0 and 3.0.
  • at 1080p, their 3 data points showed a 1.1% benefit between PCIe 5.0 and 4.0, and 3.8% between PCIe 5.0 and 3.0.

This puts it well in line with TPU's findings.

Oh!! But, you did not say 8 GB!! That's a special case, because what's happening there is that people crank up the settings to the point where the GDDR memory on the card fills up and it starts having to page in lots of assets over PCIe. I regard this as cheating, due to the amount of stuttering you experience, even with PCIe 5.0.

If you actually tried to play this way, you'd quickly dial back your settings to the point where it no longer stuttered. At that point, the difference between PCIe 5.0 and 4.0 should be much more in line with TechPowerUp's testing of the 16 GB 5060 Ti, which is 1-2%.
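To put rough numbers on why spill-over stutters no matter the link generation: on-card memory bandwidth dwarfs even a PCIe 5.0 x16 link, so anything paged across the bus arrives an order of magnitude slower. Approximate per-direction figures, for illustration only:

```python
# Approximate, per-direction bandwidth figures (GB/s), for illustration only.
local_vram = 448     # e.g. the 16 GB 5060 Ti's GDDR7 (approx.)
pcie_links = {"PCIe 5.0 x16": 64, "PCIe 4.0 x16": 32, "PCIe 3.0 x16": 16}

for name, bw in pcie_links.items():
    print(f"{name}: ~{local_vram / bw:.0f}x slower than reading from local VRAM")
```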
 
I could see a situation where PCI-SIG potentially goes with optical for the data portion of PCIe.
My hope is that embedding optical interconnects in motherboards will be well refined by the time it makes sense to even think about going beyond PCIe 6.0 in client platforms.

from what I understand, 6.0's new signaling largely just means it's no more difficult to do right than 5.0.
While PAM4 lets you double the data rate at the same clock speed, the tradeoff is that you need a better signal-to-noise ratio. So, it's not a free lunch. Plus, the SERDES become more expensive, because now you effectively need to build in a high-speed DAC/ADC.
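A minimal sketch of that trade-off: PAM4 carries two bits per symbol instead of NRZ's one, so the same symbol rate moves twice the data, but the four levels split the same voltage swing into eyes roughly a third as tall, which costs on the order of 9.5 dB of SNR margin:

```python
import math

def bits_per_symbol(levels: int) -> float:
    # NRZ has 2 levels -> 1 bit/symbol; PAM4 has 4 levels -> 2 bits/symbol.
    return math.log2(levels)

symbol_rate_gbaud = 16                                 # same clock for both schemes
nrz_gbps = symbol_rate_gbaud * bits_per_symbol(2)      # 16 Gb/s
pam4_gbps = symbol_rate_gbaud * bits_per_symbol(4)     # 32 Gb/s at the same Nyquist frequency

# Each PAM4 eye is ~1/3 the height of the NRZ eye, i.e. ~20*log10(3) = 9.5 dB less margin.
snr_penalty_db = 20 * math.log10(3)
print(f"NRZ {nrz_gbps:.0f} Gb/s vs PAM4 {pam4_gbps:.0f} Gb/s, ~{snr_penalty_db:.1f} dB SNR penalty")
```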
 
the difference between PCIe 5.0 and 4.0 should be much more in line with TechPowerUp's testing of the 16 GB 5060 Ti, which is 1-2%.
Or more simply PCIe version does not matter enough (no one is going to notice 4% framerate drop) until you need to go to main memory for some data. :)
 
no one is going to notice 4% framerate drop
Ugh. The whole point of my post is that it's not even 4%! PCIe 5.0 -> 4.0 is 1%, in most cases. For the RTX 5060 Ti, TPU found 2% at 1440p.

Why do I even bother?
: (

As for whether anyone notices, I think that won't resolve the issue, for some. The way to tune a system for highest FPS (for those who really care about that stuff) is by stacking up lots of small gains. As soon as you stop caring about gains of a few % here, a few % there, you're no longer talking about max fps. In fact, the only people I'd say need to worry about running PCIe 5.0 x16 are the ones sparing no expense to chase after max FPS.
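To put a number on "stacking up lots of small gains": individual 1-2% wins compound multiplicatively, so a handful of them is worth something to the max-FPS crowd. Illustrative numbers only:

```python
# Illustrative only: a few small per-tweak gains compound multiplicatively.
gains = [1.01, 1.02, 1.015, 1.01]     # e.g. PCIe link, memory tuning, driver, mild OC

total = 1.0
for g in gains:
    total *= g

print(f"Combined uplift: {(total - 1) * 100:.1f}%")   # ~5.6%
```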
 
The whole point of my post is that it's not even 4%! PCIe 5.0 -> 4.0 is 1%, in most cases.
Kind of my point too: at worst, it was only 4%. PCIe version does not matter; 5.0 vs. 3.0 is the worst case, and even then it's only 4% at worst.

The losses due to PCIe version are so low that most people are not going to notice.

Now, if they only cared about frame rate, they would just drop the resolution. :)
If they want both high FPS and resolution, then sure, it matters. A tiny % here and there adds up, but the cost of those % gains will add up quicker than the FPS gains. :)
 
PCIe version does not matter
I wouldn't go that far. I'd say 5.0 -> 4.0 (or 5.0 @ x8) doesn't matter to anyone but extreme framerate chasers. Going a step further, especially when you look at some outlier games, might rise to the level of significance. Not so much that the difference will be immediately perceivable, but I can see a reasonable objection being made by someone faced with going down two steps.

the cost of those % gains will add up quicker than the FPS gains. :)
IMO, that's the definition of high-end: when you're willing to spend increasing amounts for diminishing gains.
 