News: PCIe 7.0 to Reach 512 GB/s, Arrive in 2025

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
It's very interesting how much time PCIe 4.0 took compared with all the other versions; I'm curious why. Suffice it to say, 4.0 is still pretty good for gamers: only upcoming GPUs will really need it to run at full speed, and only upcoming games could need PCIe 4.0 NVMe drives to hit the highest loading speeds with DirectStorage.
 

InvalidError

Titan
Moderator
It's very interesting how much time PCIe 4.0 took compared with all the other versions,
It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more IO bandwidth to optimize the amount of storage and IOPS per system.

It has nothing to do with gaming. Consumers get it simply because it gives AMD, Intel, board manufacturers, etc. one more reason to market the heck out of next-gen stuff and try to jack up prices some more along the way.
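To put rough numbers on that server-side argument, here's a back-of-the-envelope sketch; the per-lane throughput figures and the 128-lane, x4-per-SSD setup are illustrative assumptions, not vendor specs:

```python
# Rough usable per-lane throughput in GB/s, after encoding overhead
# (128b/130b from Gen3 onward). Figures are approximate.
PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

SERVER_LANES = 128   # e.g., roughly one EPYC socket's worth of lanes
LANES_PER_SSD = 4    # typical NVMe link width

for gen, gbs in PER_LANE_GBS.items():
    print(f"PCIe {gen}: {SERVER_LANES // LANES_PER_SSD} x4 SSDs, "
          f"~{SERVER_LANES * gbs:.0f} GB/s aggregate")
```

Same lane count, but each generation doubles the aggregate storage bandwidth a single box can serve, which is exactly the IOPS-per-system pressure described above.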
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more IO bandwidth to optimize the amount of storage and IOPS per system.

It has nothing to do with gaming. Consumers get it simply because it gives AMD, Intel, board manufacturers, etc. one more reason to market the heck out of next-gen stuff and try to jack up prices some more along the way.
While the first thing you said makes sense, the second doesn't. You should consider actually reading comments before replying to them, because I made a general comment about what is sufficient for a gamer, not about what motivates companies to push for new standards and marketing.
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
And saying it has “nothing” to do with gaming is false as well, or at least an overstatement. It has to do with gaming to a lesser extent, as gamers profit from it, just later down the road.
 
And saying it has “nothing” to do with gaming is false as well, or at least an overstatement. It has to do with gaming to a lesser extent, as gamers profit from it, just later down the road.
Gamers profit very little, relatively speaking, from these faster PCIe speeds. Yes, using a PCIe 4.0 graphics card in a PCIe 4.0 capable system can improve performance a bit, but it's very limited. I did a test a month or two back with RTX 3090 Ti running in Gen4 vs. Gen3 mode on my Alder Lake testbed. The result? 9% faster at 4K, 6% faster at 1440p, 2–4% faster at 1080p. Yes, that's enough to make it useful, but if we were to double that again to PCIe 5.0 speeds I suspect even 4K would only see a 2–3% increase.

Incidentally, I also tested with Gen2 speeds. Gen3 was about 6% faster at 1080p, 5% faster at 1440p, and 3% faster at 4K. So there's some margin of error stuff, and I also wasn't sure if changing the speed only affected the PCIe slots or if it was also changing the CPU to chipset speed. Considering PCIe 2.0 hasn't been the standard on PCs for about a decade, that's going pretty far back to only see a 12% increase in performance (from Gen2 to Gen4).
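For context on those percentages, here's a quick sketch of the raw x16 bandwidth each generation provides (approximate one-direction figures after encoding overhead; my numbers for illustration, not from the test above):

```python
# Approximate one-direction x16 bandwidth per PCIe generation (GB/s).
# Each generation doubles the previous one, yet the game-performance
# deltas measured above are far smaller than the bandwidth deltas.
GEN_X16_GBS = {"2.0": 8.0, "3.0": 15.75, "4.0": 31.5, "5.0": 63.0}

base = GEN_X16_GBS["2.0"]
for gen, gbs in GEN_X16_GBS.items():
    print(f"PCIe {gen} x16: ~{gbs:.2f} GB/s ({gbs / base:.0f}x Gen2)")
```

An 8x jump in link bandwidth from Gen2 to Gen4 yielding only ~12% more frames is a pretty clear sign games aren't bus-limited in any meaningful way.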
 
The enterprise market is what drives everything. Consumer products are generally "hand-me-downs" from that market!
That isn't necessarily a bad thing. It just means enterprise folk get to "beta test" all the latest and greatest first.

And saying it has “nothing” to do with gaming is false as well or at least a overstatement. It has to do with gaming to a lesser extent, as gamers profit from it, just later down the row.
And when will that be?

I've yet to see a game where PCIe 4.0 absolutely wrecks PCIe 3.0, except amusingly enough, when using AMD's lower end cards. And I've yet to see a game where NVMe SSDs absolutely wreck SATA SSDs.

While you could point out "but DirectStorage!" I doubt it'll have much traction any time soon.
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
Gamers profit very little, relatively speaking, from these faster PCIe speeds. Yes, using a PCIe 4.0 graphics card in a PCIe 4.0 capable system can improve performance a bit, but it's very limited. I did a test a month or two back with RTX 3090 Ti running in Gen4 vs. Gen3 mode on my Alder Lake testbed. The result? 9% faster at 4K, 6% faster at 1440p, 2–4% faster at 1080p. Yes, that's enough to make it useful, but if we were to double that again to PCIe 5.0 speeds I suspect even 4K would only see a 2–3% increase.

Incidentally, I also tested with Gen2 speeds. Gen3 was about 6% faster at 1080p, 5% faster at 1440p, and 3% faster at 4K. So there's some margin of error stuff, and I also wasn't sure if changing the speed only affected the PCIe slots or if it was also changing the CPU to chipset speed. Considering PCIe 2.0 hasn't been the standard on PCs for about a decade, that's going pretty far back to only see a 12% increase in performance (from Gen2 to Gen4).
I was specifically talking about the near future, and also DirectStorage, which isn't a thing yet.
And I've yet to see a game where NVMe SSDs
It has already happened in a few games; maybe not wrecked, but clearly faster. The big evolution will come with games that use DirectStorage, not really before then. For GPUs you already need 4.0, unless you want to waste 5% performance on the current gen, which doesn't make any sense if you bought a 3080 or higher: it means you're someone willing to shell out top dollar for performance and not willing to accept losing 5-10% to a weaker bus. This gap will widen with the new generations at the end of the year. PCIe 3.0 is fine now, but soon it won't be anymore.
 
It has already happened in a few games; maybe not wrecked, but clearly faster. The big evolution will come with games that use DirectStorage, not really before then. For GPUs you already need 4.0, unless you want to waste 5% performance on the current gen, which doesn't make any sense if you bought a 3080 or higher: it means you're someone willing to shell out top dollar for performance and not willing to accept losing 5-10% to a weaker bus. This gap will widen with the new generations at the end of the year. PCIe 3.0 is fine now, but soon it won't be anymore.
I don't see DirectStorage really being a thing anytime soon. Even the current generation of consoles, despite the companies touting their need for fast SSDs, haven't really done anything to show they absolutely need it.

With 256GB/s of PCIe bandwidth, UMA/tiny-VRAM graphics could climb many rungs up the performance ladder.
That's assuming said lower end cards use 16 lanes. AMD seems more than happy to chop off as many as they think they can get away with.

EDIT: On a side note, I find it amusing that we're relying on PCIe as a VRAM bus again, since this was once a thing: https://en.wikipedia.org/wiki/TurboCache
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
I don't see DirectStorage really being a thing anytime soon. Even the current generation of consoles, despite the companies touting their need for fast SSDs, haven't really done anything to show they absolutely need it.
Oh, but it will. AMD is already using it as an ad for their RX 6x50 refresh cards, and, it seems, as an ad for Windows 11 in general. It could be a few AAA games at the beginning until it's more widespread later. I'm thinking 2023.
 

InvalidError

Titan
Moderator
That's assuming said lower end cards use 16 lanes. AMD seems more than happy to chop off as many as they think they can get away with.

EDIT: On a side note, I find it amusing that we're relying on PCIe as a VRAM bus again, since this was once a thing: https://en.wikipedia.org/wiki/TurboCache
A PCIe x16 interface is much cheaper than putting 8GB of VRAM on the GPU at current VRAM prices.

The TurboCache/HyperMemory versions of low-end GPUs were premium versions of their ultra-low-budget memory-less counterparts that relied entirely on system memory.
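As a rough sanity check on the PCIe-as-VRAM idea, here's a hedged comparison of the article's headline PCIe 7.0 x16 figure against some illustrative GDDR6 buses; the 18 Gb/s pin speed and the bus widths are assumptions for the example, not any specific card's spec:

```python
# Headline PCIe 7.0 x16 bandwidth (~256 GB/s per direction, per the
# article) vs. illustrative GDDR6 configurations. 18 Gb/s per pin is
# an assumption, not a quote from a particular product.
PCIE7_X16_GBS = 256.0

def gddr6_gbs(bus_width_bits: int, gbps_per_pin: float = 18.0) -> float:
    """Peak GDDR6 bandwidth in GB/s for a given bus width."""
    return bus_width_bits * gbps_per_pin / 8

for width in (64, 128, 256):
    bw = gddr6_gbs(width)
    print(f"{width}-bit GDDR6 @ 18Gbps: {bw:.0f} GB/s "
          f"({bw / PCIE7_X16_GBS:.2f}x PCIe 7.0 x16)")
```

By that arithmetic, a PCIe 7.0 x16 link lands between a 64-bit and a 128-bit GDDR6 bus, which is why system-memory-backed graphics starts to look plausible again at these speeds.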
 
A PCIe x16 interface is much cheaper than putting 8GB of VRAM on the GPU at current VRAM prices.
Well again, AMD seems more than happy to remove lanes for no real reason other than maybe cost cutting measures. So 256GB/sec would be nice if the GPU designer actually made it with 16 lanes in mind.

There's also the point that people looking at budget cards may not be on the latest and greatest mainboards. Like the 5500XT made no real sense to be on PCIe 4.0 because there were no budget PCIe 4.0 boards. And even then that likely would've required a hardware change for someone who was in the market for such.
 

InvalidError

Titan
Moderator
Well again, AMD seems more than happy to remove lanes for no real reason other than maybe cost cutting measures.
I doubt AMD was quite that happy with the lambasting they got from reviewers. If you cut corners on memory size and it causes obvious performance problems, you have to make sure the GPU can offset the deficit elsewhere. Hopefully AMD learned its lesson.
 

escksu

Reputable
BANNED
Aug 8, 2019
878
354
5,260
Or a lot in the RX6500's case.

With 256GB/s of PCIe bandwidth, UMA/tiny-VRAM graphics could climb many rungs up the performance ladder.

I don't think that will happen, because main memory won't be able to keep up.

I would say the main benefit will be in notebooks and ultraportables. Faster PCIe means fewer lanes are needed for the same bandwidth, which in turn saves board space (space is at a premium in laptops).
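A quick illustrative sketch of that lane-count argument; the per-lane figures are approximate, and the "match a Gen3 x16 link" target is an assumption for the example:

```python
import math

# Approximate usable per-lane throughput (GB/s) by PCIe generation.
PER_LANE_GBS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938,
                "6.0": 7.563, "7.0": 15.125}

TARGET_GBS = 15.75  # assumption: match a full Gen3 x16 link

def lanes_needed(target_gbs: float, per_lane_gbs: float) -> int:
    """Smallest power-of-two link width that meets the target."""
    n = math.ceil(target_gbs / per_lane_gbs)
    return 1 << (n - 1).bit_length()

for gen, gbs in PER_LANE_GBS.items():
    print(f"PCIe {gen}: x{lanes_needed(TARGET_GBS, gbs)} "
          f"to reach ~{TARGET_GBS} GB/s")
```

The same bandwidth that took sixteen Gen3 lanes takes two Gen7 lanes, which is the routing and pin-count saving being argued for here.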
 

InvalidError

Titan
Moderator
I would say the main benefit will be in notebooks and ultraportables. Faster PCIe means fewer lanes are needed for the same bandwidth, which in turn saves board space (space is at a premium in laptops).
Space is a non-issue: if you have two chips side by side, an x16 interface would easily fit in the dead space between the two. The only reason x16 takes any meaningful amount of space on the desktop is that the slot is 3X longer than the chips it connects, which wastes a ton of space fanning traces across the whole thing on both sides and keeping their lengths matched.

If the board space of a PCIe x16 interface genuinely were an issue, they'd have to trim the 128-bit-plus-control DRAM interface too, which requires 160+ traces, almost 3X as many as PCIe x16.
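For reference, a tiny sketch of the trace-count arithmetic behind that comparison; the DRAM address/command/clock count is an approximation:

```python
# Rough signal-trace counts. Each PCIe lane is one differential pair
# in each direction (2 wires per pair, TX + RX per lane).
pcie_lanes = 16
pcie_traces = pcie_lanes * 2 * 2   # = 64 signal traces
dram_traces = 128 + 40             # 128 data bits + addr/cmd/clock (approx.)

print(f"PCIe x16: ~{pcie_traces} signal traces")
print(f"128-bit DRAM: ~{dram_traces}+ traces "
      f"(~{dram_traces / pcie_traces:.1f}x PCIe x16)")
```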
 

JWNoctis

Respectable
Jun 9, 2021
443
108
2,090
Space is a non-issue: if you have two chips side by side, an x16 interface would easily fit in the dead space between the two. The only reason x16 takes any meaningful amount of space on the desktop is that the slot is 3X longer than the chips it connects, which wastes a ton of space fanning traces across the whole thing on both sides and keeping their lengths matched.

If the board space of a PCIe x16 interface genuinely were an issue, they'd have to trim the 128-bit-plus-control DRAM interface too, which requires 160+ traces, almost 3X as many as PCIe x16.
Hard to say whether they'd still keep the current connector at that data rate. Those (the mechanical slots themselves, not the bus or the trace-routing black magic) really haven't changed all that much in about forever, and changing or removing them could surely eliminate a lot of headaches with stray capacitance/inductance, impedance matching, propagation delay, signal integrity, et cetera.

Though I guess that would eliminate another recognizable bit of what's expected in a PC, and there isn't much left.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
I still think AMD/Nvidia need to bifurcate the PCIe x16 into an x12/x4 setup and have directly attached gaming storage on the back of the video card for ultra-low-latency transfers between the storage drive and the GPU.

Kind of like a simplified Radeon SSG, but mix in DirectStorage and don't require the game devs to program for it.
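Just to illustrate what that hypothetical x12/x4 split would yield per generation (approximate per-lane figures; note that standard bifurcation options are x8/x8 or x8/x4/x4, so an x12/x4 split would be non-standard):

```python
# Approximate usable per-lane throughput (GB/s). The x12/x4 split is
# the poster's hypothetical, not a standard bifurcation mode.
PER_LANE_GBS = {"4.0": 1.969, "5.0": 3.938, "6.0": 7.563}

for gen, gbs in PER_LANE_GBS.items():
    print(f"PCIe {gen}: GPU x12 ~{12 * gbs:.1f} GB/s, "
          f"SSD x4 ~{4 * gbs:.1f} GB/s")
```

Even at Gen4 the x4 carve-out matches a full-speed NVMe drive, so the idea costs the GPU relatively little link bandwidth.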
 

JOSHSKORN

Distinguished
Oct 26, 2009
2,394
19
19,795
But will it be 2030 before a GPU is manufactured that can actually saturate PCIe 7.0? Moreover, who would need such a GPU? Maybe one PC per household serving a bunch of thin clients; that is, if people actually knew how to set that up, which is doubtful.
 

Alpha_Lyrae

Commendable
Nov 13, 2021
18
15
1,515
Well again, AMD seems more than happy to remove lanes for no real reason other than maybe cost cutting measures. So 256GB/sec would be nice if the GPU designer actually made it with 16 lanes in mind.

There's also the point that people looking at budget cards may not be on the latest and greatest mainboards. Like the 5500XT made no real sense to be on PCIe 4.0 because there were no budget PCIe 4.0 boards. And even then that likely would've required a hardware change for someone who was in the market for such.

It has more to do with optimizing silicon size and using only what the GPU needs, especially as leading-edge nodes get more and more expensive; you can't afford to waste any die area, and with fewer analog PHYs to power up, you save power. PCIe PHYs are analog circuits, so they don't get logic shrinks the way the GPU proper does: roughly only a 20% reduction from N7 to N5, versus the ~1.84x logic density improvement (about a 45% area reduction) for digital logic*. That's quite a bit of space that can go into other things.

*Source: https://en.wikichip.org/wiki/5_nm_lithography_process

It's a more nuanced decision than just cost-cutting, though that does have a large influence given the market targets for such products. Plus, the 6500 XT is very clearly a laptop product: it leans on the APU to carry the video codecs so the dGPU doesn't power up and eat battery. Still, some decisions were questionable. It shouldn't be a desktop part at all.
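Putting illustrative numbers on that scaling argument; the 80/20 logic/analog area split is made up for the example, while the scaling factors follow the WikiChip figures cited above:

```python
# Illustrative only: a hypothetical 100 mm^2 die split between digital
# logic and analog PHYs, shrunk from N7 to N5. Scaling factors per the
# WikiChip N5 page cited above; the 80/20 split is an assumption.
logic_mm2, analog_mm2 = 80.0, 20.0

LOGIC_SCALE = 1 / 1.84   # ~1.84x density improvement for digital logic
ANALOG_SCALE = 0.80      # analog shrinks only ~20%

new_logic = logic_mm2 * LOGIC_SCALE
new_analog = analog_mm2 * ANALOG_SCALE
total = new_logic + new_analog
print(f"N7: {logic_mm2 + analog_mm2:.0f} mm^2 -> N5: {total:.1f} mm^2")
print(f"Analog share grows from {analog_mm2 / 100:.0%} "
      f"to {new_analog / total:.0%} of the die")
```

That growing analog share is exactly why trimming PHY lanes looks attractive on expensive leading-edge nodes.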
 
Honestly, three years for a new version seems unsustainable.

At three years you get a new version; by five years you get devices using the last one, but the stuff using it is expensive; and by the time it's cheap enough to be widespread, the next version is out.

And we get no real-world benefit after a certain point.

Even the newest GPUs/SSDs won't truly use 5.0.

Technology can make stuff, but making stuff that uses said technology takes much longer.
 
It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more IO bandwidth to optimize the amount of storage and IOPS per system.

It has nothing to do with gaming. Consumers get it simply because it gives AMD, Intel, board manufacturers, etc. one more reason to market the heck out of next-gen stuff and try to jack up prices some more along the way.
GG
 
Well again, AMD seems more than happy to remove lanes for no real reason other than maybe cost cutting measures. So 256GB/sec would be nice if the GPU designer actually made it with 16 lanes in mind.

There's also the point that people looking at budget cards may not be on the latest and greatest mainboards. Like the 5500XT made no real sense to be on PCIe 4.0 because there were no budget PCIe 4.0 boards. And even then that likely would've required a hardware change for someone who was in the market for such.
Poorly implemented multi-generational product segmentation is the only reason they kneecapped that card.

If the 6500 XT had a 128-bit memory bus and 8 lanes of PCIe, it would be a different class of card altogether... I'm making wild assumptions here, but it wouldn't be the slag it is today, instead forcing users to spend more to get a product superior to the 580/5500 XT, both of which sadly outperform this 6500 XT, while the 5600/5700 cards were stuck in the mines.

I really wanted a 5600 XT, btw, but when it came time to upgrade there was another crypto(virus).