The PCI-SIG standards committee announced that PCIe 7.0 will provide up to 512 GB/s of bandwidth (bi-directionally, across an x16 link) when the specification arrives in 2025.
PCIe 7.0 to Reach 512 GB/s, Arrive in 2025: Read more
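For context on where that headline figure comes from: each PCIe generation doubles the per-lane transfer rate, and PCI-SIG's 512 GB/s number counts both directions of an x16 link. A minimal back-of-the-envelope sketch, ignoring encoding and FEC overhead (which the PAM4-based generations keep close to 1:1):

```python
# Back-of-the-envelope PCIe x16 bandwidth by generation, ignoring
# encoding and FEC overhead. Per-lane transfer rates are in GT/s;
# each generation doubles the one before it.
rates = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64, "7.0": 128}

for gen, gt in rates.items():
    per_dir = gt * 16 / 8  # 16 lanes, 8 bits per byte -> GB/s, one direction
    print(f"PCIe {gen}: x16 ~{per_dir:.0f} GB/s per direction, "
          f"~{2 * per_dir:.0f} GB/s bidirectional")
# PCIe 7.0: x16 ~256 GB/s per direction, ~512 GB/s bidirectional
```

Real-world throughput lands a little under these figures once protocol overhead is accounted for.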
> It's very interesting how much time PCIE 4.0 took in comparison with all the other versions.

It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more I/O bandwidth to optimize the amount of storage and IOPS per system.
> It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more I/O bandwidth to optimize the amount of storage and IOPS per system.

While the first thing you said makes sense, the second doesn't. You should consider actually reading comments before replying to them, because I made a general comment about what is sufficient for a gamer, not about what motivates companies to push for new standards and marketing.
It has nothing to do with gaming. Consumers get it simply because it gives AMD, Intel, board manufacturers, etc. one more reason to market the heck out of next-gen stuff and try to jack up prices some more along the way.
> And saying it has "nothing" to do with gaming is false as well, or at least an overstatement. It has to do with gaming to a lesser extent, as gamers profit from it, just later down the road.

Gamers profit very little, relatively speaking, from these faster PCIe speeds. Yes, using a PCIe 4.0 graphics card in a PCIe 4.0-capable system can improve performance a bit, but it's very limited. I did a test a month or two back with an RTX 3090 Ti running in Gen4 vs. Gen3 mode on my Alder Lake testbed. The result? 9% faster at 4K, 6% faster at 1440p, 2–4% faster at 1080p. Yes, that's enough to make it useful, but if we were to double that again to PCIe 5.0 speeds, I suspect even 4K would only see a 2–3% increase.
> The enterprise market is what drives everything. Consumer products are generally "hand me downs" from that market!

That isn't necessarily a bad thing. It just means enterprise folk get to "beta test" all the latest and greatest first.
> And saying it has "nothing" to do with gaming is false as well, or at least an overstatement. It has to do with gaming to a lesser extent, as gamers profit from it, just later down the road.

And when will that be?
> Gamers profit very little, relatively speaking, from these faster PCIe speeds. Yes, using a PCIe 4.0 graphics card in a PCIe 4.0-capable system can improve performance a bit, but it's very limited. I did a test a month or two back with an RTX 3090 Ti running in Gen4 vs. Gen3 mode on my Alder Lake testbed. The result? 9% faster at 4K, 6% faster at 1440p, 2–4% faster at 1080p. Yes, that's enough to make it useful, but if we were to double that again to PCIe 5.0 speeds, I suspect even 4K would only see a 2–3% increase.

I was specifically talking about the near future, and also DirectStorage, which isn't a thing yet.
Incidentally, I also tested with Gen2 speeds. Gen3 was about 6% faster at 1080p, 5% faster at 1440p, and 3% faster at 4K. So there's some margin of error stuff, and I also wasn't sure if changing the speed only affected the PCIe slots or if it was also changing the CPU to chipset speed. Considering PCIe 2.0 hasn't been the standard on PCs for about a decade, that's going pretty far back to only see a 12% increase in performance (from Gen2 to Gen4).
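Worth noting for anyone re-running these numbers: the per-generation gains compound multiplicatively rather than adding up, and the 4K figures from the two tests above reproduce that ~12% on their own. A minimal sketch, using the quoted percentages as given:

```python
# Compounding the 4K results quoted above: Gen2 -> Gen3 was ~3% faster,
# and Gen3 -> Gen4 was ~9% faster; the combined uplift multiplies, not adds.
gen2_to_gen3 = 1.03
gen3_to_gen4 = 1.09

total = gen2_to_gen3 * gen3_to_gen4
print(f"Gen2 -> Gen4 at 4K: ~{(total - 1) * 100:.0f}% faster")
# Gen2 -> Gen4 at 4K: ~12% faster
```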
> And I've yet to see a game where NVMe SSDs

It already happened in a few games: maybe nothing drastic, but clearly faster. The big evolution will come with games that use DirectStorage, not really earlier. For GPUs you already need 4.0, unless you want to waste 5% performance on the current gen, which doesn't make any sense if you bought a 3080 or higher; it means you're someone who is willing to shell out top dollar for performance and not willing to accept losing 5-10% to the weaker bus. This gap will widen with the new generations at the end of the year. PCIe 3.0 is fine now, but soon it won't be.
> Gamers profit very little, relatively speaking, from these faster PCIe speeds. Yes, using a PCIe 4.0 graphics card in a PCIe 4.0-capable system can improve performance a bit

Or a lot in the RX 6500's case.
> It already happened in a few games: maybe nothing drastic, but clearly faster. The big evolution will come with games that use DirectStorage, not really earlier. For GPUs you already need 4.0, unless you want to waste 5% performance on the current gen, which doesn't make any sense if you bought a 3080 or higher; it means you're someone who is willing to shell out top dollar for performance and not willing to accept losing 5-10% to the weaker bus. This gap will widen with the new generations at the end of the year. PCIe 3.0 is fine now, but soon it won't be.

I don't see DirectStorage really being a thing anytime soon. Even the current generation of consoles, despite the companies touting their need for fast SSDs, hasn't really shown that it absolutely needs them.
> With 256GB/s of PCIe bandwidth, UMA/tiny-VRAM graphics could climb many rungs up the performance ladder.

That's assuming said lower-end cards use 16 lanes. AMD seems more than happy to chop off as many as they think they can get away with.
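To put that 256 GB/s in perspective: it is in the same ballpark as the local VRAM bandwidth of a typical entry-level card. A rough comparison, assuming a common 128-bit GDDR6 configuration (the bus width and data rate here are illustrative assumptions, not tied to any specific product):

```python
# Rough comparison: PCIe 7.0 x16 host bandwidth vs. a typical entry-level
# card's local VRAM bandwidth. Memory figures are illustrative assumptions.
pcie7_x16_per_dir = 128 * 16 / 8                # GT/s * lanes / 8 -> 256 GB/s

gddr6_data_rate = 14                            # Gbps per pin (common low-end speed)
bus_width_bits = 128                            # 128-bit memory bus
vram_bw = gddr6_data_rate * bus_width_bits / 8  # -> 224 GB/s

print(f"PCIe 7.0 x16: {pcie7_x16_per_dir:.0f} GB/s per direction")
print(f"128-bit GDDR6 @ {gddr6_data_rate} Gbps: {vram_bw:.0f} GB/s")
```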
> I don't see DirectStorage really being a thing anytime soon. Even the current generation of consoles, despite the companies touting their need for fast SSDs, hasn't really shown that it absolutely needs them.

Oh, but it will. AMD is already using it as an ad for their 6x50 refresh cards, and seemingly as an ad for Windows 11 in general. It could be a few AAA games at the beginning until it's more widespread later. Thinking 2023.
> Say it louder for those in the back.

Still waiting for that inexpensive 10Gb Ethernet hand-me-down...
> That's assuming said lower-end cards use 16 lanes. AMD seems more than happy to chop off as many as they think they can get away with.

A PCIe x16 interface is much cheaper than putting 8GB of VRAM on the GPU at current VRAM prices.
EDIT: On a side note, I find it amusing that we're relying on PCIe as a VRAM bus again, since this was already a thing once: https://en.wikipedia.org/wiki/TurboCache
> A PCIe x16 interface is much cheaper than putting 8GB of VRAM on the GPU at current VRAM prices.

Well, again, AMD seems more than happy to remove lanes for no real reason other than maybe cost-cutting measures. So 256 GB/s would be nice, if the GPU designer actually made it with 16 lanes in mind.
> Well, again, AMD seems more than happy to remove lanes for no real reason other than maybe cost-cutting measures.

I doubt AMD was quite that happy with the lambasting they got from reviewers. If you cut corners on memory size and it causes obvious performance problems, you have to make sure the GPU can offset the deficit elsewhere. Hopefully AMD learned its lesson.
> I would say the main benefit will be in notebooks and ultraportables. Faster PCIe means fewer lanes are needed for the same bandwidth, and space is at a premium in laptops.

Space is a non-issue: if you have two chips side by side, an x16 interface would easily fit in the dead space between the two. The only reason x16 takes any meaningful amount of space on the desktop is that the slot is 3X longer than the chips it connects, which wastes a ton of space fanning traces across the whole thing on both sides and keeping their lengths matched.
> Space is a non-issue: if you have two chips side by side, an x16 interface would easily fit in the dead space between the two. The only reason x16 takes any meaningful amount of space on the desktop is that the slot is 3X longer than the chips it connects, which wastes a ton of space fanning traces across the whole thing on both sides and keeping their lengths matched.

Hard to say whether they'd still keep the current connector at that data rate. Those (the mechanical slots themselves, not the bus or the trace-routing black magic) really haven't changed all that much since about forever, and changing or removing them could surely eliminate a lot of headache with stray capacitance/inductance, impedance matching, propagation delay, signal integrity, et cetera.
If the board space of a PCIe x16 interface genuinely were an issue, then they'd also have to trim the 128-bit-plus-controls DRAM interface, which requires 160+ traces, almost 3X as many as PCIe x16.
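A rough trace count for anyone keeping score; the sideband and control figures below are ballpark assumptions, since exact pinouts vary by implementation:

```python
# Ballpark signal-trace counts (approximate; exact counts vary by design).
pcie_lanes = 16
pcie_traces = pcie_lanes * 4 + 4  # TX+/TX-/RX+/RX- per lane, plus a refclk
                                  # pair and a couple of sideband signals

dram_data = 128                   # 128-bit data bus, single-ended
dram_ctrl = 40                    # address/command/clocks/strobes (rough guess)
dram_traces = dram_data + dram_ctrl

print(f"PCIe x16: ~{pcie_traces} traces")
print(f"128-bit DRAM: ~{dram_traces} traces "
      f"({dram_traces / pcie_traces:.1f}x as many)")
```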
There's also the point that people looking at budget cards may not be on the latest and greatest motherboards. The 5500 XT, for example, made no real sense as a PCIe 4.0 card because there were no budget PCIe 4.0 boards at the time, and taking advantage of it would likely have required a platform upgrade for anyone shopping in that segment.
> It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more I/O bandwidth to optimize the amount of storage and IOPS per system.

GG
> Well, again, AMD seems more than happy to remove lanes for no real reason other than maybe cost-cutting measures. So 256 GB/s would be nice, if the GPU designer actually made it with 16 lanes in mind.

Poorly implemented multi-generational product segmentation is the only reason they kneecapped that card.