News PCIe 7.0 to Reach 512 GB/s, Arrive in 2025

Page 2 - Tom's Hardware community discussion

InvalidError

Titan
Moderator
Hard to say whether they'd still keep the current connector at that data rate.
The PCIe slot is a bit overdue for an upgrade. Mechanically, it could use at least one extra key at x8 for reinforcement if not every x4 so board manufacturers wouldn't need to put steel cages around connectors to hold them together. It would also be a great opportunity to take another step towards 12VO and bumping slot power to 150-200W on x16 slots by bookending the x16 data area with a second 12VO power tab. After all, if you need 128+GB/s of IO bandwidth, you almost certainly need more than 75W.
 
  • Like
Reactions: JWNoctis
Or a lot, in the RX 6500's case.

With 256GB/s of PCIe bandwidth, UMA/tiny-VRAM graphics could climb many rungs up the performance ladder.
Even the 6500 XT only gains about 15% on average, IIRC. I did see a 28% gain in Borderlands 3, but other games were in the low single digits. And really, when you're in a situation where PCIe bandwidth matters that much, you're usually already running settings that the card can't handle.

While a Gen7 PCIe interface will do 256GB/s on x16, let's also be realistic: by the time we have consumer hardware with Gen7 support, and budget cards with Gen7 support, I suspect we'll be looking at 16GB for midrange and higher GPUs, and the cost of 8GB of memory will be relatively small. We'll probably be doing GDDR7 or maybe even GDDR8 by then, which means a 128-bit interface might be running at something obscene by today's standards, like 35-40Gbps, which would be 560-640 GB/s -- more than double the PCIe speed yet again. Also, the cost of a motherboard and graphics card that can both support Gen7 speeds will not be trivial.
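Those numbers can be sanity-checked with a quick back-of-the-envelope sketch in Python. The 35-40Gbps GDDR pin rates are the post's speculation, not a shipping spec; the 128GT/s per-lane rate is the announced PCIe 7.0 target, and the results ignore encoding/protocol overhead:

```python
# Back-of-the-envelope bandwidth math from the post above.
# Assumed figures: 128 GT/s per PCIe 7.0 lane (raw, one direction),
# and a hypothetical future GDDR part running 35-40 Gbps per pin.

def pcie_bw_gbs(gt_per_lane: float, lanes: int) -> float:
    """Raw PCIe bandwidth in GB/s, one direction (8 bits per byte)."""
    return gt_per_lane * lanes / 8

def gddr_bw_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """GDDR memory bandwidth in GB/s for a given bus width."""
    return bus_width_bits * gbps_per_pin / 8

pcie7_x16 = pcie_bw_gbs(128, 16)   # 256.0 GB/s, matching the post
gddr_low  = gddr_bw_gbs(128, 35)   # 560.0 GB/s
gddr_high = gddr_bw_gbs(128, 40)   # 640.0 GB/s
print(pcie7_x16, gddr_low, gddr_high, gddr_low / pcie7_x16)
```

Even at the low end of the speculated pin rate, the hypothetical 128-bit card has more than double the bandwidth of its PCIe 7.0 x16 link, which is the "more than double yet again" claim above.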

So you're suggesting people will buy a high-end motherboard, only to pair it with a crappy graphics card that might occasionally make use of the added PCIe bandwidth. More likely, the future "RX 9500 XT" with a 64-bit memory interface will also have an x4 Gen6 PCIe interface, just to save costs.
 
  • Like
Reactions: LuxZg

Makaveli

Splendid
Still waiting for that inexpensive 10Gb Ethernet hand-me-down...

Most are, but how many people actually have access to 10Gbps WAN connections at home?

Even my fiber ISP only offers 3Gbps as the fastest a home user can currently get, and most other people are stuck on cable with even slower speeds. When there is a market for it, you will see the products!
 

kanewolf

Titan
Moderator
It is very simple: even the server space had limited use for anything faster than 3.0 until SSDs came along. Now that SSDs are getting bigger and faster, effectively eliminating the biggest bottleneck, the server market needs massively more IO bandwidth to optimize the amount of storage and IOPS per system.
100GE to the host, to get the data onto and off each host, is as big an issue as the storage itself.
 

InvalidError

Titan
Moderator
While a Gen7 PCIe interface will do 256GB/s on x16, let's also be realistic: by the time we have consumer hardware with Gen7 support, and budget cards with Gen7 support, I suspect we'll be looking at 16GB for midrange and higher GPUs, and the cost of 8GB of memory will be relatively small.
How much did 32GB of decent DDR3 cost in 2012? How much does 32GB of contemporary decent DDR4 cost? About $140 in both cases. The cost per GB hasn't changed much if at all in the last 10 years apart from the DRAM price bubble where everything more than doubled.

Most of the fab progress that was made and should have lowered DRAM prices over time got countered by architecture and manufacturing tweaks to push lower operating voltages, higher memory clocks, reduced latencies, increased concurrency by splitting DRAM chips into more banks, etc., to the point that we have near-zero net change. DDR5 may never reach price parity with DDR4's lows due to the on-DIMM buffer chip and VRM. On the GDDR6X/7+ front, needing a full-blown analog front-end to decode multi-level signals at 15+GHz isn't exactly cheap, so I wouldn't expect large amounts of that landing on budget-conscious GPUs any time soon.

100GE to the host, to get the data onto and off each host is as big an issue as the storage.
What data? Before SSDs became sufficiently cost-effective for servers' primary storage, a 32-drive array would barely keep up with dual 10GbE for anything other than sequential transfers.

The need for 100+GbE in servers arose primarily from servers getting kitted out either with NVMe SSDs or larger SAS SSD arrays.
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
If the 6500 XT had a 128-bit memory bus and 8 lanes of PCIe, it would be a different class of card altogether... I'm making wild assumptions here, but it wouldn't be the slag it is today, forcing users to spend more to get a superior product in the 580/5500 XT, which both sadly outperform this 6500 XT.
Not really. The 6500 XT's first and foremost problem is its 4 GB of VRAM, and that's it. To a lesser extent, the x4 bus if you don't have PCIe Gen 4. Anything else is compensated for by the Infinity Cache; it does not need 128-bit. Also, it is faster than the 580; it's only slower than the 5500 XT with 8 GB.
So you're suggesting people will buy a high-end motherboard,
I don't think B550 is high end, and it could very well be easy to get a mid-range board in the future that has the newest PCIe standard; we will see. The situation wasn't as bad as people made it seem: anyone who wants to pair a 6500 XT with Gen 4 PCIe can now buy cheap enough boards for it, and if AMD decides to use PCIe Gen 7 in the future for another low-end card to save lanes, it will be entirely doable with mid-range mainboards as well. However, it's unlikely that future low-end cards will even need the newest PCIe unless we're really talking about 4 lanes or fewer on the card. We will see.
 

kanewolf

Titan
Moderator
What data? Before SSDs became sufficiently cost-effective for servers' primary storage, a 32-drives array would barely keep up with dual 10GbE for anything other than sequential transfers.
Have you ever worked in high-performance computing? Not 32 drives but 3,200 drives... with InfiniBand or 16Gb Fibre Channel to 100 controllers. Where I worked, we had 64Gb connectivity to our main compute hosts with multiple interface cards; 100GE would have been much easier. I made a living getting data onto and off high-performance computers.
 
  • Like
Reactions: digitalgriffin
Space is a non-issue: if you have two chips side by side, an x16 interface would easily fit in the dead space between the two. The only reason x16 takes any meaningful amount of space on the desktop is because the slot is 3X longer than the chips it connects, which wastes a ton of space fanning traces across the whole thing on both sides and keeping their lengths matched.

If the board space of a PCIe x16 interface were genuinely an issue, then they'd have to trim the 128-bit-plus-control DRAM interface too, which requires 160+ traces, almost 3X as many as PCIe x16.
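The trace-count comparison can be sketched with rough numbers. Each PCIe lane is two differential pairs (four wires); the ~32 address/command/control lines for a 128-bit DRAM interface are my rough assumption for illustration, since the exact count varies by memory type:

```python
# Rough trace-count comparison, assuming 4 wires per PCIe lane
# (TX pair + RX pair) and ~32 addr/cmd/ctrl lines on the DRAM side.
PCIE_WIRES_PER_LANE = 4
pcie_x16_traces = 16 * PCIE_WIRES_PER_LANE   # 64 high-speed signal traces
dram_128bit_traces = 128 + 32                # data + addr/cmd/ctrl = 160
print(pcie_x16_traces, dram_128bit_traces,
      dram_128bit_traces / pcie_x16_traces)  # roughly a 2.5x ratio
```

160 versus 64 works out to about 2.5x, the "almost 3X" figure above, before counting clocks, strobes, and power/ground stitching on either interface.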

I think memory traces are more an issue with signal integrity, layer cost, and legacy. More traces = more layers = more cost. This is why mITX motherboards cost a premium. They have to have more layers to carry all the signal lanes in a smaller space.
 
I don't think B550 is high end, and it could very well be easy to get a mid-range board in the future that has the newest PCIe standard; we will see. The situation wasn't as bad as people made it seem: anyone who wants to pair a 6500 XT with Gen 4 PCIe can now buy cheap enough boards for it, and if AMD decides to use PCIe Gen 7 in the future for another low-end card to save lanes, it will be entirely doable with mid-range mainboards as well. However, it's unlikely that future low-end cards will even need the newest PCIe unless we're really talking about 4 lanes or fewer on the card. We will see.
I'm not talking about current motherboards, I'm talking about future PCIe 7.0 compatible motherboards. They're still ~7 years away, give or take a year, but rest assured they will be more expensive. Every generation of PCIe gets more expensive to implement, which is why there are still "modern" boards that don't have PCIe 4.0 support, and why AMD's Zen 4 will have lots of options that omit PCIe 5.0 support for graphics. Carry that forward two more generations and I suspect we'll have budget boards that still only use PCIe 5.0, with mainstream going to PCIe 6.0 and only the enthusiast and server products opting to include PCIe 7.0. And then when PCIe 8.0 and 9.0 inevitably come along, we may see an even wider spread of supported speeds. LOL
How much did 32GB of decent DDR3 cost in 2012? How much does 32GB of contemporary decent DDR4 cost? About $140 in both cases. The cost per GB hasn't changed much if at all in the last 10 years apart from the DRAM price bubble where everything more than doubled.
I know I upgraded to 32GB of DDR4-3200 for about $100 on one system before the pandemic messed everything up. DDR5 is more expensive, but it's also different from GDDR5/GDDR6, so I'm not sure what will happen there. Right now, 32GB (2x16GB) DDR4-3200 CL16 starts at around $90, with nicer CL14 kits going for $200 or so (and delivering maybe 3% more performance overall). DDR5-5600 CL30 kits go for over $300, though I suspect real pricing will stabilize at under $200 in the coming year for such kits. Anyway, the point is that GPUs with 4GB are now the bare minimum, and by the time PCIe 7.0 rolls around I'd expect 8GB at least, possibly 12GB or more. Three 4GB GDDR7 chips on a 96-bit interface will probably be a "budget" solution in 2028/2029.
 

InvalidError

Titan
Moderator
I think memory traces are more an issue with signal integrity, layer cost, and legacy. More traces = more layers = more cost. This is why mITX motherboards cost a premium. They have to have more layers to carry all the signal lanes in a smaller space.
Maybe in the olden days, when board manufacturers could cram 128-bit memory interfaces into four layers and had a little more slack to afford detours. Now that six layers are practically mandatory even on lower-end boards to reliably accommodate 3200+MT/s RAM, PCIe 4.0, USB3 Gen 2/USB4/TB4, power planes for CPUs that may pull 150+A transients, etc., I doubt there is any memory-related reason for ITX to have more layers. Looking at ITX and ATX board layouts, they are fundamentally unchanged, since the CPU and DIMMs have to be as close as practical limits allow out of necessity, leaving little to no room for board manufacturers to improvise in that area.

If anything, ITX boards having only one DIMM per channel and SO-DIMMs being 60mm narrower (a lot less space needed for length-matching) should make memory layout considerably simpler.
 

thisisaname

Distinguished
Feb 6, 2009
799
438
19,260
The preliminary specification of PCIe 5.0 was announced in 2017, the final version was released in May 2019, and it was 2022 before it was supported in products. At that rate, it is going to be well after 2030 before we see PCIe 7.0 on the desktop?

As we saw with the jump to PCIe 4.0 and 5.0, the length of PCIe traces will again shorten due to the faster signaling rates. This means the minimum allowable distance without additional componentry between the PCIe root device, like a CPU, and the end device, like a GPU, will shorten. As a result, motherboards will need more retimers and thicker PCBs comprised of higher-quality materials than we saw with prior generations of the interface. This means that PCIe 7.0 support will result in yet another hike in motherboard pricing.

This does not look good.
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
Yes: either they will use another, cheaper material for data transfer, saving costs on faster buses, or the costs will rise with every generation of PCIe from 5.0 onward. Gens 1 through 4 were less complicated and cheaper to produce; 5.0 and future buses are a departure from that.
 

peachpuff

Reputable
Apr 6, 2021
592
618
5,760
Most are but how many people actually have access to 10gbps Wan connections at home?

Even my fiber ISP only offers 3gbps connections as the fastest you can get as a home user currently and most other people are stuck on cable with even slower speeds. When there is a market for it you will see the products!
Less WAN, more LAN: 10Gbit switches are insanely pricey.
 

InvalidError

Titan
Moderator
Because these things take a long time to land; it's just a specification at this point, much less a finished product.
They don't really take that long.

It took eight years for PCIe 3.0 to get superseded, mainly because it took that long for SSDs to invade datacenters in droves, lighting the fire under engineers' asses to come up with something faster. PCIe 4.0 was far too little, far too late by the time it hit the market, hence its nearly immediate replacement by 5.0 merely two years later, with 6.0 possibly entering the market as soon as next year and 7.0 slated for 2-3 years after that.

When there is an urgent need to get something done for a high-volume market with deep pockets, things can go pretty quickly.
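That cadence can be laid out with the published per-lane rates. The spec years below are as I recall them (7.0's is the announced target), and the x16 figures are raw, ignoring per-generation encoding/FLIT overhead:

```python
# PCIe per-lane rates by generation. Note Gen 3 broke the strict doubling:
# 8 GT/s with 128b/130b encoding delivers roughly 2x Gen 2's payload rate
# without needing 10 GT/s with 8b/10b.
rates = {  # generation -> (approx. spec year, GT/s per lane)
    1: (2003, 2.5), 2: (2007, 5.0), 3: (2010, 8.0),
    4: (2017, 16.0), 5: (2019, 32.0), 6: (2022, 64.0), 7: (2025, 128.0),
}
for gen, (year, gt) in rates.items():
    x16 = gt * 16 / 8  # raw GB/s, one direction, for an x16 link
    print(f"PCIe {gen}.0 ({year}): {gt} GT/s/lane, x16 ~ {x16} GB/s")
```

The seven-year gap between 3.0 (2010) and 4.0 (2017) stands out against the two-to-three-year rhythm everywhere else, which is the point made above.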
 

KananX

Prominent
BANNED
Apr 11, 2022
615
139
590
They don't really take that long.

It took eight years for PCIe 3.0 to get superseded, mainly because it took that long for SSDs to invade datacenters in droves, lighting the fire under engineers' asses to come up with something faster. PCIe 4.0 was far too little, far too late by the time it hit the market, hence its nearly immediate replacement by 5.0 merely two years later, with 6.0 possibly entering the market as soon as next year and 7.0 slated for 2-3 years after that.

When there is an urgent need to get something done for a high-volume market with deep pockets, things can go pretty quickly.
I wasn't talking about servers here, since we don't use them at home, so no, I think my opinion makes a lot of sense. Also concurring with the editor here.
 

InvalidError

Titan
Moderator
That's the point: they don't have them because they're pricey. We've been stuck on 1Gbit on mobos for over a decade; why?
Plenty of reasons, starting with most people having limited use for a LAN beyond internet sharing and no interest in running cat6e+ wiring when MU-MIMO WiFi reigns supreme in this age of portable media consumption.

If I hadn't already wired my apartment for GbE before 11ac, I would probably have gone wireless apart from my ATA and main PC that is 6' from my router.
 
