News PCIe 6.0 moves closer to arriving in the market in 2024: Alphawave demonstrates interoperability

Don't look for PCIe 6.0 in mainstream PCs any time in the near future. Supporting PCIe 6.0 will add cost to both devices and boards. Devices' PHYs will need to handle PAM4 and the additional protocol complexity, while boards will need better signal integrity, as PCIe 6.0 is slightly less noise-tolerant than PCIe 5.0. Considering that it's a stretch to justify having even PCIe 5.0 on consumer platforms, I don't see a case to be made for bringing in 6.0 (at least, not before we transition over to CPUs with on-package DRAM and to using CXL.mem for external expansion).

I said this about PCIe 5.0, but I might actually be right this time!
😅
 
I don't know WTH is going on here because PCIe5 is brand-new and still considered rather exotic. Like, you have to get an "E" version of an AM5 motherboard to have a PCIe5 x16 slot, with non-E motherboards having PCIe4, so how is PCIe6 already even a thing???

Colour me confused. 😳
 
I don't know WTH is going on here because PCIe5 is brand-new and still considered rather exotic.
Standards development usually runs a couple years ahead of full production deployment. In the meantime, test equipment makers need to add support for the new standard, so that new designs & products can be validated before they tape out and ship.

After there's a way to test it, the IP itself has to be designed & validated, which is apparently where we are now with PCIe 6.0. There's still another step to go before the IP is integrated into products and those products are validated and launched.

so how is PCIe6 already even a thing???
The datacenter industry is hungry for more bandwidth. For AI, 800 Gbps networking... that sort of thing.

The PCIe 7.0 standard is already in the works.
 
I don't know WTH is going on here because PCIe5 is brand-new and still considered rather exotic. Like, you have to get an "E" version of an AM5 motherboard to have a PCIe5 x16 slot, with non-E motherboards having PCIe4, so how is PCIe6 already even a thing???

Colour me confused. 😳
For home use and even most workstations, PCIe 5 isn't needed at this time. However, it is VERY MUCH needed in data centers. The use of physical SANs is slowly going away. For one, Fibre Channel isn't keeping up with new storage speeds. The other reason is that cloud providers are all going hyperconverged. Having 24x PCIe 5 SSDs means ~3 Tbps of PCIe bandwidth that needs to be sent out to VMs. PCIe 5 allows for 4x 400Gb networking alongside the 24 SSDs. You can see where doubling that network bandwidth will be very nice.
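For anyone who wants to sanity-check those figures, here's a rough back-of-the-envelope sketch in Python. The ~4 GB/s of usable bandwidth per PCIe 5.0 lane and the x4 link per drive are my assumptions, not exact spec numbers:

```python
# Back-of-the-envelope math for a hyperconverged node (assumed round numbers).
GB_PER_PCIE5_LANE = 4        # ~4 GB/s usable per PCIe 5.0 lane (32 GT/s, minus overhead)
ssd_count = 24
lanes_per_ssd = 4            # typical NVMe SSD link width (assumption)

ssd_bw_gbytes = ssd_count * lanes_per_ssd * GB_PER_PCIE5_LANE   # 384 GB/s
ssd_bw_tbits = ssd_bw_gbytes * 8 / 1000                         # ~3.1 Tbps
print(f"24 NVMe SSDs: ~{ssd_bw_gbytes} GB/s = ~{ssd_bw_tbits:.1f} Tbps")

# Each 400 Gb NIC needs ~50 GB/s, so it fits in a PCIe 5.0 x16 slot (~64 GB/s).
nic_bw_gbits = 4 * 400
print(f"4x 400GbE: {nic_bw_gbits} Gbps = ~{nic_bw_gbits // 8} GB/s")
```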
 
For home use and even most workstations, PCIe 5 isn't needed at this time. However, it is VERY MUCH needed in data centers. The use of physical SANs is slowly going away. For one, Fibre Channel isn't keeping up with new storage speeds. The other reason is that cloud providers are all going hyperconverged. Having 24x PCIe 5 SSDs means ~3 Tbps of PCIe bandwidth that needs to be sent out to VMs. PCIe 5 allows for 4x 400Gb networking alongside the 24 SSDs. You can see where doubling that network bandwidth will be very nice.
I get that, but that's not what's confusing me. PCIe5 only just came out, making PCIe4 the shortest-lived PCI-Express standard. Now it looks like PCIe5 will have an even shorter lifespan than PCIe4.
 
I get that, but that's not what's confusing me. PCIe5 only just came out, making PCIe4 the shortest-lived PCI-Express standard.
Eh, the current pace probably just seems abnormal because the industry stalled out on PCIe 3.0 for so long.

[Attached image: Figure1.png]

Source: https://pcisig.com/blog/evolution-p...eneration-third-decade-and-still-going-strong

IBM had POWER CPUs on the market that supported PCIe 4.0, back in 2018. AMD followed with support for it on Zen 2, in 2019. So, the 2021 introduction of products supporting PCIe 5.0 is pretty much on pace with the standard.

A big factor pushing PCIe 5.0, one that shouldn't be underestimated, is CXL 1.0. It uses the same PHY as PCIe 5.0, and some server CPUs can reconfigure lanes to run in either mode. CXL is a big deal for servers and hyperscalers. I expect CXL 3.0 will also be one of the main drivers of PCIe 6.0 (again, the two share the same PHY specification).

Now it looks like PCIe5 will have an even shorter lifespan than PCIe4.
Not on desktops. As I said, I think PCIe 6.0 will remain server-only, until more drastic changes arrive on the mainstream PC platform. Adding it now would just increase costs, while providing no practical benefits.
 
They already released a PCIe version more than a year ago.

I think Nvidia doesn't really care about PCIe 6.0, since they mostly depend on their own NVLink for connectivity, when it counts.
Point taken, but at $8,000-$9,000 the A100 is not exactly consumer grade. Perhaps in a few years with a lower price and better software we will see more success in the consumer market?
 
Point taken, but at $8,000-$9,000 the A100 is not exactly consumer grade. Perhaps in a few years with a lower price and better software we will see more success in the consumer market?
None of the cards at that level are consumer grade. In fact, starting with the A100, they can't even do proper graphics (the A100 has token graphics capabilities, but I'm not sure if the H100 has any, at all). This basically means you'll never even see those chips on a "Titan" card.

If you want a consumer card, buy one of the RTX models.
 
Don't look for PCIe 6.0 in mainstream PCs any time in the near future. Supporting PCIe 6.0 will add cost to both devices and boards. Devices' PHYs will need to handle PAM4 and the additional protocol complexity, while boards will need better signal integrity, as PCIe 6.0 is slightly less noise-tolerant than PCIe 5.0. Considering that it's a stretch to justify having even PCIe 5.0 on consumer platforms, I don't see a case to be made for bringing in 6.0 (at least, not before we transition over to CPUs with on-package DRAM and to using CXL.mem for external expansion).

I said this about PCIe 5.0, but I might actually be right this time!
😅

I guess it'll come down to how much they think pushing it to consumer platforms will drive sales. Basically, will the allure of 6>5 outweigh the backlash over higher board prices? Personally, I'd take the lower prices instead of something that won't have any tangible benefit to my usage.
 
We can be sure that we won't get more PCIe lanes in home computers, so even a few PCIe 6.0 or 7.0 lanes will help there, because you need a narrower bus to get the same speed.
Just imagine an Nvidia 5090 Ti using a 4-lane-wide bus!
😉
You'd just need this super new and super expensive motherboard to use it at full speed! A win-win situation for the manufacturers. And a year later, a "super" version with an 8-lane-wide bus. Still only half of what a 4090 needs...

Also, more M.2 SSDs without needing more PCIe lanes in the CPU and motherboard. These will come to market faster than most people expect! It can actually save manufacturers money and makes motherboards obsolete faster, so more money for the manufacturers!
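A rough sketch of the "narrower bus, same speed" point, assuming per-lane bandwidth roughly doubles each generation (the round per-lane numbers and the 32 GB/s target are my assumptions, not spec-exact figures):

```python
# Approximate usable GB/s per lane for each PCIe generation (assumed round numbers).
gb_per_lane = {"3.0": 1, "4.0": 2, "5.0": 4, "6.0": 8, "7.0": 16}

target_gbytes = 32   # roughly what a PCIe 4.0 x16 GPU link delivers today (assumption)
for gen, per_lane in gb_per_lane.items():
    lanes = target_gbytes / per_lane
    print(f"PCIe {gen}: x{lanes:g} lanes for ~{target_gbytes} GB/s")
```

So, under those assumptions, a PCIe 6.0 x4 link lands in the same ballpark as a PCIe 4.0 x16 slot.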
 
The most sensible implementation for PCI Express 6.0 is in multi-host platforms like servers, and in very compact PC platforms where only one PCI Express slot (typically x16) running in PCIe 6.0 mode is available for additional expansion. Consumer storage in the NVMe form factor will be stuck in PCI Express 5.0 mode, since the controller would need to be fabricated on an expensive 2 nm node to work reliably.
 
This may be beneficial for data centers that need a lot of bandwidth, but for almost all consumer PCs it is nothing but a white elephant that you pay for. And looking at the current situation, where you pay a premium for PCI-E 5.0-enabled motherboards, it would not be surprising to see another price hike for PCI-E 6.0-capable motherboards.
 
This is entirely what's driving the rapid-fire PCIe standards revisions. Current CPUs are significantly more compute-dense than ever before, so the only thing holding compute back is interconnect.
Yeah, just imagine all the tenants on a 192-core dual-EPYC machine... that's enough compute power to drive a heck of a lot of I/O!

Worse yet, it's not as if you can just pack the machine full of SSDs and be done with it. No... they will be using virtualized storage that sits inside of other machines on the network. All of that disk I/O has to squeeze over the network. That's the sort of thing driving the push for 800 Gbps networking.

To break it down, 800 Gbps = 100 GB/s. PCIe 5.0 x16 = ~64 GB/s. Slight mismatch, there.
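A quick check on that mismatch, again assuming ~4 GB/s of usable bandwidth per PCIe 5.0 lane and double that for 6.0 (my round numbers, ignoring protocol overhead):

```python
# Compare an 800 Gb NIC against x16 slots of two PCIe generations (approximate numbers).
nic_800g_gbytes = 800 / 8        # 800 Gbps ~= 100 GB/s
pcie5_x16_gbytes = 16 * 4        # ~64 GB/s
pcie6_x16_gbytes = 16 * 8        # ~128 GB/s

print(f"800GbE needs ~{nic_800g_gbytes:.0f} GB/s")
print(f"PCIe 5.0 x16: ~{pcie5_x16_gbytes} GB/s -> not enough")
print(f"PCIe 6.0 x16: ~{pcie6_x16_gbytes} GB/s -> comfortable headroom")
```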
 