PCIe 5.0 Ready For Prime Time


Considering PCIe tends to maintain a fair amount of backward and forward compatibility between generations, cards would still be designed to work with power cables for the majority of users, who would still be on motherboards with PCIe 3.0 or earlier.

And due to this multi-generational compatibility, it probably won't matter much whether a motherboard features PCIe 4.0 or 5.0 anytime soon, at least on the consumer hardware side of things, since there isn't any immediate need for even 4.0's bandwidth. Even the performance of 3.0 still isn't a significant limitation for today's consumer hardware. Sure, the fastest NVMe SSDs might be saturating 3.0's bandwidth in synthetic benchmarks and straight file copies, but in most typical real-world usage scenarios, like loading applications and games or saving files, other hardware tends to limit performance more than the bus does, and the bandwidth provided by 3.0 is already past the point of diminishing returns.

Graphics cards are even less limited by 3.0. Even an RTX 2080 Ti only shows about a 3% average performance difference between PCIe 3.0 x16 and x8, and less demanding cards won't even show that. That means even PCIe 2.0 x16, which was on motherboards 11 years ago, still isn't a significant limitation for modern graphics cards.
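
For a quick sanity check on those bandwidth claims, here's a minimal sketch using the published per-lane transfer rates and encoding overheads (real-world throughput runs a bit lower once protocol overhead is counted):

```python
# Back-of-the-envelope PCIe link bandwidth, per generation and lane count.
# Gens 1-2 use 8b/10b encoding; gens 3-5 use 128b/130b.

RATES_GT = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}

def link_gbps(gen: int, lanes: int) -> float:
    """Usable link bandwidth in GB/s (decimal) for a given gen and width."""
    return RATES_GT[gen] * ENCODING[gen] / 8 * lanes

for gen, lanes in [(3, 4), (3, 8), (3, 16), (2, 16), (4, 16), (5, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_gbps(gen, lanes):5.1f} GB/s")

# PCIe 3.0 x4:   3.9 GB/s  <- the NVMe SSD ceiling mentioned above
# PCIe 2.0 x16:  8.0 GB/s  ~= PCIe 3.0 x8, hence the tiny GPU delta
```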

Support for PCIe 4.0 might be nice to have, but its benefits in terms of real-world performance in desktop systems will likely be limited for many years, unless some new bandwidth-hungry usage scenario pops up. 5.0's near-future benefits in the home are even more questionable. It's all a bit like comparing the benefits of 10 gigabit Ethernet over 5 gigabit when typical modern home networks are still only built for 1 gigabit, and the vast majority of internet connections only utilize a fraction of that. If it doesn't cost significantly more to build motherboards and processors for PCIe 5.0 right away, then it might be a reasonable option; otherwise 4.0 should be fine.
 
True. It will take years for hardware to even outgrow 4.0's speeds!
In any case, it will be interesting to see what approach Nvidia and Intel take. To customers, a "bigger" number is always better, even if it means nothing for normal use.
 

I can think of many uses (see the quick lane math in the sketch after this list):
- lower latency, since PCIe transactions can complete 2-4x as quickly
- being able to get away with x4/x8 slots for GPUs (slightly cheaper motherboards and GPUs)
- being able to get away with x2 slots for M.2, so you can have twice as many devices
- only needing two lanes for 40 Gbps Thunderbolt instead of 4-8
- feeding a quad-port USB 3.1 Gen 2 XHCI chip with a single lane
- with all extra IO requiring 1/2 or 1/4 as many lanes, chipsets and CPUs should have enough HSIO lanes to power all motherboard features instead of forcing you to choose between PCIe slot #3, USB 3.1 Gen 2 ports 5&6, M.2 slot #2, etc.
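
Here's that lane math as a minimal sketch; the example payloads and the round-up-to-a-power-of-two slot rule are my assumptions, while the per-lane rates are the spec values:

```python
import math

# How many lanes each PCIe generation needs to carry a given payload,
# rounded up to a realistic power-of-two slot width.
# Usable per-lane rates assume 128b/130b encoding (gens 3-5).

LANE_GBPS = {3: 8.0 * 128 / 130, 4: 16.0 * 128 / 130, 5: 32.0 * 128 / 130}

def slot_width(payload_gbps: float, gen: int) -> int:
    """Smallest power-of-two lane count that carries the payload."""
    lanes = math.ceil(payload_gbps / LANE_GBPS[gen])
    return 1 << (lanes - 1).bit_length()

for name, gbps in [("Thunderbolt 3 (40 Gbps)", 40.0),
                   ("100 Gbps NIC", 100.0),
                   ("10 Gbps USB 3.1 Gen 2 port", 10.0)]:
    widths = ", ".join(f"gen{g} x{slot_width(gbps, g)}" for g in (3, 4, 5))
    print(f"{name}: {widths}")

# Thunderbolt 3 (40 Gbps): gen3 x8, gen4 x4, gen5 x2
# 100 Gbps NIC: gen3 x16, gen4 x8, gen5 x4
# 10 Gbps USB 3.1 Gen 2 port: gen3 x2, gen4 x1, gen5 x1
```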
 
AMD only promised AM4 support through 2020, which only means Ryzen 4000. They will probably move to an AM5 socket in 2021 with Ryzen 5000. The only downside of changing a socket is angering consumers, and nobody can really be mad at AMD for retiring a socket after four generations, going back to Excavator (4th-gen Bulldozer) in 2016.

I'm hoping I can hold out with my i7-2600K for another 2 1/2 years until the next AMD socket comes out, but I'm getting sick of not being able to boot from NVMe. I certainly don't want to buy into the last architecture on AM4.
 


That was a complete architectural change with no cross-compatibility (with the exception of still running 32-bit code). With PCIe, it's already established. The argument is basically whether or not increasing the speed limit on an existing highway is justified. I'd say bring it, because it certainly won't hurt anything.
 

Windows XP 64-bit worked perfectly fine apart from spotty driver support for consumer-level devices. Vista 64-bit worked perfectly fine for me too; I never had any real issues with it. Vista also brought most of the kernel-level and driver-model changes still in use today. Windows 7 was better largely because hardware manufacturers had the whole Vista cycle to get up to speed with Vista's changes in their hardware, drivers, and related utilities, along with Microsoft tweaking its side of that equation.
 
The power of PCIe v5 and Gen-Z is that the limited number of PCIe lanes (or their Gen-Z equivalents) can be spread across more discrete components. Two PCIe v4 lanes equal four PCIe v3 lanes, but PCIe v5 needs only one lane to equal four v3 lanes. That means more direct-attached persistent or non-volatile memory (think flash storage) per CPU socket. The low-level signaling is also cleaner in PCIe v5 than v4, so the first year or so of PCIe v4 devices may bring stability headaches until you age them out.
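
To put rough numbers on the "more devices per socket" point, here's a minimal sketch; the 20-lane budget is a hypothetical HSIO pool, not any specific CPU:

```python
# Same lane budget, more devices: how many flash devices a fixed pool of
# lanes can feed at "PCIe 3.0 x4-class" bandwidth (~4 GB/s each).
# Lane counts per device follow the halving described above: x4 -> x2 -> x1.

LANE_BUDGET = 20                       # hypothetical CPU HSIO lane pool
LANES_PER_DEVICE = {3: 4, 4: 2, 5: 1}  # lanes for ~4 GB/s at each gen

for gen, lanes in LANES_PER_DEVICE.items():
    print(f"PCIe {gen}.0: x{lanes} per device -> {LANE_BUDGET // lanes} devices")

# PCIe 3.0: x4 per device -> 5 devices
# PCIe 4.0: x2 per device -> 10 devices
# PCIe 5.0: x1 per device -> 20 devices
```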
 
Hopefully one of two things will happen: add-in cards such as RAID controllers, 100 Gbps NICs, and gaming video cards will start coming in x4 varieties, or Intel finally pulls the stick out of its collective ass and doubles the number of PCIe lanes on standard workstation motherboards. They can't keep up with AMD's multiprocessor tech or fabrication, so they might as well grab some low-hanging fruit.
 
"Intel finally pulls the stick out of its collective ass and doubles the number of PCIe lanes on standard workstation motherboards."
Intel already increased some of its high-end workstation CPUs to 64 lanes, such as the models used in the new Mac Pro.

Basically, those are originally server chips, sitting in sockets originally designed to support up to 8-CPU motherboards. To achieve that, they used up to three UPI links. However, the workstation variants of those CPUs are limited to uniprocessor configurations, so I'm guessing Intel repurposed those pins to expose an extra x16 worth of lanes. Internally, those x16 lanes were originally reserved for the Omni-Path controller (you may recall that some variants had a fiber-optic link on the CPU package).