News: PCIe 6.0 SSDs for PCs won't arrive until 2030 — costs and complexity mean PCIe 5.0 SSDs are here to stay for some time

According to "news sources" PCI-E 4.0 won't be available until 2028 at the earliest....and pci-e 5 hardware MIGHT be available to enthusiasts by 2030.....

This smacks of saying "just spend money NOW, don't hold onto it". I wonder who would benefit from buying a journalist/reviewer to say this?
 
I'd say PCIe 4.0 arrived about on time. It came to consumer platforms 7 years after PCIe 3.0. NVMe SSDs were just starting to near the limits of PCIe 3.0, as were GPUs.

When PCIe 4.0 hit, it took SSDs more than a year to actually show a decent improvement from using it! Even high-end GPUs using PCIe 4.0 showed maybe a couple percent benefit, and GPUs as powerful as the RTX 4090 were no exception.

[Chart: relative GPU performance at 1920×1080 vs. PCIe link speed, from TechPowerUp's RTX 4090 PCIe scaling review]

Source: https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-pci-express-scaling/28.html

When you look at 1% minimums, the assessment doesn't change much. Here's data from their testing of the RTX 5090 on PCIe 5.0:

If you compare the PCIe 4.0 and 3.0 speeds in that graph, 4.0 is a whopping 2.6% faster than 3.0!
:D

I think we can safely say that PCIe 5.0 was not and is not necessary, on consumer machines!

PCIe 4.0 arguably solves the problem of making x8 or x4 lanes viable for a dGPU (depending on the performance tier), whether because the manufacturer cut the lane count or because you want to use another PCIe slot or two. However, we should consider that newer PCIe standards require more silicon die area for the PCIe controller, which makes this reduction in lane count a somewhat self-fulfilling prophecy.
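To illustrate the lane-count trade-off, here's a quick back-of-the-envelope sketch. The per-lane figures are my own approximations of usable bandwidth after encoding overhead, not anything from the article:

```python
# Rough per-lane PCIe bandwidth (one direction), approximate usable figures
# after encoding overhead. These are my own ballpark numbers.
PER_LANE_GBPS = {
    "3.0": 8  * (128 / 130) / 8,   # 8 GT/s, 128b/130b  -> ~0.99 GB/s per lane
    "4.0": 16 * (128 / 130) / 8,   # 16 GT/s, 128b/130b -> ~1.97 GB/s per lane
    "5.0": 32 * (128 / 130) / 8,   # 32 GT/s, 128b/130b -> ~3.94 GB/s per lane
    "6.0": 64 / 8,                 # 64 GT/s PAM4/FLIT  -> ~8 GB/s before overhead
}

def bandwidth(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a generation/lane-count combo."""
    return PER_LANE_GBPS[gen] * lanes

# A 4.0 x8 link lands in the same ballpark as a 3.0 x16 link, which is why
# cutting a dGPU down to x8 only really "works" once 4.0 is available.
print(f"PCIe 3.0 x16 ~ {bandwidth('3.0', 16):.1f} GB/s")
print(f"PCIe 4.0 x8  ~ {bandwidth('4.0', 8):.1f} GB/s")
print(f"PCIe 4.0 x4  ~ {bandwidth('4.0', 4):.1f} GB/s")
print(f"PCIe 5.0 x8  ~ {bandwidth('5.0', 8):.1f} GB/s")
```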

The only reason I think we got PCIe 5.0 as soon as we did is that Intel was feeling butt-hurt from getting spanked by AMD and thought they'd up the ante. It's pure specsmanship. It's funny to me how, until this year, there were no consumer PCIe 5.0 add-in cards. The only place consumers were using PCIe 5.0 was in the M.2 slots, which Intel ironically didn't even support until Arrow Lake... and the irony doesn't stop there, as Arrow Lake has its own PCIe issues!

In fact, I'd go so far as to say that the only reason RTX 5000 even supports PCIe 5.0 is for AI! And AMD just added it to try and match Nvidia. Neither of them needed it, for gaming purposes.

If we extrapolate from what happened with PCIe 3.0 -> 4.0, I'd say we shouldn't have had PCIe 5.0 until next year (2019 + 7 = 2026). That would push PCIe 6.0 out to 2033. The figure of 2030 puts us roughly halfway between a 7-year follow-on from the actual introduction of PCIe 5.0 (2021 + 7 = 2028) and where it should've been (2033). Sounds okay to me.

Then again, it's hard to know exactly what the computing world will look like, in 5 years. Maybe CPUs will all have on-package memory, by then, and CXL memory expansions will be all the rage. That would create a new need for faster PCIe/CXL speeds, and PCIe 6.0 / CXL 3.0 would be the antidote. It'll be interesting to watch.
 
PCIe 6.0 SSDs should come fast and use 2 lanes instead of four... we don't have enough lanes from non-server-grade CPUs...
IMO, the only thing they need to upgrade is the PCIe speed between the CPU and chipset. Right now, it's 4.0 for both Intel and AMD, though Intel uses x8 and AMD uses only x4. So, AMD needs to do this more than Intel, but I wouldn't mind seeing them both do it, since it's the one place where PCIe 5.0 could have the greatest impact, today.
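To put rough numbers on that chipset link (my own back-of-the-envelope math, using approximate per-lane PCIe bandwidth; the x8/x4 widths are as stated above):

```python
# Approximate one-direction bandwidth of the CPU-to-chipset uplink for
# different link configurations. Per-lane figures are rough approximations.
GBPS_PER_LANE = {"4.0": 1.97, "5.0": 3.94}   # approx GB/s per lane

links = {
    "Intel today (PCIe 4.0 x8)":       ("4.0", 8),
    "AMD today (PCIe 4.0 x4)":         ("4.0", 4),
    "Hypothetical PCIe 5.0 x4 uplink": ("5.0", 4),
    "Hypothetical PCIe 5.0 x8 uplink": ("5.0", 8),
}

for name, (gen, lanes) in links.items():
    print(f"{name}: ~{GBPS_PER_LANE[gen] * lanes:.1f} GB/s")

# Everything hanging off the chipset (extra M.2 drives, USB, networking,
# SATA) shares this one link, which is why it's the place where a PCIe 5.0
# upgrade would matter most.
```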

With dGPUs using ever greater amounts of power, it's actually kind of hard to find a dGPU that uses only 2 slots. From what I've seen, most of them use at least 2.5 slots, at the midrange tier and above. So, you don't even have very many PCIe slots available, in a decently-powered desktop. Go ahead and run your dGPU at PCIe 5.0 x8 (you'll never notice the difference!). That will give you another x8 or two x4 slots that are CPU-connected. Then, if the chipset is PCIe 5.0-connected, that's enough for anything else you'd care to plug in.

Also, don't forget the "sticker shock" people had over motherboard prices, when PCIe 5.0 and DDR5 first hit the market. As the article says, PCIe 6.0 is going to ratchet up board costs yet again. Pushing higher data rates isn't free, at this point. The need for better signal-to-noise ratios will require more PCB layers and more expensive materials. Node shrinks won't nullify this, either.
 
The article said:
As PCIe data rates increase, signal loss, noise, and impedance reduce the allowable copper trace length between the root complex and endpoints. At 16 GT/s (PCIe 4.0), traces can reach up to 11 inches with a 28 dB loss budget, but at 64 GT/s (PCIe 6.0), this drops to 3.4 inches with a 32 dB budget, depending on PCB materials and conditions, according to an Astera Labs presentation.
BTW, you didn't mention anything about power or the additional silicon area required by the newer PCIe standard. 6.0 has new features that, along with its modulation scheme, should significantly increase the die area required for PCIe controllers.

If you add up all the area for the PCIe PHYs, buffers, etc. across both Arrow Lake's I/O die and SoC die, it looks to me like about the same area as 3 P-cores!
Granted, they use a larger node, but still... that's just PCIe 4.0 and 5.0. It'll be even worse when you look at 6.0 and add in the additional CXL protocol support that I'm sure we'll have by then.
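Coming back to the quoted trace-length figures for a moment, here's a quick back-of-the-envelope on what they imply per inch of trace. Only the dB and inch values come from the quote; the arithmetic is mine:

```python
# Back-of-the-envelope on the Astera Labs figures quoted above: the implied
# channel loss per inch for each budget/reach combination.
cases = {
    "PCIe 4.0 (16 GT/s)": {"budget_db": 28, "reach_in": 11.0},
    "PCIe 6.0 (64 GT/s)": {"budget_db": 32, "reach_in": 3.4},
}

loss_per_inch = {name: c["budget_db"] / c["reach_in"] for name, c in cases.items()}

for name, lpi in loss_per_inch.items():
    print(f"{name}: ~{lpi:.1f} dB per inch of trace")

pcie4 = loss_per_inch["PCIe 4.0 (16 GT/s)"]
pcie6 = loss_per_inch["PCIe 6.0 (64 GT/s)"]
print(f"PCIe 6.0 burns its loss budget ~{pcie6 / pcie4:.1f}x faster per inch")
# Roughly 2.5 dB/in vs 9.4 dB/in, which is why boards need better materials,
# more layers, or retimers/redrivers just to keep the same reach.
```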
 
  • Like
Reactions: Makaveli
This smacks of saying "just spend money NOW, don't hold onto it". I wonder who would benefit from buying a journalist/reviewer to say this?
This is something that came to light during an interview with the CEO of a company that designs SSD controllers. So, it's from a good source.

Just because you can formulate some kind of conspiracy in your mind doesn't mean there actually is one.
 
PCIe 6.0 SSDs should come fast and use 2 lanes instead of four... we don't have enough lanes from non-server-grade CPUs...
Honestly, all we need is motherboards with M.2 slots that support two-lane PCIe connections off the chipset. For most people the bandwidth would be plenty, and the advantages from better NAND/controllers would still be there. That would allow for twice as many M.2 slots off the chipset, which is going to max out the physical space on the vast majority of consumer motherboards.
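As a rough sketch of what x2 links would actually give up (approximate per-lane figures of my own, not from the post):

```python
# Approximate one-direction bandwidth of a two-lane M.2 link versus today's
# x4 slots. Per-lane figures are rough approximations after encoding overhead.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.97, "5.0": 3.94}

configs = [("4.0", 4), ("4.0", 2), ("5.0", 2), ("3.0", 4)]
for gen, lanes in configs:
    print(f"PCIe {gen} x{lanes}: ~{GBPS_PER_LANE[gen] * lanes:.1f} GB/s")

# A 5.0 x2 slot matches a 4.0 x4 slot, and even 4.0 x2 (~3.9 GB/s) is well
# beyond what most desktop workloads actually pull from an SSD.
```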
 
  • Like
Reactions: Notton and bit_user
Still waiting for the price of 4TB/8TB/16TB SSDs to come down.
Can we at least reach price parity between 2TB SSD and 2.5" HDD again?
There've been a lot of denser NAND chips announced, but I think they're still in the pipeline. The 2022-2023 NAND market crash did a lot of damage to the industry and AI has caused havoc since then.

It's too bad the U.2 form factor didn't catch on among consumers, because then SSD makers could just stuff a 2.5" enclosure with far more chips than they can currently fit on an M.2 2280 board.
 
It's too bad the U.2 form factor didn't catch on among consumers, because then SSD makers could just stuff a 2.5" enclosure with far more chips than they can currently fit on an M.2 2280 board.
This, so much this. Higher-capacity M.2 drives always have to use the latest NAND, which always carries the highest price. Even on a single-sided 2.5" drive, it wouldn't be hard to fit 8 NAND stacks.
 
  • Like
Reactions: bit_user
IMO, the only thing they need to upgrade is the PCIe speed between the CPU and chipset. Right now, it's 4.0 for both Intel and AMD, though Intel uses x8 and AMD uses only x4. So, AMD needs to do this more than Intel, but I wouldn't mind seeing them both do it, since it's the one place where PCIe 5.0 could have the greatest impact, today.

With dGPUs using ever greater amounts of power, it's actually kind of hard to find a dGPU that uses only 2 slots. From what I've seen, most of them use at least 2.5 slots, at the midrange tier and above. So, you don't even have very many PCIe slots available, in a decently-powered desktop. Go ahead and run your dGPU at PCIe 5.0 x8 (you'll never notice the difference!). That will give you another x8 or two x4 slots that are CPU-connected. Then, if the chipset is PCIe 5.0-connected, that's enough for anything else you'd care to plug in.

Also, don't forget the "sticker shock" people had over motherboard prices, when PCIe 5.0 and DDR5 first hit the market. As the article says, PCIe 6.0 is going to ratchet up board costs yet again. Pushing higher data rates isn't free, at this point. The need for better signal-to-noise ratios will require more PCB layers and more expensive materials. Node shrinks won't nullify this, either.
IMO, M.2 slots should be for notebooks only. Desktop SSDs should be boxed and cable-connected, in a 2.5" form factor or smaller. And if they cut the SSD link down to two lanes only, we could have several ports and thinner cables, and no need for space on the motherboard for more long M.2 SSDs.
Moreover, cooling a boxed SSD is way easier than an onboard M.2 SSD.
 
If you compare the PCIe 4.0 and 3.0 speeds in that graph, 4.0 is a whopping 2.6% faster than 3.0!
:D

I think we can safely say that PCIe 5.0 was not and is not necessary, on consumer machines!
It can be necessary... when the consumer hardware is compromised:

NVIDIA GeForce RTX 5060 Ti with 8GB memory sees up to 10% performance loss when switching to PCIe 4.0
And this is also reflected in the benchmarks: The GeForce RTX 5060 Ti 8 GB is, on average, 14 percent (average FPS) to 17 percent (percentile FPS) faster with PCIe 5.0 than with PCIe 4.0. In individual cases, the impact is again much greater. Not in scenarios where the graphics card is already unplayable with PCIe 5.0: unplayable remains unplayable.
 
IMO, M.2 slots should be for notebooks only. Desktop SSDs should be boxed and cable-connected, in a 2.5" form factor or smaller. And if they cut the SSD link down to two lanes only, we could have several ports and thinner cables, and no need for space on the motherboard for more long M.2 SSDs.
Moreover, cooling a boxed SSD is way easier than an onboard M.2 SSD.

So while this sounds like a good idea, you need to realize why that's not possible. What we call M.2 is really just a horizontally oriented PCIe x4 slot. You can plug an NVMe drive directly into PCIe and it'll work. The high signaling rates demand very specific lane lengths and power. Think of those PCIe GPU vertical-mount cables.

The caveat is that there is a way, and it's done in data centers. You use an optical transceiver to convert the PCIe signal to light and send it over fiber to another transceiver that uses external power and converts it back to electricity. This is kinda expensive and not something I expect in consumer products.
 
  • Like
Reactions: Rob1C
Interesting how the last SSD article published was about Nvidia wanting much faster SSDs, while this article says Intel and AMD aren't interested in bringing the next, faster spec to the consumer market quickly.
 
So while this sounds like a good idea, you need to realize why that's not possible. What we call M.2 is really just a horizontally oriented PCIe x4 slot. You can plug an NVMe drive directly into PCIe and it'll work. The high signaling rates demand very specific lane lengths and power. Think of those PCIe GPU vertical-mount cables.

The caveat is that there is a way, and it's done in data centers. You use an optical transceiver to convert the PCIe signal to light and send it over fiber to another transceiver that uses external power and converts it back to electricity. This is kinda expensive and not something I expect in consumer products.
Sure seems like you're actively trying to be wrong here. U.2 predates M.2 and has scaled with every revision of PCIe to date. These do not need to be connected to a backplane to work, and the cable lengths are up to around 2 ft. They also absolutely do not use optical connections, either.

It's entirely possible this won't be the case for PCIe 6.0+, but that isn't here today and won't be any time soon.
 
  • Like
Reactions: bit_user
Interesting how the last SSD article published was about Nvidia wanting much faster SSDs, while this article says Intel and AMD aren't interested in bringing the next, faster spec to the consumer market quickly.
AMD/Intel have no interest because it will likely raise motherboard costs for nothing and won't be viable for mobile. Both companies are beholden to OEMs, as that's where the majority of their consumer sales are. The reason Nvidia wants much faster SSDs is 100% AI and nothing else, so it doesn't matter for the consumer market.
 
Sure seems like you're actively trying to be wrong here. U.2 predates M.2 and has scaled with every revision of PCIe to date. These do not need to be connected to a backplane to work, and the cable lengths are up to around 2 ft. They also absolutely do not use optical connections, either.

It's entirely possible this won't be the case for PCIe 6.0+, but that isn't here today and won't be any time soon.
Do not be rude.

U.2 and M.2 are not the same, nor interchangeable. The post was about M.2 and doing a long link connection, something it was not designed to do. PCIe 5.0 allows for slightly over two feet of length before the signal has to be terminated. For consumer systems this means you have about two feet from the CPU, since that is where our PCIe 5.0 bus is terminated. Enterprise systems will use PCIe switch chips, and the cable only needs to go from near that switching chip to either the device or the backplane, where another switching chip exists to regenerate the signal; this is basically what U.2 is doing. A serial signal is many times easier to shield against EMI than a parallel one, so you need to make more expensive cables and the specification needs to allow for it. PCIe 6.0 is even shorter and that involves a whole other set of issues.

So yes, a cheap, SATA-like PCIe 6.0 x2 cable for multiple NVMe drives on a consumer system isn't physically possible. Enterprises are willing to pay a lot more for solutions that work around these limitations; Tom's even did an article not too long ago about it.

https://www.tomshardware.com/deskto...ctions-pcie-70-versions-are-under-development
 
The post was about M.2 and doing a long link connection, something it was not designed to do.
There are many such cabling products which do that:

PCIe 5.0 allows for slightly over two feet of length before the signal has to be terminated.
Interesting data point, but we weren't necessarily talking about PCIe 5.0. Also, it's contradicted by the article you linked, which says CopprLink (internal) can go 1m at PCIe 5.0 or 6.0 speeds:

[Image: CopprLink cable reach figures from the linked article]


For consumer systems this means you have about two feet from the CPU, since that is where our PCIe 5.0 bus is terminated.
I think the assumption you're making is that there's nothing between the CPU and the cable connection, but this wouldn't be the case. I think you'd usually have retimers.

Enterprise systems will use PCIe switch chips, and the cable only needs to go from near that switching chip
PC motherboard chipsets contain a PCIe switch. So, if it's chipset-connected and not CPU-direct, then it could simply be proximate to the chipset.

to either the device or the backplane, where another switching chip exists to regenerate the signal; this is basically what U.2 is doing.
You don't need an entire switch chip, just to recover signal integrity. Redrivers and retimers are far simpler and cheaper solutions.

Also, are you saying that U.2 is only for connecting to backplanes? Because there are U.2 cables that plug directly into drives. Here's a PCIe 5.0-compatible MCIO to U.2 cable that's 0.5 m:

A serial signal is many times easier to shield against EMI than a parallel one, so you need to make more expensive cables and the specification needs to allow for it.
PCIe is serial. Each lane consists of an independent pair of serial links, one in each direction.

So yes, a cheap, SATA-like PCIe 6.0 x2 cable for multiple NVMe drives on a consumer system isn't physically possible.
So, here are some PCIe 6.0 copper cables of up to 0.5m:

No, they're not cheap, but also those are a pretty long way from being commodity, at this point. I remember when SCSI cables, of various flavors, used to cost some serious money.

Anyway, I know that one poster mentioned PCIe 6.0 x2, but that's clearly off the table, for the foreseeable future. None of the rest of us are talking about it. My U.2 drives are both PCIe 4.0.
 
  • Like
Reactions: thestryker