News Other PCIe 5.0 SSDs are Also Crashing Instead of Throttling

PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.

What we need is further iteration and refinement on PCIe 3.0 and 4.0 controllers.

The only persuasive argument I can see for PCIe 5.0 is if you wanted to use it at just x2, so you could pack in more drives. But, that's not how I think most people are using them.
 
PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.
The first generation or two of anything going into a new standard in the consumer space is almost always plagued with teething issues on top of having little to no benefit over cheaper, more mature stuff from the previous generation in most everyday uses. For normal people, it is usually much better to skip them.

PCIe 4.0 SSDs were a hot mess too for the first two years. Now they are the mature low-power budget-friendly option that obsolesced 3.0 drives.
 
PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.

What we need is further iteration and refinement on PCIe 3.0 and 4.0 controllers.

The only persuasive argument I can see for PCIe 5.0 is if you wanted to use it at just x2, so you could pack in more drives. But, that's not how I think most people are using them.
We finally had an adequate speed update with PCIe 3.0 drives coming from SATA 6G. Then they released PCIe 4.0 with drives that can hit a ludicrous 7300 MB/s, which gets toasty but is easily manageable with passive heatsinks. With all the heat PCIe 5.0 brings to the table, I don't feel like it's a wise direction, yet, for the consumer industry. Kind of going off what you're saying, I would have loved to see x2-linked NVMe drives for PCIe 5.0 and been able to cram more drives into the same space, even if that meant using risers, PCIe AIBs, or even having the slots vertical to the board instead of horizontal to allow for more drives. Me personally, I can never have enough storage space. I just upgraded my HDD storage pools to 80TB each (currently at 50% capacity but gobbling up more every day) for redundancy.

Ultimately, though, once they get heat under control (i.e. active cooling being the exception rather than the rule), I think PCIe 5.0 drives will be ready for the mainstream. And honestly, had they moved consumer boards to x2 links for 5.0 drives this generation, then by the time we hit PCIe 6.0, which at x2 would in theory produce about the level of heat we are getting now with 5.0 x4 links, manufacturers would have had time in the server space to learn to cool these things. I mean, I love more speed and new things, but the direction we're going with heat on these SSDs has me a little hot under the collar, to say the least. But at the end of the day, progress is progress. Happy to see it, even if I think there would have been a better way.
 
Who would be so technically challenged as to run a PCIe 5 SSD without a heatsink? It's technically ignorant.
Don't blame users.

M.2 is a ridiculously flawed design for desktop PC.

Who the hell decided this hillbilly M.2 design, with tiny screws, pasted-on heatsinks, sitting in between a hot CPU and GPU, was a good idea on PC? You literally have to take off the CPU cooler and heatsink on some PCs to access the M.2 slot.

I don't even want to know how many people dropped that tiny M.2 screw into their PSU. The fact that M.2 needs tiny standoffs is icing on the cake. Awful design.

M.2 was designed for hyper thin notebooks where people don't touch the thing. It's not a desktop design.

U.2, like servers use, should be standard on desktop PCs.

M.2 sucks.
 
U.2, like servers use, should be standard on desktop PCs.

M.2 sucks.
U.2 sucks for consumers: you need $25-30 cables to connect each drive to the motherboard, you need to run power cables to each drive, and the motherboard needs the added cost of PCIe retimers for signals to survive the trip from the CPU to the U.2 board connector through that cable. With M.2, you just slap a connector next to the CPU or chipset and call it a day, saving ~$30 per port. The drives themselves are also $5-10 cheaper, since they don't require a separate housing and can use a simple card-edge connector built from the PCB instead of as an additional part.
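A rough tally of those numbers, as I read them, per drive/port. The cable and drive-premium ranges come from the post above; treating the "~$30 per port" as the board-side (retimer/connector) delta is my own assumption, so the total is only a ballpark.

```python
# Ballpark per-drive cost delta for U.2 vs. M.2, using the ranges quoted
# above. The board-side figure is an assumed reading of "~$30 per port",
# so treat the result as illustrative only.
cable = (25, 30)         # dedicated U.2 cable per drive
board_side = (30, 30)    # retimer/connector delta per port (assumed reading)
drive_premium = (5, 10)  # housing/connector premium on the drive itself

low = cable[0] + board_side[0] + drive_premium[0]
high = cable[1] + board_side[1] + drive_premium[1]
print(f"Extra cost per U.2 drive vs. M.2: roughly ${low}-${high}")
```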

If you don't like where M.2 slots are on typical motherboards, then petition motherboard manufacturers to give you extra PCIe slots instead and slap M.2 SSDs on those.

The vast majority of PC users likely don't even open their PCs for cleaning and won't care where the SSDs are located. They would care that a PC with slightly more serviceable SSD slots cost $50-100 more than an otherwise same-spec system with motherboard-mounted M.2 slots.
 
PCIe 4.0 SSDs were a hot mess too for the first two years. Now they are the mature low-power budget-friendly option that obsolesced 3.0 drives.
Not sure about the "low-power" part, though. I think they're still higher-power than most PCIe 3.0 drives.

I just bought an SK Hynix P31 Gold, and the professional reviews I read all basically gushed about how it never encountered thermal throttling. And it's just a PCIe 3.0 drive. Compared with some of the more power-efficient PCIe 4.0 drives, I see that active power is comparable (a folder-copy use case, so mostly a sequential, QD=1 test), but their idle power is still about twice as high as the P31 Gold's.
 
I'm fine buying into mature, proven stuff. My last attempt at buying something new and somewhat exciting was an Intel A750, and all I got was random crashing under all circumstances for two days.
You also bought an "open box" unit. Don't leave that part out. It's unknowable whether any of the problems you experienced were due to actual hardware problems with that unit.

For my money, "open box" items usually aren't marked down enough to be worth the potential for problems.
 
Tom's Hardware is currently testing the new firmware that fixes the issue. Manufacturers have already started to load the new firmware into their SSD Toolbox programs so end users can enable a shutdown fix. You can read about it here:


To be very clear, E26 was designed from the start to be used with a heatsink. The product manuals state that. Bare drives were developed and released to be used in conjunction with the motherboard's Gen5 heatsink. If your board does not ship with a heatsink, you can buy a drive that ships with one. Not using a heatsink with E26 is user error.
 
You also bought an "open box" unit. Don't leave that part out. It's unknowable whether any of the problems you experienced were due to actual hardware problems with that unit.

For my money, "open box" items usually aren't marked down enough to be worth the potential for problems.
I am actually OK with open box if the markdown is worthwhile. BUT I don't know if I would have gone with a first-gen product from a 'new' player, relative to GPUs, for exactly the reason of not knowing whether it's 'its' hardware, a software issue, or just these cards in general when something goes wrong. But for budget shoppers, I do get trying to get more for your money, so to speak. Open box has certainly allowed me to step up a product tier in the past when money was tight. You do have to be aware, though: get it working in the first 72 hours, give or take, and if it doesn't work properly, you'll need to return it while your window to do so is still open. Open box always comes with a certain level of risk/hassle, that much is for sure.
 
To be very clear, E26 was designed from the start to be used with a heatsink. The product manuals state that. Bare drives were developed and released to be used in conjunction with the motherboard's Gen5 heatsink. If your board does not ship with a heatsink, you can buy a drive that ships with one. Not using a heatsink with E26 is user error.
Yup, I take the point. My comments weren't actually in reference to this particular "issue".

Chris, are you aware of any riser cards or any other ways of using PCIe 5.0 SSDs at x2, as a way of packing more drives into a PC?

Do you have any data on what it would do to power consumption to use these drives at PCIe 5.0 x2? Would it have a measurable impact?
 
Chris, are you aware of any riser cards or any other ways of using PCIe 5.0 SSDs at x2, as a way of packing more drives into a PC?

Do you have any data on what it would do to power consumption to use these drives at PCIe 5.0 x2? Would it have a measurable impact?
There are PCIe packet switches that will give, say, x2 lanes to each SSD/M.2 slot, although not really for Gen5 in the consumer space (Sabrent sells a Gen4 one IIRC).

You can reduce power by throttling link speed (Gen) or link width (lanes). Reducing width is more effective in lowering power consumption. (if you need sources and details on this, you will have to find me on reddit/discord/etc)
 
You can reduce power by throttling link speed (Gen)
Well, yes. That rather defeats the point, however. It'd be a waste to pay for a PCIe 5.0 drive with the plan only to run it at PCIe 3.0 or 4.0 speed.

or link width (lanes). Reducing width is more effective in lowering power consumption.
If keeping the same speed, it's not obvious to me how much this helps. That's why I asked for data on it.
 
PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.

What we need is further iteration and refinement on PCIe 3.0 and 4.0 controllers.

The only persuasive argument I can see for PCIe 5.0 is if you wanted to use it at just x2, so you could pack in more drives. But, that's not how I think most people are using them.
One of the main reasons I try to avoid first-gen products.

I will look at these again after Zen 5 is out and the motherboard refresh is out for AM5. We should have 2nd gen controllers for the drives by then.
 
Well, yes. That rather defeats the point, however. It'd be a waste to pay for a PCIe 5.0 drive with the plan only to run it at PCIe 3.0 or 4.0 speed.


If keeping the same speed, it's not obvious to me how much this helps. That's why I asked for data on it.
Right, but reducing link speed is what they are doing here as a solution. If you meant it in another context, then that's fine; I didn't really have time to analyze the thread. I was referring to methods of dynamic throttling, not a permanent state, and reducing link width is generally more efficient. I don't have data on Gen5; for ≤Gen4, halving the lanes doesn't quite halve the power draw. Going down from x4 to x2, rather than dropping a generation, is around ¼ more efficient.
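A minimal back-of-envelope sketch of that comparison, in Python. The baseline wattage and both scaling factors are made-up placeholders (the post above gives no exact figures); it only illustrates the shape of the claim that going from x4 to x2 ends up roughly a quarter more efficient than dropping a link generation.

```python
# Hypothetical numbers only -- illustrating the width-vs-generation claim,
# not measured data. Real interface power varies by controller and PHY.
BASELINE_W = 2.0            # assumed Gen4 x4 link/interface power, watts

HALVE_WIDTH_FACTOR = 0.55   # assumed: Gen4 x2 doesn't quite halve power
DROP_GEN_FACTOR = 0.70      # assumed: Gen3 x4 keeps all four lanes powered

gen4_x2 = BASELINE_W * HALVE_WIDTH_FACTOR
gen3_x4 = BASELINE_W * DROP_GEN_FACTOR

saving = 1 - gen4_x2 / gen3_x4
print(f"Gen4 x2: {gen4_x2:.2f} W, Gen3 x4: {gen3_x4:.2f} W")
print(f"Dropping width saves ~{saving:.0%} more than dropping a generation")
```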
 
Not sure about the "low-power" part, though. I think they're still higher-power than most PCIe 3.0 drives.
About 1W higher while active, and practically the same ~750mW at idle for the more power-efficient models, since the interface should drop down to the same low-power states between bursts regardless of its maximum speed, unless power-saving features are disabled, limited, or not working correctly.

On my GTX1050, I was seeing the link speed go anywhere from 1.0 to 3.0 speed depending on whether I was doing something. My RX6600 is either stuck at 4.0 no matter what (could be part of why it is drawing 20W idle instead of the 15-16W it is supposed to) or changes aren't being reported in SMB data.
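For anyone who wants to check this sort of thing themselves on Linux, here's a minimal sketch that reads the link-state attributes the kernel exposes in sysfs. The PCI address is a placeholder; substitute your own GPU's address as shown by lspci.

```python
# Minimal sketch: print a PCIe device's reported link speed/width from sysfs.
# Linux only; the address below is a placeholder, not a real device.
from pathlib import Path

DEVICE = Path("/sys/bus/pci/devices/0000:01:00.0")  # replace with your GPU

for attr in ("current_link_speed", "max_link_speed",
             "current_link_width", "max_link_width"):
    node = DEVICE / attr
    value = node.read_text().strip() if node.exists() else "not exposed"
    print(f"{attr}: {value}")
```

Polling current_link_speed while the GPU sits idle versus under load is one way to see whether the link is actually downshifting or staying pinned at its maximum rate.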

You also bought an "open box" unit. Don't leave that part out. It's unknowable whether any of the problems you experienced were due to actual hardware problems with that unit.
Between customer and smaller tech channel reviews though, I'm not the only person who ran into what I'd call chronic stability issues with it months into 2023, which gives me some basis to believe my experience wasn't an isolated incident.
 
So... by what I'm 'picking up':
Once the 'less-desirable' Gen5 drives end up on sale, they'll be some of the fastest Gen4 drives available.

As Gen5 drives, they're running TDH; I wonder how they'd do under full load, un-heatsink'd, at Gen4 x4?
 
When there is an increase in the flash bus speed, you will see a performance increase. So this Gen5 drive has the newest Micron B58R flash. The random performance increase will carry over to Gen4; you will just be capped at Gen4 sequential performance. You will get the best Gen4 performance with this Gen5 drive simply because of the latency decrease, the part that makes your PC feel fast.

The same was true with Gen4 drives in Gen3 systems after buying a drive with B47R.
 
PCIe 5.0 is not justified for virtually any consumers. It just adds cost, heat, and headaches, for a modest performance improvement you won't notice in practice.

What we need is further iteration and refinement on PCIe 3.0 and 4.0 controllers.

The only persuasive argument I can see for PCIe 5.0 is if you wanted to use it at just x2, so you could pack in more drives. But, that's not how I think most people are using them.

Meh, Gen 6 is also coming... you can't stop that, and that's good. For example, just a few years ago, x16 PCIe NVMe RAID 0 cards existed with huge heatsinks and fans that used 4 SSDs in RAID 0 and cost a lot... now you can have the same thing at 1/10 the price, and it generates less heat.

What we need is not better controllers; what we need is faster solid-state memory... controllers will not add much.
 
About 1W higher while active, and practically the same ~750mW at idle for the more power-efficient models, since the interface should drop down to the same low-power states between bursts regardless of its maximum speed, unless power-saving features are disabled, limited, or not working correctly.
Link ASPM adds latency. It takes time to "wake up" from such a low power state, which is why some people don't use it and why some reviewers test with it disabled.
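As a side note, on Linux you can at least see which ASPM policy the kernel is applying. A minimal sketch follows; the sysfs path is the standard kernel location, but it won't exist if ASPM support is compiled out or disabled by firmware.

```python
# Minimal sketch: show the kernel's global PCIe ASPM policy on Linux.
# The currently active policy is the one shown in [brackets].
from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.exists():
    print("ASPM policy options:", policy.read_text().strip())
else:
    print("ASPM policy not exposed (support disabled or not built in)")
```

That only shows the platform-wide policy; whether a given drive actually reaches its deepest idle states still depends on the device and firmware.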

On my GTX1050, I was seeing the link speed go anywhere from 1.0 to 3.0 speed depending on whether I was doing something.
Noted. Thanks for sharing.
: )
 
Meh, Gen 6 is also coming... you can't stop that, and that's good.
I said it before and I'll say it again: Gen 5 doesn't (currently) make sense for consumer PCs. The only real use case I can see is lane reduction, but we haven't observed examples of anyone doing that outside of PCIe 4.0 graphics cards.

I think Intel jumped the gun on putting PCIe 5 in consumer PCs. It might've been done for "specmanship" vs. AMD, or as a test & debug platform so their PCIe 5.0 IP and others' peripherals could be tested and tweaked prior to PCIe 5.0 landing in Sapphire Rapids. There is no doubt it added to the cost of current gen motherboards.

I do not expect to see a repeat with PCIe 6.0. Maybe it will, but I sure wouldn't count on seeing PCIe 6.0 for consumers, anytime soon. The only thing that might change that is if CPUs adopt in-package memory and want to provide CXL 2.0 as a memory expansion option.
 
I do not expect to see a repeat with PCIe 6.0. Maybe it will, but I sure wouldn't count on seeing PCIe 6.0 for consumers, anytime soon.
Many lower-cost motherboards already skip 5.0 due to cost reasons. It can only get worse with 6.0.

We're headed towards the point where external interfaces are too fast for conventional PCB materials and bringing essential high-speed stuff on-package (ex.: main system memory) will become a necessary cost-cutting measure. Maybe we'll see a return to the Slot-1/A style setup where the CPU and its highest-speed peripherals like DDR7 and 6.0x4 NVMe can be on a fancy circuit board separate from the rest of the motherboard made of standard materials for 5.0-or-slower stuff.
 