Question: SSD random reads seem particularly slow

Sep 5, 2022
I've recently swapped a SATA HDD for a Crucial SSD in my laptop, which also has an NVMe SSD as the primary drive. The secondary SSD reads very slowly at certain times. I've confirmed that TRIM is enabled (and triggered it manually through Optimise Drives), installed Crucial Storage Executive to make sure the firmware is up to date, and checked that AHCI appears to be working.

I enabled Momentum Cache, which seems to have slightly inflated the write speeds.

Could anyone help please?

[Image: vfxM1E7.jpg]
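In case it's useful, this is roughly how I've been checking and re-triggering TRIM, wrapped in Python purely for convenience; the D: drive letter is only an example for the secondary SSD, and it needs an elevated prompt.

Code:
import subprocess

# Query whether Windows has delete notifications (TRIM) enabled.
# "DisableDeleteNotify = 0" in the output means TRIM is on for NTFS volumes.
print(subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True).stdout)

# Ask Windows to retrim the volume, the same thing the Optimise button in
# Optimise Drives does. D: is an assumed drive letter; substitute the letter
# of the slow secondary SSD.
subprocess.run(["defrag", "D:", "/L"], check=True)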
 
Post a link to the SSD you bought.
 
What chipset driver are you using? The INF file.

From your motherboard maker?

Or as supplied by Microsoft?

The choice made a significant difference on one of my drives; the Microsoft INF file was clearly slower.

Might not make a difference, but I'd consider it.
 
Sep 5, 2022
Hi, I enabled Momentum Cache to try to improve the performance of the drive that was acting slow. I knew the high numbers were inflated by the cache, but the poor numbers were still poor. I've disabled the cache and rerun the CrystalDiskMark benchmark, and it's pretty bad across the board:

[Image: 9GcV1Rf.jpg]



@Lafong - are you talking about the SATA driver? I think I'm using an Intel driver for that rather than the generic Windows driver:

[Image: 1nDmsc9.jpg]
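In case it helps, this is a rough way to check which storage driver is actually loaded; the module names I'm filtering on (storahci for Microsoft's generic AHCI driver, iaStor* for Intel's, stornvme for the NVMe drive) are the usual suspects rather than anything confirmed for this particular laptop.

Code:
import subprocess

# Dump installed drivers as CSV and keep the storage-controller entries,
# to see whether the Intel driver (iaStor*) or the generic Microsoft AHCI
# driver (storahci) is bound.
out = subprocess.run(
    ["driverquery", "/v", "/fo", "csv"],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    if any(name in line.lower() for name in ("storahci", "iastor", "stornvme")):
        print(line)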
 
Post a screenshot from CrystalDiskInfo for this disk.
 
@Lafong - are you talking about the SATA driver? I think I'm using an Intel driver for that rather than the generic Windows driver

No.

I was referring to a motherboard-specific file. Not drive-specific.

Typically available at the support area for your specific motherboard at your motherboard manufacturer's web site.

I guess you have a laptop, so it would likely be found at the laptop maker's support site for that specific model. There may be a recent update.

May or may not help. I don't know what you should expect from that drive. I have a Crucial, but not that one.
 
Sep 5, 2022
That looks like a faulty drive, ready to be replaced under warranty.

Thanks for the advice. Would you mind expanding on why it appears faulty, so that I have some reasoning for my RMA? Is it possible to tell whether I've installed it badly, or whether it's a faulty cable rather than a fault in the SSD itself? Would a fault in either of those areas result in slow reads, or would it be binary and simply not work?
 

USAFRet

Titan
Moderator
The read/write speeds in your CrystalDiskMark result are far too slow.
About 10% of what they should be.

Swap the SATA data cable.
Swap to a different SATA port.

If it still does that, it is a bad drive.
 

MWink64

Prominent
Sep 8, 2022
Plain and simple, the answer is most likely that it's a Crucial BX500. This model is notorious for doing exactly this. I've been battling one with the exact same issue for the last three years. Ironically, I bought it because I was worried that I was using too many of those cheap Inland drives (all of which are still working perfectly).

I have tried everything with the BX500 and have found only one solution: wipe the drive. It does not matter what system it's in (new, old, Intel, AMD), what SATA drivers you use, how empty the drive is (even if just 10% full), whether it's overprovisioned, whether TRIM is enabled (and even executed manually), or how much idle time it has. It will inevitably get progressively slower over a course of weeks to months. I have seen mine drop to the point where a 20-year-old hard drive would run circles around it. It can literally take 15-20 seconds to start something as simple as Notepad or Paint, and 30+ seconds to start a web browser.

The ONLY solution I've found is to wipe the drive (secure erase/sanitize). At this point, I just image the drive to a hard drive, wipe it, then restore the image. After doing that, it performs great... for a while.

I would love to know what it is that causes the issue with this specific model. I've used plenty of DRAM-less drives in similar setups, so it can't simply be that. I have decade-old, bargain-basement drives that perform light-years better. If anyone has any theories as to what's up with these drives, I'd love to hear them. It's a shame, because the Crucial MX500 is a great drive.
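If anyone wants to put a rough number on the decay over time, a crude Python sketch like the one below is enough to log a figure every few weeks; the file path is just a placeholder for a large file that already lives on the BX500, and anything Windows has cached will give a wildly optimistic result.

Code:
import time

def read_throughput_mb_s(path, chunk_size=1024 * 1024):
    # Read an existing file once and return the average MB/s. Crude on
    # purpose: pick large files that haven't been read since the last
    # reboot, otherwise the OS cache hides the drive's real behaviour.
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.perf_counter() - start) / (1024 * 1024)

# Placeholder path: any big file already sitting on the slow drive.
path = r"D:\big\file.bin"
print(f"{read_throughput_mb_s(path):.1f} MB/s")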
 

Instead of erasing the drive, and thereby reinitialising the FTL and other firmware components, why not simply refresh each LBA by reading it and writing it back? This will test whether the cells have "decayed" to the point that internal error recovery is working all the time.

Puran's DiskFresh is freeware.

https://www.puransoftware.com/DiskFresh.html
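For what it's worth, the read-then-write-back idea can also be approximated per file in a few lines of Python. This is only a sketch of the concept, not DiskFresh itself: it refreshes just the LBAs allocated to that one file, every chunk it touches costs a write, and whether the controller really reprograms the cells when identical data is written back is its own open question.

Code:
import os

def refresh_file(path, chunk_size=4 * 1024 * 1024):
    # Read the file chunk by chunk and write each chunk straight back to
    # the same offset, forcing the drive to re-store that data.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        offset = 0
        while offset < size:
            f.seek(offset)
            data = f.read(chunk_size)
            if not data:
                break
            f.seek(offset)
            f.write(data)
            offset += len(data)
        f.flush()
        os.fsync(f.fileno())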
 

MWink64

Prominent
Sep 8, 2022

A decade-old utility (meant for hard drives) from a company I've never heard of, one that also offers snake oil (a registry cleaner/defragmenter). Thanks, but I think I'll pass. Though you are right, this might provide some evidence as to whether the issue is degradation that has not yet become severe enough to leave the drive completely unable to reconstruct the data. This program seems like a really bad idea to run on an SSD. I assume it rewrites every LBA, regardless of whether it contains useful data. That's going to result in a full-drive write and likely force the drive to do a ton of garbage collection. That's a lot of unnecessary wear.
 
It will consume one P/E cycle, just like your proposed secure erase. I'm suggesting that this would be one way to determine whether charge decay is the root cause of the slowdown rather than some firmware dependent anomaly. A secure erase will reset everything, including the firmware structures, so you will be none the wiser.

As for DiskFresh, it's free. It doesn't matter that it was written with a hard drive in mind; its purpose is the same in this particular case. If you can find another free tool to do the same, then by all means go for it.

BTW, I'm not recommending that rewriting every LBA is something that one should normally do. Far from it. I'm only suggesting that this is one troubleshooting approach that would enable you to narrow down the cause of the slowdowns by separating two variables. Essentially you should try to change one variable at a time and observe the result. That's just common sense. Changing everything all at once (ie shotgunning) achieves nothing.