KC3000 odd behaviour

Nov 28, 2022
So long story short...
I have an NVMe KC3000 2TB drive that has started to behave oddly (at best).
It looks like sectors that were written a long time ago (3+ months) are hard/slow to read.

I've found a similar thread, but without a resolution: https://forums.tomshardware.com/threads/is-my-ssd-too-slow.3779315/

This is a "speed" map of part of my drive:
https://imgur.com/a/waWWQ8r

Specs:

AMD 5800X3D
Asus B550F
Windows 10

The drive is in the slot with a direct-to-CPU PCIe lane. I've even tried switching PCIe to Gen3 to see if there was an issue with the link; same behaviour.
 
What does SMART look like? What are the values of "Available Spare" and "Media and Data Integrity Errors"?

I've just run Victoria against my Samsung 860 Evo SATA. The surface scan found 2 slow blocks (400ms and 1.6s). However, I noticed that these occurred while my old machine was being tied up by another task.

If you are willing to sacrifice 1 P/E cycle, you could use a tool such as DiskFresh to non-destructively rewrite every sector.

https://www.puransoftware.com/DiskFresh.html

I don't know if Victoria can do the same thing.
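DiskFresh itself is closed-source, but the principle it applies is simple: read every block and write the identical bytes back in place, so the NAND cells get reprogrammed without the data changing. A minimal Python sketch of that idea, run against a regular file path here for safety (pointing it at a raw device would need admin rights and much more care; this is an illustration, not DiskFresh's actual implementation):

```python
import os

def refresh_blocks(path, block_size=1024 * 1024):
    """Read each block and write the same bytes back in place.

    Contents are unchanged; only the physical cells get rewritten,
    which costs one P/E cycle for the touched blocks.
    """
    with open(path, "r+b", buffering=0) as f:
        offset = 0
        while True:
            f.seek(offset)
            data = f.read(block_size)
            if not data:            # reached end of file/device
                break
            f.seek(offset)
            f.write(data)           # identical bytes, non-destructive
            offset += len(data)
        f.flush()
        os.fsync(f.fileno())        # push the rewrites to the medium
```

On a real drive the rewrite would go through the controller's wear leveling anyway, so the "refreshed" data may land in entirely different physical blocks than before.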
 
Nov 28, 2022
Thanks for the reply.
No bad indicators in SMART data.

SMART: https://imgur.com/a/9CLd04J

Thanks for the "sector refresher" suggestion.
Clearly the "refresh" works; I ran it for a moment, just long enough to see that the "speed map" at the start of the disk is now correct. I stopped it because I believe Kingston should fix this; maybe they will take my drive and analyze the rest of the sectors.

I also believe the disk controller should do exactly what this "refresher" does when a sector becomes slow, rather than leaving it to the user, but as a workaround it might be OK.


After the disk refresh (first sectors):
https://imgur.com/a/WSK3g6P
 
I was just thinking that if you delete files, TRIM will be activated. The newly released free space should then be unaffected by the speed problems, because the drive won't bother to access the TRIMmed sectors; instead it will just report zeros. Eventually that free space will be erased.

I mention this because I'm wondering whether that large good patch in the middle of the user area corresponds to deleted files.
 

MWink64

So long story short...
I have an NVMe KC3000 2TB drive that has started to behave oddly (at best).

Specifically, how is it behaving oddly? Other than that test, is there anything unusual about how it's working under normal use?

Thanks for the "sector refresher" suggestion.
Clearly the "refresh" works; I ran it for a moment, just long enough to see that the "speed map" at the start of the disk is now correct. I stopped it because I believe Kingston should fix this; maybe they will take my drive and analyze the rest of the sectors.

I also believe the disk controller should do exactly what this "refresher" does when a sector becomes slow, rather than leaving it to the user, but as a workaround it might be OK.

What exactly should Kingston fix? Before deciding if there's anything to fix, we need to know if there's anything functionally wrong with the drive. In my mind, the jury is still out on whether these tests (originally intended for HDDs) are applicable to SSDs. I'm not discounting the possibility, but I don't want to jump to any conclusions either.

Are you planning to send your drive back to Kingston? If you expect them to "analyze the rest of the sectors" you'd have to leave all your data on it. Are you comfortable with that? Even if you are, I sincerely doubt they'd actually analyze the sectors in the same way you did. They'd just run some diagnostics and it would either pass or fail.

You say you think the disk controller should work exactly like the refresher. Well, it's not that simple. Even if we take the slow sectors at face value, at what point should the controller decide to refresh them? You have to keep in mind that this requires re-writing the data and will consume the NAND's limited P/E cycles (and increase write amplification). If it's done too aggressively, it will greatly accelerate the speed at which the drive wears out. It's a balancing act and it's quite possible the manufacturer has already considered the issue. Even though the speed has degraded a bit, I suspect the drive would refresh the cells before they degraded to the point where data integrity could be compromised.
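To illustrate that balancing act: a hypothetical controller policy might only refresh blocks whose observed read latency crosses a threshold, and cap how many it rewrites per maintenance pass so the refresh itself does not burn through P/E cycles. A toy sketch (the function name, threshold, and budget are all made up for illustration, not anything from Kingston's firmware):

```python
def blocks_to_refresh(latencies_ms, threshold_ms=50.0, budget=1000):
    """Pick the slowest blocks exceeding a latency threshold,
    capped by a per-pass rewrite budget to limit wear.

    latencies_ms: {block_index: observed read latency in ms}
    Returns block indices, worst latency first.
    """
    slow = [(lat, blk) for blk, lat in latencies_ms.items()
            if lat > threshold_ms]
    slow.sort(reverse=True)          # worst offenders first
    return [blk for _, blk in slow[:budget]]
```

Tuning `threshold_ms` and `budget` is exactly the trade-off described above: refresh too eagerly and you accelerate wear and write amplification; too lazily and reads stay slow for months.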

Back to my original point: it's hard to help unless we know the actual symptoms. What seems abnormal when you're simply using your PC?
 
I would think that "background data maintenance" would catch weak sectors and preemptively refresh them or reallocate them.

https://www.crucial.com/support/articles-faq-ssd/ssds-and-smart-data

As the SSD continues to age, error correction code (ECC), read retries, adaptive read parameters, background data maintenance, and other adjustments in firmware can correct problems that arise because of gradually degrading data retention. As NAND data blocks degrade, they can be replaced by on-board spares, and normal operations can proceed. Of course, all these background operations take place when power is on, which is why data retention is defined in an unpowered state.

That same document makes this interesting observation about data retention:

The Joint Electron Device Engineering Council (JEDEC) is the industry group which creates standards and specifications for semiconductor-based devices and assemblies. Micron is a leading member of JEDEC, which defines data retention in a specific way: For SSDs in client applications (like business or personal computers), data retention for an SSD shall be one year, in an unpowered state, stored at 30 °C (86 °F). This should give most computer users plenty of time to retrieve any data from an unused drive after some time on the shelf, if needed.
 
Nov 28, 2022
Back to my original point: it's hard to help unless we know the actual symptoms. What seems abnormal when you're simply using your PC?

Loading times for applications are 2-3x longer than before; my old SATA SSD behaves better. If I run a file copy, for example, the system becomes unresponsive.
In Windows Task Manager I see 50-60 MB/s reads on the SSD, and everything else becomes super sluggish.

Anyway, I see no reason not to use software that reads the drive sector by sector to check what is wrong.
Logically, all block devices work the same way (from the interface standpoint), so standard LBA-mode access is the way to go: it is the same way your OS accesses the drive, and it is the lowest-level access available from the user's perspective (unless you have low-level diagnostic tools). So any tool that can read blocks in LBA mode will do, and you can benchmark it. If you don't agree with that, I can't say anything more.
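For what it's worth, a crude version of such a sector-by-sector benchmark fits in a few lines of Python: time sequential fixed-size reads and record per-block throughput, which is essentially what a "speed map" plots. This is a sketch under assumptions (a plain file path is used for safety; a raw device path would need admin rights), not the method Victoria or any specific tool uses:

```python
import time

def speed_map(path, block_size=1024 * 1024, max_blocks=None):
    """Sequentially read fixed-size blocks and return MB/s per block.

    Slow-to-read regions show up as dips in the returned list,
    like the dark stripes in a surface-scan speed map.
    """
    speeds = []
    with open(path, "rb", buffering=0) as f:
        while max_blocks is None or len(speeds) < max_blocks:
            t0 = time.perf_counter()
            data = f.read(block_size)
            dt = time.perf_counter() - t0
            if not data:                      # end of file/device
                break
            mb = len(data) / (1024 * 1024)
            speeds.append(mb / max(dt, 1e-9))  # avoid divide-by-zero
    return speeds
```

Note that OS caching will inflate the numbers for recently accessed data, so a serious tool would bypass the cache (unbuffered/direct I/O), which the `buffering=0` flag here only partially achieves.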

Even "worse" drives (for example the Crucial MX500) perform better in this test, and the PC flies compared to the KC3000.

"Self" copy operations on new files run at around 3.8 GB/s; for old files the times are unpredictable (dropping to 50 MB/s or less).

I sincerely doubt they'd actually analyze the sectors in the same way you did. They'd just run some diagnostics and it would either pass or fail.

You probably didn't work as a firmware engineer (no offence).

I would think that "background data maintenance" would catch weak sectors and preemptively refresh them or reallocate them.

Exactly.

The stability of the speed map is worrying, because it suggests that wear leveling is working poorly too.
 