Question: SSD degradation

Jun 28, 2019
I have a 4-year-old Kingston SV300S37A120G. Kingston SSD Manager shows two entries for a wear indicator, with two different values, 11 and 83 (not sure why there are two entries). Consequently it indicates a warning and DEGRADATION. I don't understand how this is possible, given that I use my PC less than the average home user: no games, no movies, very few downloads. I don't usually power it on and off either. Am I missing something as to why the SSD appears to be on the verge of failure despite what I believe is pretty light use? Any ideas would be appreciated. Thanks.





Device Path: \\.\PHYSICALDRIVE0
Vendor/Model: KINGSTON SV300S37A120G
Serial #: 50026B775C02DC8E
Firmware: 60AABBF0_33510

Id [hex] - Description:
---------------------------------------------
Norm Worst Thresh Raw Flags

======================================================

1 [0x01] - Read Error Rate:
---------------------------------------------
95 95 50 0x0000000001518feb 0x32,SP,EC,OC

5 [0x05] - Bad Block Count:
---------------------------------------------
100 100 3 0x0000000000000000 0x33,SP,EC,OC,PW

9 [0x09] - Power On Hours:
---------------------------------------------
77 77 0 0x002c778e00004f47 0x32,SP,EC,OC

12 [0x0c] - Power Cycles:
---------------------------------------------
99 99 0 0x00000000000007c7 0x32,SP,EC,OC

171 [0xab] - Program Fail Count:
---------------------------------------------
100 100 0 0x0000000000000000 0x0a,ER,OC

172 [0xac] - Erase Fail Count:
---------------------------------------------
100 100 0 0x0000000000000000 0x32,SP,EC,OC

174 [0xae] - Unexpected Power Loss:
---------------------------------------------
0 0 0 0x00000000000000c3 0x30,SP,EC

177 [0xb1] - Wear Leveling Range Percent:
---------------------------------------------
0 0 0 0x0000000000000001 0x00

181 [0xb5] - Program Fail Count:
---------------------------------------------
100 100 0 0x0000000000000000 0x0a,ER,OC

182 [0xb6] - Erase Fail Count:
---------------------------------------------
100 100 0 0x0000000000000000 0x32,SP,EC,OC

187 [0xbb] - Uncorrectable ECC Errors:
---------------------------------------------
100 100 0 0x0000000000000000 0x12,EC,OC

189 [0xbd] - Temperature (alt):
---------------------------------------------
29 37 0 0x000000100025001d 0x00

194 [0xc2] - Temperature:
---------------------------------------------
29 37 0 0x000000100025001d 0x22,SP,OC

195 [0xc3] - RAISE Recovered:
---------------------------------------------
120 120 0 0x0000000001518feb 0x1c,EC,ER,PE

196 [0xc4] - Reallocation Events:
---------------------------------------------
100 100 3 0x0000000000000000 0x33,SP,EC,OC,PW

201 [0xc9] - Uncorrectable Soft ECC Rate:
---------------------------------------------
120 120 0 0x0000000001518feb 0x1c,EC,ER,PE

204 [0xcc] - Soft ECC Correction Rate:
---------------------------------------------
120 120 0 0x0000000001518feb 0x1c,EC,ER,PE

230 [0xe6] - Drive Life Protection Status:
---------------------------------------------
100 100 0 0x0000000000000064 0x13,EC,OC,PW

231 [0xe7] - SSD Wear Indicator:
---------------------------------------------
11 11 11 0x0000000e00000001 0x00

233 [0xe9] - Lifetime Nand Writes:
---------------------------------------------
0 0 0 0x000000000001c51e 0x32,SP,EC,OC

234 [0xea] - Lifetime Host Writes:
---------------------------------------------
0 0 0 0x0000000000003997 0x32,SP,EC,OC

241 [0xf1] - Lifetime Host Writes:
---------------------------------------------
0 0 0 0x0000000000003997 0x32,SP,EC,OC

242 [0xf2] - Lifetime Host Reads:
---------------------------------------------
0 0 0 0x00000000000036be 0x32,SP,EC,OC

244 [0xf4] - SSD Wear Indicator:
---------------------------------------------
83 83 10 0x00000000020d020d 0x00
 

Deleted member 14196

Guest
Your drive is 75% full, and if you have not over-provisioned it, that would probably be the problem.
 

USAFRet

Titan
Moderator
Not if that 25% is formatted. If so he's not over-provisioned
Yes it is. Free space is free space.
'Over-provisioning' is simply a way to force-feed that free space.
If you designate "20% OP", then you can use up to 80% of the actual space. Your visible free space can go down to zero.
If you manually leave 20% free space, it's the same thing.

The drive shuffles data around between the cells as it sees fit. Formatted or otherwise.
 
Jun 28, 2019
Moved some stuff around; free space is now 45%. Kingston SSD Manager identifies the correct free space amount but still indicates the same wear totals. CrystalDiskInfo output is below. Notice 11% SSD life left and 83% vendor-specific life/wear. CrystalDiskInfo says that at 11% the health of the drive is still Good. I am still confused.
----------------------------------------------------------------------------
----------------------------------------------------------------------------
CrystalDiskInfo 8.1.0 (C) 2008-2019 hiyohiyo
Crystal Dew World : https://crystalmark.info/
----------------------------------------------------------------------------

OS : Windows 10 [10.0 Build 17134] (x64)
Date : 2019/06/29 12:18:03

-- Controller Map ----------------------------------------------------------
+ Intel(R) 300 Series Chipset Family SATA AHCI Controller [ATA]
- KINGSTON SV300S37A120G


-- Disk List ---------------------------------------------------------------
(1) KINGSTON SV300S37A120G : 120.0 GB [0/0/4, pd1] - sf

----------------------------------------------------------------------------
(1) KINGSTON SV300S37A120G
----------------------------------------------------------------------------
Model : KINGSTON SV300S37A120G
Firmware : 60AABBF0
Serial Number : 50026B775C02DC8E
Disk Size : 120.0 GB (8.4/120.0/120.0/----)
Buffer Size : Unknown
Queue Depth : 32
# of Sectors : 234441648
Rotation Rate : ---- (SSD)
Interface : Serial ATA
Major Version : ATA8-ACS
Minor Version : ACS-2 Revision 3
Transfer Mode : SATA/600 | SATA/600
Power On Hours : 20298 hours
Power On Count : 1994 count
Host Reads : 14081 GB
Host Writes : 14751 GB
Temperature : 28 C (82 F)
Health Status : Good (11 %)
Features : S.M.A.R.T., APM, 48bit LBA, NCQ, TRIM
APM Level : 00FEh [ON]
AAM Level : ----
Drive Letter : C:

-- S.M.A.R.T. --------------------------------------------------------------
ID Cur Wor Thr Raw Values (7) Attribute Name
01 _95 _95 _50 000000007ED868 Raw Read Error Rate
05 100 100 __3 00000000000000 Retired Block Count
09 _77 _77 __0 19AC0800004F4A Power-on Hours
0C _99 _99 __0 000000000007CA Power Cycle Count
AB 100 100 __0 00000000000000 Program Fail Count
AC 100 100 __0 00000000000000 Erase Fail Count
AE __0 __0 __0 000000000000C3 Unexpected Power Loss Count
B1 __0 __0 __0 00000000000001 Wear Range Delta
B5 100 100 __0 00000000000000 Program Fail Count
B6 100 100 __0 00000000000000 Erase Fail Count
BB 100 100 __0 00000000000000 Reported Uncorrectable Errors
BD _28 _37 __0 0000100025001C Vendor Specific
C2 _28 _37 __0 0000100025001C Temperature
C3 120 120 __0 000000007ED868 On-the-Fly ECC Uncorrectable Error Count
C4 100 100 __3 00000000000000 Reallocation Event Count
C9 120 120 __0 000000007ED868 Uncorrectable Soft Read Error Rate
CC 120 120 __0 000000007ED868 Soft ECC Correction Rate
E6 100 100 __0 00000000000064 Life Curve Status
E7 _11 _11 _11 00000E00000001 SSD Life Left
E9 __0 __0 __0 0000000001C534 Vendor Specific
EA __0 __0 __0 0000000000399F Vendor Specific
F1 __0 __0 __0 0000000000399F Lifetime Writes from Host
F2 __0 __0 __0 00000000003701 Lifetime Reads from Host
F4 _83 _83 _10 000000020D020D Vendor Specific

-
 

USAFRet

Titan
Moderator
I believe the actual "problem" is...."4 year old Kingston "

Not all SSDs are created equal. As I stated above, the only drive of mine that shows actual performance degradation over time is...a 120GB Kingston.

And the V300 line was particularly problematic.
https://www.extremetech.com/extreme...itching-cheaper-components-after-good-reviews
https://techreport.com/review/26664/alleged-bait-and-switch-tactics-spur-kingston-pny-ssd-boycott
 

prophet51

Reputable
Jun 14, 2019
[quoting the CrystalDiskInfo report from the post above]


Could the software be reading 89% life left as 11% life left? 15 TB of writes should be fine.
 
Looking at that SMART data, the drive has apparently only had 14,751 GB of writes performed on it, while the drive's specifications state that its flash cells are rated to handle 64,000 GB of writes (64 TBW). So the flash memory should likely still have more than 75% of its durability remaining.
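As a sanity check, that endurance arithmetic can be sketched in a few lines of Python. The 64,000 GB (64 TBW) rating and the 14,751 GB host-writes figure come from the posts above; the assumption that wear scales linearly with host writes is mine:

```python
# Rough endurance estimate: what fraction of the drive's rated write
# endurance (TBW) has been consumed by host writes so far?

def endurance_remaining_pct(host_writes_gb: float, rated_tbw_gb: float) -> float:
    """Percentage of the rated write endurance still unused,
    assuming wear scales linearly with host writes."""
    return max(0.0, 100.0 * (1.0 - host_writes_gb / rated_tbw_gb))

# Host Writes from CrystalDiskInfo vs. the V300's 64 TBW rating:
print(round(endurance_remaining_pct(14_751, 64_000)))  # -> 77
```

So by host writes alone, roughly 77% of the rated endurance should remain, nothing like the 11% the wear attribute reports.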

From this thread, it sounds like some of those SV300 drives might have had a firmware issue that results in incorrect SMART values getting reported for wear...

View: https://www.reddit.com/r/DataHoarder/comments/79kjw8/ssd_reaching_end_of_life/


That person installed an updated firmware, which seemed to fix their reported wear value (or at least reset it). Your drive looks like it might have the latest firmware, but perhaps running the firmware updater could potentially help...

https://www.kingston.com/us/support/technical/products?model=sv300s3
 
Jun 28, 2019
Ongoing; thanks to all for the input.
The firmware is already the most recent, so that did not change anything. I have backed everything up and do so frequently. So I guess I should continue to use this drive until it fails, since it could go on for many more years if in fact the actual life left is 83% and not 11%?
The PassMark data below is the same as the others'... of course I don't know if that means anything.

SMART ATTRIBUTES:
ID Description Status Value Worst Threshold Raw Value TEC
---------------------------------------------------------------------------------------------------------------------------------------------
1 Raw Read Error Rate OK 95 95 50 11078414 N/A
5 Retired Block Count OK 100 100 3 0 N/A
9 Power On Time OK 77 77 0 185628486553419 N/A
12 Power Cycle Count OK 99 99 0 1995 N/A
171 Program Fail Count OK 100 100 0 0 N/A
172 Erase Fail Count OK 100 100 0 0 N/A
174 Unexpected Power Loss Count OK 0 0 0 195 N/A
177 Wear Range Delta OK 0 0 0 1 N/A
181 Program Fail Count (Total) OK 100 100 0 0 N/A
182 Erase Fail Count (Total) OK 100 100 0 0 N/A
187 Reported Uncorrectable Errors OK 100 100 0 0 N/A
189 High Fly Writes OK 29 37 0 68721901597 N/A
194 Temperature OK 29 37 0 29 C N/A
195 On the fly ECC Uncorrectable Error Count OK 120 120 0 11078414 N/A
196 Reallocation Event Count OK 100 100 3 0 N/A
201 Uncorrectable Soft Read Error Rate OK 120 120 0 11078414 N/A
204 Soft ECC Correction OK 120 120 0 11078414 N/A
230 Life Curve Status OK 100 100 0 100 N/A
231 SSD Life Left FAIL 11 11 11 60129542145 N/A
233 SandForce Internal OK 0 0 0 116030 N/A
234 SandForce Internal OK 0 0 0 14752 N/A
241 Lifetime Writes from Host OK 0 0 0 14752 N/A
242 Lifetime Reads from Host OK 0 0 0 14091 N/A
244 (Unknown attribute) OK 83 83 10 34406925 N/A
 
Here is the closest SMART doc I have been able to find:

https://media.kingston.com/support/downloads/MKP_306_SMART_attribute.pdf

Attribute 231 is reporting that the remaining life is being assessed on the basis of the number of spare blocks that are remaining. It is telling us, perhaps incorrectly, that only 11% of the spare blocks remain unused. However, since the Retired Block Count (attribute 5) is zero, this would tend to confirm that the 11% figure is bogus, or that it reflects some other attribute.

Attribute F4 (244) is not documented, but it looks like it may be reporting the number of PE cycles. The raw value appears to be 0x020D. The normalised value of 83 would suggest that this attribute has lost 17 points (= 100 - 83), or is just about to lose 18 (= 100 - 82). If we assume that the maximum raw value occurs when the attribute loses 100 points, then this maximum value lies between 2916 and 3088, i.e. about 3000.

(0x20D / 18) * 100 = 2916
(0x20D / 17) * 100 = 3088

So I'm guessing that the drive has recorded 525 (= 0x20D) PE cycles on average out of a maximum rating of 3000 PE cycles.
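That back-calculation can be written out explicitly. This is just the arithmetic above in Python, under my assumption that the normalised value decays linearly from 100 as the average PE count grows:

```python
# Back-calculate the implied maximum PE-cycle rating from attribute 244.
raw_pe = 0x20D          # 525: average PE cycles recorded (raw value)
norm = 83               # current normalised value

# Points lost so far lie between (100 - 83) = 17 and 18, which brackets
# the implied maximum raw value:
upper = raw_pe * 100 // (100 - norm)        # exactly 17 points lost
lower = raw_pe * 100 // (100 - norm + 1)    # about to lose the 18th

print(lower, upper)  # -> 2916 3088, i.e. a rating of roughly 3000 PE cycles
```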

BTW, the normalised error rates are 120, which is the best possible value, i.e. error-free.

Edit: If the rating is 64 TBW, then this means that the rated number of PE cycles for the drive is...

64 TB / 128 GiB = 466

The actual NAND capacity is 128 GiB (which includes the overprovisioned space).

Attribute 233 (Lifetime NAND Writes) has a raw value of 0x1C51E (= 115,998), so this would imply that 116 TB have been written to NAND. Assuming that the Host Writes figure of 14,752 GB is correct, this implies a write amplification factor of 7.9 (= 116 / 14.7). Can this be right?
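The write-amplification figure works out the same way in Python. The raw values are taken from the CrystalDiskInfo dump above; the assumption that both attributes count in GB is mine:

```python
# Write amplification factor (WAF) = NAND writes / host writes.
nand_writes_gb = 0x1C51E   # attribute 233 (E9) raw value: 115,998
host_writes_gb = 0x399F    # attribute 241 (F1) raw value: 14,751

waf = nand_writes_gb / host_writes_gb
print(round(waf, 1))  # -> 7.9
```

A WAF near 8 would be unusually high for a SandForce controller, which is another hint that something about these counters may not be what it seems.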
 
Last edited:
Jun 28, 2019
[quoting the SMART analysis from the post above]

Thanks for your efforts. I know it just doesn't make sense that the drive would be degraded to a critical or fail level when nothing else indicates why this could be. I hope it is in fact an error, but I will stay on top of my backups just in case.