[SOLVED] SSD Health

Mar 22, 2021
So this might be a bit of a stupid question, but still...
I've had my SSD for around 2 years. I recently realized that I never checked its health, and now that I have, I'm paranoid about it.
I checked with HWInfo first and got 87%, but then with CrystalDiskInfo I got 63%, and all of the write/read totals seem quite big.
So which one should I trust?
When is it going to be time to replace the drive without losing any data etc., considering my OS and some important information is on it?
Is there any way to decrease the writes over time with any OS settings, so its life is extended?
Thank you for the answers in advance. Kind of paranoid. :D
(I have a WD Green 240GB drive, by the way.)

CrystalDiskInfo result: https://imgur.com/q1iayPu

HWInfo result: https://imgur.com/RUnSzpx
 
Mar 22, 2021
You should trust your backup routine. If you're not doing that, start today.

A drive can die at any time, sometimes quite suddenly. Even if various software reports various levels of "healthy".
Yeah, backup is always good, of course. I have most things on a lot of drives. The biggest headache is just setting up my whole OS and everything again for a new drive.
I was just seeking general advice, not the worst-case scenario. :D
 

USAFRet

Titan
Moderator
A good backup routine ALSO includes the OS drive.
Nothing to 'set up' again.



General advice is to always be prepared for a drive failure.
I had a seemingly 'perfect' 960GB SanDisk go from working to dead, just in turning the system on.
It was working. I turned the system off.
5 mins later, power up...."hey, where's the G drive?"

Bottom line - there is no real way to 'predict' the when.
 
Mar 22, 2021
Then again, not everyone has the funds to buy 10 hard drives even if they're affordable, or the time to back up the OS literally 50 times. A lot of things can change in one day of using your PC, and backing up every single day isn't an option either. I mean, my PC could explode when I turn it on tomorrow. Something could still happen to cloud storage. It's all worst cases.
Anyway, it seems like asking was a bad decision after all, since we're talking about the worst-case scenario again. Thank you.
 

USAFRet

Titan
Moderator
A good backup routine can be done to a single external drive.
Don't need "10".

My systems do an Incremental backup every night. All writing to a single folder tree.
Save for 30 days, deleting the oldest as it goes.
All automated, hands off.

How valuable/important is your data?

Asking that question is/was not a bad thing.
But I'm just saying...you cannot predict when a drive might die. Or when it is time to replace.
There is no magic 'health number'.
 
The value of interest (converting hex to decimal) is the average P/E cycle count: 0x43 = 67. All other values that would indicate problems are basically 0, for example grown bad blocks. Drives start showing bad blocks around 2/3 of the way through their lifespan (a good time to start replacing, in my opinion), so we look at general wear instead.

Attribute AD (173) is usually the average block-erase count, and at 0x43 = 67 that works out to ~16TB of TLC writes committed (erased): 67 * 240 GB. That matches the "Total NAND Writes" value at the top. 16TB sounds like a lot, but generally it really isn't.
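
For anyone who wants to reproduce that arithmetic, here is a minimal sketch (plain Python, nothing drive-specific; the 0x43 raw value and 240GB capacity are simply the figures from this thread):

Code:
# Hex-to-decimal conversion and the NAND-writes estimate described above.
avg_pe_cycles = int("43", 16)   # raw attribute value 0x43 -> 67 average P/E cycles
capacity_gb = 240               # WD Green 240GB

# One average erase of every block rewrites roughly the whole capacity,
# so average erase count * capacity approximates total NAND (TLC) writes.
nand_writes_tb = avg_pe_cycles * capacity_gb / 1000

print(avg_pe_cycles)              # 67
print(round(nand_writes_tb, 1))   # ~16.1 TB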

Host writes are higher, and that's likely where the health value is calculated from. The 240GB WD Green was originally rated for 80TB written. Mind you, the drive will survive many more writes than that, and even though many vendors use host writes to track health, it isn't tied to reality in any way; it's just for the warranty.

Modern consumer drives run some of their native flash (TLC) in pseudo-SLC (single-bit) mode, either permanently (static) or variably (dynamic). This SLC mode has higher endurance and can defer writes to the base flash. The SLC block-erase count appears to be tracked here too, but SLC blocks are smaller (1/3 of TLC), and that's likely an absolute count. For example, 3GB of static SLC is typical for a 240GB WD drive, but static SLC is also rated more like 30K P/E where the base TLC is in the 3K range. This gets complicated because with static SLC the drive's health status is effectively SLC or TLC, whereas with dynamic SLC it's shared (the SLC and TLC wear the same pool of flash). So it's possible the health value here also tracks SLC writes, which matters more because the cache is so small, but the drive can always write straight to TLC to balance wear long-term.
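
To illustrate why a small static SLC cache can dominate the wear picture, here is a rough, purely hypothetical sketch using the example numbers above (3GB static SLC at ~30K P/E, TLC in the 3K P/E range) plus an assumed ~37TB of host writes; none of these are confirmed specs for this exact drive:

Code:
# Hypothetical worst case: every host write lands in the small SLC cache first.
host_writes_gb = 37_000   # assumed host writes (~37 TB)
slc_cache_gb = 3          # assumed static SLC cache size
slc_pe_rating = 30_000    # assumed pseudo-SLC endurance
tlc_capacity_gb = 240
tlc_pe_rating = 3_000     # assumed TLC endurance (the "3K range" example above)

slc_wear_pct = 100 * (host_writes_gb / slc_cache_gb) / slc_pe_rating      # ~41%
tlc_wear_pct = 100 * (host_writes_gb / tlc_capacity_gb) / tlc_pe_rating   # ~5%

print(round(slc_wear_pct), round(tlc_wear_pct))   # 41 5

On that simplified model the cache region looks several times more worn than the TLC array, which is why a health number keyed to SLC writes can drop faster than the underlying flash wear would suggest.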

The tl;dr of these technical details is "you're probably fine." But yes, always have a backup plan.
 
Solution
Mar 22, 2021
Thank you so much. This was honestly a good read and it finally cleared things up for me. :D
So by the math and stats, I'm guessing it can last at least a year WITHOUT any of the unfortunate things that could happen, considering I've "written" 31TB in 2 years and the drive is rated for 80?
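
As a rough sketch of that estimate (a back-of-the-envelope calculation assuming the ~31TB over two years and the 80 TBW rating mentioned in this thread; the rating is a conservative warranty figure, not a hard limit):

Code:
# Simple write-rate extrapolation; not a prediction, just the arithmetic.
written_tb = 31
years_in_use = 2
rated_tbw = 80

tb_per_year = written_tb / years_in_use             # ~15.5 TB/year
years_left = (rated_tbw - written_tb) / tb_per_year

print(round(years_left, 1))   # ~3.2 years to reach the rating at the current pace
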
And what's really the best way to make a 1-to-1 copy of the Windows SSD, saving all the settings, services, etc. that I've set up on it? For example, if I do buy another SSD and just don't want to do a clean install on it?
 

USAFRet

Titan
Moderator
I use Macrium Reflect for this.

A Full image, followed by a rolling series of Incremental or Differential. Whatever schedule you set.
A complete representation of the whole drive.

Write it out to some other storage device.
Create a Macrium RescueUSB, and save it somewhere safe. I have mine stashed in the bottom of the PC case.
 
Like so many things, the life expectancy of an SSD is anywhere from now to forever; one can never tell, because it can die the next minute. That's where backup comes in. Reliable backup, because the value of lost data can far outstrip the value of the disk itself. The disk, Windows, and programs/games you can easily replace, but baby pictures never!
Backup software is plentiful; my favorite, and arguably the best and fastest, is Macrium Reflect Free. It can back up the whole disk or part of it into one single file which you can save to another disk (best if it's kept offline) or even copy to another place; it can also be browsed and only parts copied out of it. The size of the resulting file is about 75% of the space used on the original disk, so it can fit on a smaller disk.
Once you have it and have made the restoration boot disk or USB, you can have the whole backup (including the OS if you so choose) back on a new disk in about 15 minutes, and everything will be back to the same state as when the backup was made.
 
The bottom end of the SMART report is missing. I suspect that there would be an attribute which would account for CrystalDiskInfo's 63% health assessment.

If the drive transparently compresses the data, then that could partly account for the low NAND writes when compared with the host writes (in addition to SLC caching).
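
For what it's worth, the ratio being pointed at here is essentially a write amplification factor; a quick sketch with the approximate figures from this thread (both numbers are rough readings, not exact values):

Code:
# Apparent write amplification = NAND writes / host writes.
nand_writes_tb = 16   # from the erase-count estimate earlier in the thread
host_writes_tb = 37   # approximate host writes

waf = nand_writes_tb / host_writes_tb
print(round(waf, 2))   # ~0.43; a value below 1 would fit compression (and/or counting quirks)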
 
Mar 22, 2021
https://imgur.com/2sNyInq

There we go. I scrolled all the way down. Anything connecting the lower health % with that?
 
The Media Wearout Indicator appears to consist of 3 parts -- 0x2506 / 0x0D28 / 0x2506. However, I don't know how to interpret these numbers.

Have you tried WD's SSD Dashboard?

CrystalDiskInfo's health score would be consistent with a drive that has executed 37TB host writes out of 100TBW. Perhaps CrystalDiskInfo uses 100TBW as the default rating for capacities of 250GB??

Edit:

I think I might be able to make sense of the wearout numbers, but I need to see how WD's SSD Dashboard reports this attribute. I'm guessing that the two numbers might reflect the wear in terms of % P/E cycles and % TBW (host writes). If we divide each number by 0x100 (= 256), we get 0xD and 0x25 (13% and 37%). It could just be a wild coincidence, though. We would need the raw data reported by CrystalDiskInfo at the same time in order for the comparison to be valid.

My searches suggest that this SSD uses 2D TLC NAND, and the endurance ranges from 300 to 500 P/E cycles. An average of 67 P/E cycles represents 13% of 500.
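
Here is a small sketch of that guess, splitting the raw words by 0x100 and comparing the results against the two wear figures just mentioned (the 500 P/E and 100 TBW values are the assumptions stated above, not published specs):

Code:
# Hypothetical decoding of the Media Wearout Indicator raw words.
raw_words = [0x2506, 0x0D28, 0x2506]
print([w // 0x100 for w in raw_words])   # [37, 13, 37]

# Cross-check against the two wear metrics discussed above:
pe_wear_pct = 100 * 67 / 500    # average P/E cycles vs an assumed 500-cycle rating -> ~13%
tbw_wear_pct = 100 * 37 / 100   # ~37 TB host writes vs an assumed 100 TBW rating -> 37%
print(round(pe_wear_pct), round(tbw_wear_pct))   # 13 37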
 
Mar 22, 2021
WD's Dashboard says 63% as well. I don't see "Media Wearout" anywhere in the dashboard, though.
 
See https://forums.anandtech.com/threads/wd-ssd-dashboard-is-broken-garbage-return-drive.2520511/

The screenshots show a "Media Wear Out Indicator" with a percentage value in place of the raw value.

(Attached screenshot: 20170913001308WDSSDD.png)