Question: Crucial MX500 500GB SATA SSD - Remaining Life decreasing fast despite only a few bytes being written to it?


Lucretia19

The Remaining Life (RL) of my Crucial MX500 ssd has been decreasing rapidly, even though the pc doesn't write much to it. Below is the log I began keeping after I noticed RL reached 95% after about 6 months of use.

Assuming RL truly depends on bytes written, the decrease in RL is accelerating and something is very wrong. The latest decrease in RL, from 94% to 93%, occurred after writing only 138 GB in 20 days.

(Note 1: After RL reached 95%, I took some steps to reduce "unnecessary" writes to the ssd by moving some frequently written files to a hard drive, for example the Firefox profile folder. That's why only 528 GB have been written to the ssd since Dec 23rd, even though the pc is set to Never Sleep and is always powered on. Note 2: After the pc and ssd were about 2 months old, around September, I changed the pc's power profile so it would Never Sleep. Note 3: The ssd still has a lot of free space; only 111 GB of its 500 GB capacity is occupied. Note 4: Three different software utilities agree on the numbers: Crucial's Storage Executive, HWiNFO64, and CrystalDiskInfo. Note 5: Storage Executive also shows that Total Bytes Written isn't much greater than Total Host Writes, implying write amplification hasn't been a significant factor.)

My understanding is that Remaining Life is supposed to depend on bytes written, but it looks more like the drive reports a value that depends mainly on its powered-on hours. Can someone explain what's happening? Am I misinterpreting the meaning of Remaining Life? Isn't it essentially a synonym for endurance?
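For perspective: Crucial's endurance spec for the 500GB MX500 is 180 TB written, so each 1% of Remaining Life should correspond to very roughly 1.8 TB of host writes:

180 TB / 100 = 1.8 TB of host writes per 1% of life
138 GB / 1.8 TB ≈ 0.08, i.e. the latest 1% drop cost less than a tenth of the writes it should have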


Crucial MX500 500GB SSD in desktop pc since summer 2019

Date         Remaining Life   Total Host Writes (GB)   Host Writes (GB) Since Previous Drop
12/23/2019   95%              5,782
01/15/2020   94%              6,172                    390
02/04/2020   93%              6,310                    138
 

Rogue Leader

Moderator
I see no suggestion in that post, nor any relevance to this thread's "excessively high Write Amplification" topic. Perhaps you misunderstood which bug s/he was referring to where s/he wrote about F8 increasing when "this" bug happened. I believe s/he meant the "pending sector" bug in the unraid forum thread, not the "high WAF" bug in our thread.

F8 is used to calculate write amplification. While it turns out this suggestion is irrelevant to your issue, possibly because the person posting didn't read through the whole thread, I don't see him/her asking for help, only mentioning a bug/behavior. We don't just delete posts like this. Just tell him/her it's unrelated and let's move on. Thanks.
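For anyone following along, the formula used later in this thread to turn those attributes into a write amplification factor is:

WAF = 1 + ΔF8/ΔF7
(F7 = NAND pages written by the host, F8 = NAND pages written by the FTL controller)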
 

fc5

I have already read this thread

The value of C5 in CrystalDiskInfo frequently changes to 1 or 0
When the value of C5 is 1, F8 increases several tens of times faster than usual
 

Lucretia19

I have already read this thread

The value of C5 in CrystalDiskInfo frequently changes to 1 or 0
When the value of C5 is 1, F8 increases several tens of times faster than usual

Are you saying:
F8 ALWAYS increases much faster than usual when C5 is 1.
F8 NEVER increases much faster than usual when C5 is 0.
If that's what you mean, it would be interesting.

According to SMARTCTL.exe (which comes with Smartmontools) C5 is known to be bogus on the MX500. The name that SMARTCTL.exe gives to the C5 attribute is "Bogus_Current_Pend_Sect" and the result of the 'smartctl -a c:' command includes the following: "WARNING: This firmware returns bogus raw values in attribute 197."
 

Lucretia19

The good trend that I wrote about yesterday at 9:49am has continued. For the last 48 hours, all of the 2-hour WAFs have been less than 2. Also, for the last 48 hours, none of the FTL write bursts has exceeded 37,000-ish pages, and the average time between bursts has increased significantly, sometimes taking as long as 6 or 7 hours between bursts.

Crucial tech support has finally agreed to replace the ssd. Since I'm cynical, I'm wondering how I will be able to verify that the replacement drive will really be new. I assume they could take a used ssd and rewrite its counters to zeros, and program it with a new serial number, so that it would appear to be new even though it may actually have a lot of block wear and/or high temperature abuse. Is there a way for a customer to verify whether an ssd is new?
 

Lucretia19

I've modified my ssd monitor .bat file so that it also logs the C5 attribute's digit. It's logging every 5 seconds. Perhaps I'll see a correlation between C5 changing and the FTL write bursts.
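The exact .bat isn't posted in this thread, so the following is only a minimal sketch of what such a logger could look like, assuming smartctl.exe lives in C:\fix_Crucialssd, the ssd is C:, and the raw value is the 10th column of 'smartctl -A' output (the folder, log filename and parsing are my assumptions, not the actual monitor file):

Code:
@echo off
rem  Illustrative SMART logger -- a sketch, not the author's actual monitor .bat
rem  Logs F7 (host pages), F8 (FTL pages) and C5 (pending sectors) every 5 seconds
set "PROG=C:\fix_Crucialssd\smartctl.exe"
set "SSD=C:"
set "LOG=C:\fix_Crucialssd\monitor_log.csv"

:loop
rem  Column 1 of 'smartctl -A' output is the attribute ID, column 10 is the raw value
for /f "tokens=1,10" %%A in ('%PROG% -A %SSD%') do (
    if "%%A"=="197" set "C5=%%B"
    if "%%A"=="247" set "F7=%%B"
    if "%%A"=="248" set "F8=%%B"
)
>>"%LOG%" echo %DATE%,%TIME%,%F7%,%F8%,%C5%
TIMEOUT /t 5 /NOBREAK >nul
goto loop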
 

Lucretia19

Running my ssd monitor .bat file with a 5-second logging interval allowed me to confirm fc5's observation that the brief change of attribute C5 from 0 to 1 to 0 correlates with FTL write bursts. However, C5 goes back to 0 VERY quickly: C5 is 1 in only one of the log entries. So I can't yet determine whether the correlation is perfect. There was another FTL burst during the 3+ hours of logging (10:28am to now), and the log shows C5=0 then, but it's possible that a 1 wasn't recorded then because the burst lasted less than 5 seconds. To have a chance to check whether the correlation is perfect, I would need to reduce the log interval to less than 5 seconds. (I just began running a new log with a 2-second interval, which should be short enough to catch every C5=1 if my guess below is correct. )

Here's a brief excerpt of the monitor log, which shows values recorded before, during and after the brief C5=1 (the rightmost column is C5):
Day  Date        Time         WAF      HostWritesMB  HostPages  FTLPages  C5
Sat  03/07/2020  11:42:37.26  1.51     0.38          27         14        0
Sat  03/07/2020  11:42:42.34  1.5      0.26          22         11        0
Sat  03/07/2020  11:42:47.32  1.52     0.39          34         18        0
Sat  03/07/2020  11:42:52.31  1.46     0.37          26         12        0
Sat  03/07/2020  11:42:57.28  1.41     0.27          24         10        0
Sat  03/07/2020  11:43:02.35  3466.28  0.08          7          24257     1
Sat  03/07/2020  11:43:07.28  680.63   0.3           19         12913     0
Sat  03/07/2020  11:43:12.32  1.8      0.05          5          4         0
Sat  03/07/2020  11:43:17.25  1.02     0.39          34         1         0
Sat  03/07/2020  11:43:22.26  1.1      0.59          38         4         0
Sat  03/07/2020  11:43:27.31  1.04     0.28          25         1         0
Sat  03/07/2020  11:43:32.26  1.75     0.4           28         21        0
I'll guess that C5 becomes 1 at the start of each FTL write burst and returns to 0 at the end of the burst. Since the burst shown above didn't complete within the 5-second interval in which it started, this guess would explain why C5 was 1 when the SMART data was logged at 11:43:02... the burst was ongoing at the moment when the .bat file read the SMART data.

Here's an excerpt from around the other burst (the earlier of the two bursts). It doesn't disprove the guess since this burst appears to have started and finished entirely within a 5-second interval, so no burst was ongoing while the SMART data was read:
Day  Date        Time         WAF      HostWritesMB  HostPages  FTLPages  C5
Sat  03/07/2020  11:02:27.27  1.23     2.13          72         17        0
Sat  03/07/2020  11:02:32.34  3.12     0.14          8          17        0
Sat  03/07/2020  11:02:37.29  2.66     0.17          9          15        0
Sat  03/07/2020  11:02:42.38  1.22     0.25          22         5         0
Sat  03/07/2020  11:02:47.35  1.6      0.26          23         14        0
Sat  03/07/2020  11:02:52.29  4.66     0.04          3          11        0
Sat  03/07/2020  11:02:57.32  5        0             1          4         0
Sat  03/07/2020  11:03:02.54  4126.77  0.17          9          37132     0
Sat  03/07/2020  11:03:07.24  1.5      0.29          26         13        0
Sat  03/07/2020  11:03:12.29  1.16     0.27          24         4         0
Sat  03/07/2020  11:03:17.23  1.02     0.46          36         1         0
Sat  03/07/2020  11:03:22.26  1.16     0.3           25         4         0
Sat  03/07/2020  11:03:27.31  1.28     0.32          28         8         0
Sat  03/07/2020  11:03:32.26  2.07     0.19          13         14        0
Sat  03/07/2020  11:03:37.31  2        0.4           16         16        0

My 2-hour WAF log shows WAF was 3.01 during the 9:23am-11:23am 2-hour period, and was 2.02 during 11:23am-1:23pm. This bump in WAF followed a two-day series of WAFs less than 2. According to HWiNFO, the pc read approximately 100 GB from the ssd between 9:23 and 11:23. A backup of the ssd occurred sometime between 9:23 and 11:23, which explains the 100 GB read, and I think the backup overlapped the 5-second logging. Not sure whether there's a clue here about what causes excessive FTL write bursts; these two bursts might not have been excessive since WAFs of 3.01 and 2.02 seem reasonably small.

I should note that to avoid dividing by zero when calculating WAF, the 5-second monitor has been altering deltaF7 to 1 when it's really 0. The deltaF7=1 shown at 11:02:57 was probably actually 0 (which would agree with the Host MBs Written=0 shown at that time), and the corresponding 5-second WAF was probably actually infinity. (Now that I'm thinking about it, there's a better solution: leave deltaF7 at 0, skip the divide by zero step, and set WAF to a huge constant like 99999999 that will represent infinity.)
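A sketch of that improved guard in the same .bat style (illustrative only; deltaF7 and deltaF8 stand for the per-interval increases of F7 and F8, and cmd.exe arithmetic is integer-only, so the fractional WAFs in the log would need scaled arithmetic that is omitted here):

Code:
if "%deltaF7%"=="0" (
    rem  No host pages written this interval: log an "infinite" WAF instead of dividing by zero
    set "WAF=99999999"
) else (
    set /a "WAF=1 + deltaF8/deltaF7"
)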
 

Lucretia19

To test the correlation of the SMART C5 attribute with the huge FTL write bursts, I ran my Monitor .bat app with a 2-second logging interval for a while. The log data strongly suggests a perfect correlation, where C5 becomes 1 at the start of each huge FTL write burst and returns to 0 at the end of each burst.

The cause & effect that's responsible for the correlation is unclear. Does the controller activity associated with the FTL burst somehow cause C5 to erroneously become 1 without an actual read error involved? Or does the Pending Sector "read error" somehow trigger the FTL burst, perhaps as an attempt by the ssd to compensate for read errors by moving hard-to-read data to other blocks? If the former, it's a bug that seems ridiculous, and the high WAF "bug" would presumably be an independent bug. If the latter, perhaps there's an occasional genuine read error (which might mean Crucial's hardware design pushes the NAND beyond the speed limit of its slowest pages and deals with occasional errors caused by the excessive speed), and the bit that signaled the read error fails to be reset as soon as it should be, causing a series of false positives, to which the controller continues to respond by moving data unnecessarily. If the problem is an error bit that's not reset as soon as it should be, I think it could be caused either by a firmware bug or a hardware design error. (I don't have enough knowledge about the hardware to rule out the possibility of a hardware design error.)

Below is the log, which shows SMART data for the intervals during which FTL Pages Written was 1000 or more. (Serendipitously, the .bat code had a Daylight Savings Time bug that caused extra log entries -- much faster than one per 2 seconds -- at around 3:03am Sunday morning, and those extra entries provide useful, more fine-grained information about the correlation. Nevertheless, I fixed the bug.) The log shows that the C5 "bit" becomes 1 during the first log interval of each huge FTL burst (presumably at the moment when the burst begins) and returns to 0 during the final log interval of each FTL burst (presumably at the moment when the burst ends). This agrees with what we would expect with a perfect correlation.

Day Date Time WAF HostWritesMB HostPages FTLPages C5
Sat 03/07/2020 15:03:00.33 556.00 0.04 3 1665 1
Sat 03/07/2020 15:03:02.33 382.44 0.67 38 14495 1
Sat 03/07/2020 15:03:04.35 470.36 0.64 33 15489 1
Sat 03/07/2020 15:03:06.33 2748.00 0.03 2 5494 0

Sat 03/07/2020 19:00:38.35 102.62 0.35 24 2439 0

Sat 03/07/2020 22:02:58.31 265.54 1.08 37 9788 1
Sat 03/07/2020 22:03:00.30 440.58 1.03 36 15825 1
Sat 03/07/2020 22:03:02.25 605.26 0.22 19 11481 0

Sun 03/08/2020 1:42:58.33 200.81 1.44 69 13787 1
Sun 03/08/2020 1:43:00.32 246.80 1.24 55 13519 1
Sun 03/08/2020 1:43:02.32 615.37 0.18 16 9830 0

Sun 03/08/2020 3:02:57.41 99999.99 0.00 0 2673 1
Sun 03/08/2020 3:02:57.79 1030.00 0.03 2 2058 1
Sun 03/08/2020 3:02:58.13 99999.99 0.00 0 2689 1
Sun 03/08/2020 3:02:58.51 72.54 1.00 33 2361 1
Sun 03/08/2020 3:02:58.85 99999.99 0.00 0 2697 1
Sun 03/08/2020 3:02:59.21 99999.99 0.00 0 2689 1
Sun 03/08/2020 3:02:59.57 99999.99 0.00 0 2697 1
Sun 03/08/2020 3:02:59.91 64.02 1.00 34 2143 1
Sun 03/08/2020 3:03:00.26 338.50 0.05 6 2025 1
Sun 03/08/2020 3:03:00.63 99999.99 0.00 0 2689 1
Sun 03/08/2020 3:03:00.99 99999.99 0.00 0 2697 1
Sun 03/08/2020 3:03:01.36 473.20 0.12 5 2361 1
Sun 03/08/2020 3:03:01.72 99999.99 0.00 0 2689 1
Sun 03/08/2020 3:03:02.10 99999.99 0.00 0 2697 1
Sun 03/08/2020 3:03:02.68 159.15 0.37 13 2056 0

Sun 03/08/2020 5:39:36.27 117.19 0.23 21 2440 0

Sun 03/08/2020 7:22:58.30 3685.00 0.00 1 3684 1
Sun 03/08/2020 7:23:00.28 3239.40 0.06 5 16192 1
Sun 03/08/2020 7:23:02.50 499.24 0.50 29 14449 1
Sun 03/08/2020 7:23:04.25 85.00 0.62 33 2772 0


The log also includes two mini-bursts, each of approximately 2400 FTL pages written, which aren't "huge" FTL bursts. (The other log, which is complete, shows those two entries were for 2-second intervals, not short intervals caused by the Daylight Savings Time bug.) It's unclear whether a different background process causes the mini-bursts, and it's unclear whether C5 briefly becomes 1 during mini-bursts because the mini-bursts are quick enough that they might have completed before the SMART data for the interval was read/logged.

Some of the intervals show WAF = 99999.99, which represents infinity. No Host Pages were written during those very brief intervals.
 

Lucretia19

Interesting result
If the high WAF is due to a firmware bug, I hope the firmware is improved

I sent Crucial tech support an email about the correlation, and requested they forward it to their firmware group. Perhaps it will be a clue that helps them quickly find and fix the bug.

Crucial tech support eventually agreed to exchange the ssd for a new one. This raises my earlier unanswered question: Will it be possible for me to distinguish between a genuinely new ssd and a refurbished ssd that's had its SMART counters and serial number reset? (I wouldn't want to exchange my ssd for one that's in worse condition.)

I haven't yet asked them to proceed with the replacement procedure, since they haven't yet agreed to ship the replacement ssd before I send them the rma ssd. If they don't agree, my pc would be non-functional for days or weeks while I have no ssd... a hardship that I would need to prepare for.
 

Lucretia19

Below is a table that summarizes how effective the ssd selftesting has been at slowing down the loss of ssd Remaining Life. Each row of the table shows data logged on days that Average Block Erase Count increased. (It wasn't until early February that I became systematic enough to log every increment of ABEC. The earliest values were actually days when Remaining Life dropped, and I've assumed ABEC on those days was 15 x (100 - RL%).)

The best measure of the effectiveness of ssd selftesting is probably "ssd life used per TB written by the host pc." That's the focus of this table. The rightmost column shows how ssd life was used up relatively slowly when the ssd was young, then much faster beginning near the end of 2019, and has slowed down again due to the selftesting, which began on Feb 22.

A duty cycle of 19.5 minutes of selftests out of every 20 minutes has been very effective. I began a weeks-long experiment with 19.5m/20 on March 1st. The selftesting was interrupted for only an hour or two, due to Windows update restarts and a weird pc glitch. Late on March 20th I changed the duty cycle to 880 seconds out of every 900 seconds, but after 3.5 days I changed it back to 19.5m/20 because WAF increased significantly. (WAF averaged 6.59 with 880s/900. I have no theory why 880s/900 had that effect; perhaps I'll try it again someday to see whether it was a fluke.)

A nonstop duty cycle was even more effective (during the 20 hours I let it run, on Feb 24); WAF averaged 1.43. However, as noted in an earlier post, I was concerned that nonstop could be risky for the ssd's health because nonstop might prevent a vital ssd background process from getting enough runtime. I have much less concern about the 19.5m/20 duty cycle because a majority of the 30 second "idle" pauses between selftests have very little FTL NAND page writing... in other words, no sign of any pent-up process during a majority of the pauses.
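For reference, the rightmost column is calculated as follows, treating 1 TB as 1024 GB:

Life Used (%) per TB = (ΔABEC / 15) x (1024 / ΔHost Writes in GB)
e.g. the 08/31/2019 row: (15 / 15) x (1024 / 1,772) ≈ 0.58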
Selftests duty cycle, and other comments   Date         Total Host    ABEC   ΔHost Writes   ΔABEC   Life Used (%) per TB
                                                        Writes (GB)          (GB), 1 row    1 row   written by host pc
new                                        07/28/2019   0             0
none                                       08/31/2019   1,772         15     1,772          15      0.58
none                                       12/23/2019   5,782         75     4,010          60      1.02
none                                       01/15/2020   6,172         90     390            15      2.63
none                                       02/04/2020   6,310         105    138            15      7.42
none                                       02/09/2020   6,342         109    32             4       8.53
none                                       02/12/2020   6,365         110    23             1       2.97
none                                       02/14/2020   6,390         111    25             1       2.73
none                                       02/15/2020   6,393         112    3              1       22.76
none                                       02/16/2020   6,404         113    11             1       6.21
none                                       02/18/2020   6,416         114    12             1       5.69
none                                       02/19/2020   6,422         115    6              1       11.38
none                                       02/20/2020   6,429         116    7              1       9.75
none                                       02/22/2020   6,442         117    13             1       5.25
various selftest experiments began 2/22    02/28/2020   6,495         118    53             1       1.29
mostly 19.5m/20                            03/05/2020   6,548         119    53             1       1.29
19.5m/20                                   03/13/2020   6,647         120    99             1       0.69
19.5m/20 and 880s/900                      03/25/2020   6,749         121    102            1       0.67
 

Lucretia19

This is a 90 day update regarding the ssd selftesting regimen begun on 3/01/2020. The selftest duty cycle has been 19.5 minutes of every 20 minutes beginning on 3/01, except for 4 days (3/20 to 3/24) when it was 880 seconds of every 900 seconds, and except for five brief power-off shutdowns.

To summarize: WAF was 2.61 during the 90 days, Remaining Life is decreasing at only 2% per year, and ssd speed is not hurt at all by selftests. I see no reason to stop the selftesting or alter the selftest duty cycle.

1. During the 90 day period the ssd's Write Amplification Factor was 2.61:
Date         S.M.A.R.T. F7   S.M.A.R.T. F8    ΔF7          ΔF8          WAF = 1 + ΔF8/ΔF7
03/01/2020   226,982,040     1,417,227,966
05/30/2020   258,722,752     1,468,308,748    31,740,712   51,080,782   2.61

2. The ssd's Average Block Erase Count (ABEC) reached 119 on 3/05 and reached 125 on 5/16. That's an increase of 6 in 72 days... an average of 1 every 12 days. (At this rate I expect it will reach 126 soon.) Since an increase of 15 in ABEC corresponds to a 1% decrease of the ssd's Remaining Life, it implies Remaining Life is decreasing at a rate of about 2% per year (assuming the rate of ABEC increases holds steady). This corresponds to a 50 year lifespan (if I'd had the foresight to start the selftesting regimen when the ssd was new, and if I'd moved frequently written temporary files from ssd to hard drive when the ssd was nearly new).
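Spelling out that arithmetic:

6 ABEC / 72 days x 365 days ≈ 30 ABEC per year
30 / 15 ≈ 2% of Remaining Life per year  ->  roughly 50 years from 100% to 0%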

3. Using CrystalDiskMark, I measured the ssd speed while a selftest was running, and I conclude that selftests do not hurt speed:
------------------------------------------------------------------------------
CrystalDiskMark 7.0.0 x64 (C) 2007-2019 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
  • MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
  • KB = 1000 bytes, KiB = 1024 bytes

[Read]
Sequential 1MiB (Q= 8, T= 1):   564.091 MB/s [   538.0 IOPS] < 14842.31 us>
Sequential 1MiB (Q= 1, T= 1):   541.069 MB/s [   516.0 IOPS] <  1936.56 us>
Random 4KiB (Q= 32, T=16):      406.167 MB/s [ 99161.9 IOPS] <  5137.78 us>
Random 4KiB (Q= 1, T= 1):        35.740 MB/s [  8725.6 IOPS] <   114.20 us>

[Write]
Sequential 1MiB (Q= 8, T= 1):   520.675 MB/s [   496.6 IOPS] < 16026.44 us>
Sequential 1MiB (Q= 1, T= 1):   499.781 MB/s [   476.6 IOPS] <  2096.14 us>
Random 4KiB (Q= 32, T=16):      380.489 MB/s [ 92892.8 IOPS] <  5489.37 us>
Random 4KiB (Q= 1, T= 1):        90.045 MB/s [ 21983.6 IOPS] <    45.20 us>

Profile: Default
Test: 256 MiB (x3) [Interval: 5 sec] <DefaultAffinity=DISABLED>
Date: 2020/04/18 9:39:21
OS: Windows 10 [10.0 Build 18362] (x64)
------------------------------------------------------------------------------

The measured sequential and random (Q=32) speeds slightly exceed the speeds listed in Crucial's specs ( https://www.crucial.com/products/ssd/crucial-mx500-ssd ) for a fresh-out-of-the-box MX500 500GB 2.5" SATA drive. This means selftesting doesn't hurt speed. (My hunch is that a selftest is a low priority process that the ssd can pause instantly when the host pc requests to read or write. The slight improvement of speed, if not a fluke, might be due to the selftesting causing the ssd to avoid low power mode, eliminating the delay while the ssd switches from low power to normal power... but this is pure speculation.)

4. The ssd's average temperature has ranged from 40C to 43C, depending on the ambient room temperature. (I expect it will increase a little during the summer, since to save money I rarely use the air conditioner and the ambient temperature tends to reach 80F. Currently the ambient temperature is 74F and the ssd is 43C.)

5. The only downside I'm aware of is the approximately 1 watt of extra power consumption that I mentioned in a previous post.

Unless the ssd starts behaving differently, this update will probably be my final post in this thread, unless someone has questions or comments.
 
NamZIX
Hi Lucretia19

I must congratulate you on the extensive testing you have done on the Crucial MX 500 SSD.

I also have this exact same hard drive and I could not understand why the life expectancy was dropping like that, like 8% in 6 months.

Is it possible that you could send me a copy of that .bat file so that I can run it on my computer to solve the problem?

I am not an expert but if you can give me basic guidelines of what I should do then I can enable it on my hard drive.

Looking forward to your assistance and reply, thank you.
 

Lucretia19

@NamZIX:

Do you mean the ssd Remaining Life is now 92%, and that the ssd is 6 months old?

Before you conclude that 8% in 6 months is due to the bug I described, you need to check how many bytes your pc has written to the ssd during those 6 months. Decrease of ssd Remaining Life due to writes by the host pc is normal, since each cell of an ssd can be written only a finite number of times. Crucial's specs say the MX500 500GB endurance is 180 TBytes. Since 8% of 180 TB is approximately 14.4 TB, if your pc has written approximately 14 TB during those 6 months then you're getting what should be expected.

One caveat. I'm uncertain whether the 180 TB spec means bytes written by the host pc, or the sum of the bytes written by the host pc and by the ssd's FTL controller. Both of those numbers -- bytes written by the host pc and bytes written by the FTL controller -- can be displayed by free software such as CrystalDiskInfo or Smartmontools... any software capable of monitoring S.M.A.R.T. attributes. The bug I described causes excessive writes by the FTL controller, causing Remaining Life to decrease much faster than it should decrease. Record those two numbers, and then a few days later record them again to see how much each increased, and let me know the numbers. (The S.M.A.R.T. software will show you NAND pages written rather than bytes written, but that's fine since what matters is the ratio of the two increases. In other words, NAND pages written by the FTL controller should not be much much larger than NAND pages written by the host. During the weeks before I tamed the ssd with my selftests .bat file, the ratio was about 38 to 1. During the months that the selftests have been running, the ratio has been about 1.6 to 1. Note: Crucial defines "Write Amplification Factor" as 1 plus that ratio.)
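In other words, using the two snapshots:

WAF over the period = 1 + (F8 later - F8 earlier) / (F7 later - F7 earlier)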

Here's a simplified version of my .bat file. You would put this file and the smartctl.exe utility of Smartmontools in a folder named C:\fix_Crucialssd and run the .bat file with Administrator privileges. If you put the two files in a different folder, edit the .bat accordingly. If your ssd isn't C:, edit the .bat accordingly. You can use Windows Task Scheduler to have the .bat file start automatically when Windows starts, or when a user logs in. (If it starts when Windows starts, it will be hidden and won't appear in your taskbar.) In the Task Scheduler dialog box, be sure to check the checkbox labeled "Run With Highest Privileges."

Code:
@echo off
rem  Edit PROGDIR variable to be the folder containing smartctl.exe
set "PROGDIR=C:\fix_Crucialssd"

rem  Edit SSD variable, if needed, to be the ID of your Crucial ssd
set "SSD=C:"

rem  For simplicity assume smartctl.exe takes 4 secs to start selftest
set /A "PauseSeconds=26, SelftestSeconds=1170"

set "PROG=%PROGDIR%\smartctl.exe"

rem  Infinite loop:
FOR /L %%G in (0,0,0) do (
   rem  Start a selftest with 5 maximal ranges selected
   %PROG% -t select,0-max -t select,0-max -t select,0-max -t select,0-max -t select,0-max -t force %SSD%
   TIMEOUT /t %SelftestSeconds% /NOBREAK
   rem  Abort the selftest
   %PROG% -X %SSD%
   TIMEOUT /t %PauseSeconds% /NOBREAK
)
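For example, one way to register it with Task Scheduler from an elevated command prompt (the task name and the selftests.bat filename below are placeholders for whatever you actually name the file):

Code:
rem  "MX500 selftests" and selftests.bat are placeholder names -- substitute your own
schtasks /Create /TN "MX500 selftests" /TR "C:\fix_Crucialssd\selftests.bat" /SC ONLOGON /RL HIGHEST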
 
NamZIX
Sorry for not providing more information.
I actually have a 1TB MX500, which should have an endurance of 360 TBytes.

Total TBytes written so far during the 6 month period is only 6.47 TBytes.

My computer stays on 24/7 and is maybe only rebooted once every week.

I also get that error where the "Current Pending Sector Count" jumps from 0 to 1 and then back to 0 in a continuous cycle, about 10 times per day.

I am sure there is definitely a bug in the firmware, as the life expectancy dropping so fast is not normal, and through the tests you have done I am sure my SSD has the same problem.

You can see a screenshot of my drive here from CrystalDiskInfo and Hard Disk Sentinel:

https://drive.google.com/file/d/11jWAAtd13UwiYAdw9x6sB2ZXgAoMht77/view?usp=sharing

https://drive.google.com/file/d/1FRxzfjRYeNH0mHDlEIMcBJyL5-hOaEtr/view?usp=sharing

I tried to recreate your table but I am not sure how to calculate the Total Host Writes and Total Amplified Writes in the table:

Date        Total Host Writes (GB)   S.M.A.R.T. F7   S.M.A.R.T. F8    WAF = (F7+F8)/F7   Total Amplified Writes (GB)
16 Jun 20   ?                        245,460,085     3,666,607,589    15.9376938         ?

Please have a look at these figures and let me know what you think.
 
NamZIX
I also noticed the power on time displayed by Hard Disk Sentinel is not at all correct for the SSD. It now stands at only 76 days, when it should be much longer.

Also just to find out did you send your SSD back to Crucial for replacement?

If they did replace it does your replaced SSD have the same problem?
 

Lucretia19

@NamZIX:

Yes, I'm convinced your ssd has the problem. Your WAF is very high.

Try running the .bat file (and let it keep running). After a few days, take another HD Sentinel snapshot, calculate the increases of the 247 & 248 attributes by subtracting the current 247 & 248 values from the 247 & 248 values a few days from now, and see if the ratio of the two increases is much smaller. The ratio of the increases will be the "WAF over the few days." I recommend you keep track of the date & time of each snapshot. (Perhaps the timestamp of the screencapture files will suffice, if you're careful not to modify the files.)

It appears Hard Disk Sentinel mislabels attribute 247. It should be labeled "host program page count," not "host program sectors count." CrystalDiskInfo has the correct label.

You can set Crystal to display the attributes in base 10 instead of base 16, to make it easier to do the arithmetic. (Note: Crystal labels 247 as "F7" and 248 as "F8" regardless of whether you set it to display in base 10.)

Does HD Sentinel have a logging feature? If you can set it to periodically log the attributes, it might save you some labor. If it can save the log in a format that can be opened by spreadsheet software, such as comma-delimited format (also known as csv format), that might save you even more labor.

On my 500GB drive, one sector is 512 bytes. It appears to be the same for you, since Crystal is also displaying "Total Host Writes = 6625 GB."

If you divide attribute 246 by attribute 247, you'll get the number of sectors per NAND page. It appears to be about 29,000 bytes/page. On my 500GB ssd, it's about 37,000, the same as yours. (EDIT: The 37,000 that I typed earlier is the 1x number of NAND pages of an FTL write burst, not the size of a NAND page. I glitched.)

I appreciate seeing those snapshots. The total bytes that your host pc has written to the ssd during those 6 months, less than 7 TB, is a small amount, and is very similar to the total bytes that my pc had written to my ssd during a similar period of time. In other words, a low average rate of writing from pc to ssd. This is one more data point consistent with my analysis that the Crucial bug is most noticeable for ssds that have a low rate of host writing. I guess most users write a lot more to their ssds, so their Remaining Life decreases only a little faster than it should, and they don't notice there's a problem. For ssds that are more heavily written to, the ratio of 248 to 247 will be smaller because the 247 divisor will be larger.
 

Lucretia19

@NamZIX:

The Power On Hours attribute isn't "wrong" but is poorly named. Normally, the ssd frequently goes into a low power mode, and the time spent in low power mode isn't counted in the Power On Hours. So, 76 days is the amount of time that your ssd was "on but not in low power mode."

A side effect of the selftests is that the ssd will rarely enter low power mode because the ssd will almost always be busy -- doing a selftest or serving the pc -- and Power On Hours will start increasing nearly as fast as expected.

Another side effect of not going into low power mode is that the ssd will consume about 1 watt more power, and run a few degrees warmer.

On the plus side, its temperature will be more stable, which I think is a good thing. And it might actually improve read/write performance a little, by eliminating the delays of switching from low power mode to normal power mode to service read or write requests.

You might want to benchmark the read speed and write speed with and without a selftest running, to see whether the selftest affects your ssd's performance. Crucial uses CrystalDiskMark for their speed specs, using fresh new ssds (which perform faster than non-fresh ssds). Last month I ran CrystalDiskMark with a selftest running, and found it was slightly faster than Crucial's specs. But I didn't run CrystalDiskMark without a selftest running to verify whether the selftest has a positive effect, negative effect, or negligible effect.

No, I haven't sent my ssd for replacement. Several reasons: (1) I presume the replacement would have the same bug. (2) Mine still has 92% Remaining Life, and at the current low rate of writing and with its WAF tamed by the selftests, its Remaining Life won't reach zero for about 40 years. (3) Crucial told me they won't ship the replacement until after they receive my ssd, which means my pc would be inoperative for a week or so, which is unacceptable. (4) I don't trust Crucial; although they said they would send a new replacement, how would I be able to determine whether they send me a used one that has its counts reset to zero to appear new?
 
NamZIX
@Lucretia19

I have managed to get the following figures from Hard Disk Sentinel going back to when I installed the SSD.

As you can see, the WAF increases over time due to the minimal usage of the SSD.

I will keep monitoring the SSD and will start running the .bat file that you provided.

Date   |   S.M.A.R.T. F7   |   S.M.A.R.T. F8   |   WAF = (F7+F8)/F7
21 Aug 19
7,801,228​
1,117,703​
1.14
8 Nov 19
76,874,106​
838,849,777​
11.91
9 Nov 19
77,375,075​
847,727,603​
11.96
10 Nov 19
77,994,442​
861,674,602​
12.05
11 Nov 19
78,709,819​
878,866,754​
12.17
12 Nov 19
79,378,151​
888,982,728​
12.20
13 Nov 19
80,267,626​
889,958,198​
12.09
14 Nov 19
80,645,635​
891,471,533​
12.05
15 Nov 19
81,167,732​
900,337,627​
12.09
16 Nov 19
81,704,630​
913,245,516​
12.18
17 Nov 19
82,300,240​
925,874,318​
12.25
18 Nov 19
82,928,948​
934,471,004​
12.27
19 Nov 19
83,834,954​
935,747,328​
12.16
20 Nov 19
84,425,819​
935,924,431​
12.09
21 Nov 19
84,939,213​
937,391,911​
12.04
22 Nov 19
85,462,809​
942,077,305​
12.02
23 Nov 19
85,947,363​
954,842,682​
12.11
24 Nov 19
86,324,766​
969,465,354​
12.23
25 Nov 19
87,006,953​
977,057,153​
12.23
26 Nov 19
87,429,613​
979,986,585​
12.21
27 Nov 19
87,910,715​
984,554,545​
12.20
28 Nov 19
88,549,642​
985,555,031​
12.13
29 Nov 19
89,176,419​
986,804,261​
12.07
30 Nov 19
89,507,230​
987,594,979​
12.03
1 Dec 19
89,725,778​
988,126,142​
12.01
2 Dec 19
90,180,102​
989,098,541​
11.97
3 Dec 19
90,682,356​
993,331,003​
11.95
4 Dec 19
91,212,414​
1,009,859,755​
12.07
5 Dec 19
91,661,329​
1,016,627,681​
12.09
6 Dec 19
92,286,341​
1,030,435,460​
12.17
7 Dec 19
92,851,923​
1,052,659,245​
12.34
8 Dec 19
93,409,213​
1,075,493,698​
12.51
9 Dec 19
94,319,852​
1,096,407,403​
12.62
10 Dec 19
95,057,635​
1,101,646,625​
12.59
11 Dec 19
96,261,190​
1,110,046,830​
12.53
12 Dec 19
96,760,572​
1,124,748,726​
12.62
13 Dec 19
97,111,901​
1,141,206,197​
12.75
6 Jan 20
97,434,668​
1,141,317,067​
12.71
7 Jan 20
98,162,992​
1,142,534,306​
12.64
8 Jan 20
98,787,149​
1,146,936,836​
12.61
9 Jan 20
99,815,624​
1,157,554,123​
12.60
10 Jan 20
100,422,428​
1,175,603,409​
12.71
11 Jan 20
100,682,927​
1,192,795,598​
12.85
12 Jan 20
100,972,772​
1,205,286,399​
12.94
13 Jan 20
101,470,824​
1,205,480,781​
12.88
14 Jan 20
101,978,930​
1,208,154,327​
12.85
15 Jan 20
102,808,276​
1,217,202,579​
12.84
16 Jan 20
103,256,964​
1,231,302,061​
12.92
17 Jan 20
103,582,034​
1,248,209,446​
13.05
18 Jan 20
103,847,589​
1,261,868,216​
13.15
19 Jan 20
104,095,993​
1,268,138,304​
13.18
20 Jan 20
104,606,825​
1,284,066,604​
13.28
21 Jan 20
105,053,903​
1,302,528,612​
13.40
22 Jan 20
105,442,513​
1,318,970,375​
13.51
23 Jan 20
105,885,273​
1,327,981,593​
13.54
24 Jan 20
106,205,768​
1,333,139,379​
13.55
25 Jan 20
106,539,900​
1,346,552,928​
13.64
26 Jan 20
106,871,445​
1,366,151,936​
13.78
27 Jan 20
107,380,437​
1,383,863,269​
13.89
28 Jan 20
107,839,200​
1,391,228,967​
13.90
29 Jan 20
108,539,172​
1,393,153,909​
13.84
30 Jan 20
109,323,961​
1,397,403,700​
13.78
31 Jan 20
109,619,595​
1,409,788,441​
13.86
1 Feb 20
109,892,358​
1,422,830,795​
13.95
2 Feb 20
110,166,729​
1,436,396,566​
14.04
3 Feb 20
110,533,153​
1,440,930,455​
14.04
4 Feb 20
110,884,028​
1,444,180,437​
14.02
5 Feb 20
111,507,836​
1,458,175,729​
14.08
6 Feb 20
112,097,665​
1,477,570,692​
14.18
7 Feb 20
112,730,494​
1,494,566,722​
14.26
8 Feb 20
113,199,158​
1,508,142,417​
14.32
9 Feb 20
114,159,586​
1,508,584,116​
14.21
10 Feb 20
114,657,288​
1,512,075,572​
14.19
11 Feb 20
115,308,108​
1,520,656,247​
14.19
12 Feb 20
115,783,979​
1,535,879,753​
14.27
13 Feb 20
116,248,692​
1,554,846,160​
14.38
14 Feb 20
116,590,254​
1,570,971,857​
14.47
15 Feb 20
117,009,456​
1,577,584,184​
14.48
16 Feb 20
117,400,558​
1,585,175,180​
14.50
17 Feb 20
117,942,139​
1,597,811,195​
14.55
18 Feb 20
131,155,904​
1,620,156,366​
13.35
19 Feb 20
131,524,829​
1,628,115,214​
13.38
20 Feb 20
131,777,917​
1,639,343,110​
13.44
21 Feb 20
131,923,472​
1,647,759,855​
13.49
22 Feb 20
132,223,236​
1,674,813,939​
13.67
23 Feb 20
134,416,725​
1,692,480,598​
13.59
24 Feb 20
141,254,421​
1,701,066,695​
13.04
25 Feb 20
141,780,918​
1,712,764,195​
13.08
26 Feb 20
142,864,348​
1,731,396,907​
13.12
27 Feb 20
143,741,057​
1,750,717,393​
13.18
28 Feb 20
144,279,685​
1,770,276,163​
13.27
29 Feb 20
144,782,999​
1,784,891,906​
13.33
1 Mar 20
145,232,771​
1,802,562,221​
13.41
2 Mar 20
146,019,362​
1,823,930,222​
13.49
3 Mar 20
147,277,613​
1,843,755,346​
13.52
4 Mar 20
148,392,022​
1,860,212,815​
13.54
5 Mar 20
148,955,046​
1,860,836,259​
13.49
6 Mar 20
150,138,400​
1,867,782,725​
13.44
7 Mar 20
150,562,544​
1,880,166,009​
13.49
8 Mar 20
152,848,502​
1,898,221,498​
13.42
9 Mar 20
153,841,154​
1,917,684,578​
13.47
10 Mar 20
154,630,555​
1,935,053,939​
13.51
11 Mar 20
155,899,907​
1,944,668,444​
13.47
12 Mar 20
157,821,221​
1,949,404,108​
13.35
13 Mar 20
159,507,796​
1,953,506,351​
13.25
14 Mar 20
160,619,992​
1,960,501,146​
13.21
15 Mar 20
161,832,459​
1,977,557,977​
13.22
16 Mar 20
163,316,301​
1,992,980,561​
13.20
17 Mar 20
165,488,070​
2,010,125,839​
13.15
18 Mar 20
167,043,217​
2,027,594,277​
13.14
19 Mar 20
168,303,163​
2,034,211,502​
13.09
20 Mar 20
168,864,286​
2,043,519,915​
13.10
21 Mar 20
170,467,481​
2,060,116,852​
13.09
22 Mar 20
170,772,485​
2,079,007,702​
13.17
23 Mar 20
171,116,140​
2,100,045,808​
13.27
24 Mar 20
171,496,825​
2,103,776,526​
13.27
25 Mar 20
173,859,453​
2,110,851,071​
13.14
26 Mar 20
174,152,514​
2,119,915,421​
13.17
27 Mar 20
174,540,638​
2,135,753,339​
13.24
28 Mar 20
175,036,572​
2,157,522,455​
13.33
29 Mar 20
175,555,585​
2,175,986,707​
13.39
30 Mar 20
176,313,727​
2,186,426,551​
13.40
31 Mar 20
176,694,282​
2,200,851,928​
13.46
1 Apr 20
177,140,112​
2,223,537,421​
13.55
2 Apr 20
177,683,316​
2,244,962,561​
13.63
3 Apr 20
179,295,377​
2,261,882,466​
13.62
4 Apr 20
179,701,078​
2,272,349,966​
13.65
5 Apr 20
180,062,199​
2,288,643,556​
13.71
6 Apr 20
180,515,683​
2,308,215,166​
13.79
7 Apr 20
180,866,243​
2,330,491,078​
13.89
8 Apr 20
182,224,113​
2,347,236,709​
13.88
9 Apr 20
182,600,216​
2,355,317,427​
13.90
10 Apr 20
182,920,003​
2,369,897,149​
13.96
11 Apr 20
183,356,227​
2,387,228,715​
14.02
12 Apr 20
183,688,328​
2,410,394,114​
14.12
13 Apr 20
183,914,681​
2,433,038,725​
14.23
14 Apr 20
184,184,721​
2,447,258,022​
14.29
15 Apr 20
184,885,626​
2,464,273,079​
14.33
16 Apr 20
185,309,874​
2,489,432,415​
14.43
17 Apr 20
187,802,457​
2,511,602,822​
14.37
18 Apr 20
197,149,417​
2,538,871,412​
13.88
19 Apr 20
197,430,074​
2,552,733,036​
13.93
20 Apr 20
198,168,949​
2,576,807,243​
14.00
21 Apr 20
198,509,042​
2,600,302,552​
14.10
22 Apr 20
199,733,764​
2,615,922,504​
14.10
23 Apr 20
200,503,230​
2,629,619,224​
14.12
24 Apr 20
201,409,957​
2,640,457,978​
14.11
25 Apr 20
202,693,148​
2,660,166,333​
14.12
26 Apr 20
203,905,490​
2,682,246,649​
14.15
27 Apr 20
204,337,717​
2,706,305,811​
14.24
28 Apr 20
205,144,631​
2,720,920,585​
14.26
29 Apr 20
205,758,007​
2,731,157,606​
14.27
30 Apr 20
207,093,370​
2,745,899,378​
14.26
1 May 20
208,332,647​
2,762,095,368​
14.26
2 May 20
209,436,230​
2,785,802,746​
14.30
3 May 20
210,452,845​
2,804,144,239​
14.32
4 May 20
211,549,133​
2,816,715,094​
14.31
5 May 20
212,386,642​
2,824,290,958​
14.30
6 May 20
212,793,938​
2,841,513,651​
14.35
7 May 20
213,706,950​
2,864,007,766​
14.40
8 May 20
214,397,540​
2,889,264,804​
14.48
9 May 20
215,106,577​
2,912,250,412​
14.54
10 May 20
215,832,104​
2,919,645,952​
14.53
11 May 20
216,548,106​
2,937,241,425​
14.56
12 May 20
217,423,579​
2,961,534,978​
14.62
13 May 20
218,319,581​
2,987,352,589​
14.68
14 May 20
219,258,186​
3,010,539,974​
14.73
15 May 20
220,281,335​
3,015,034,731​
14.69
16 May 20
221,583,395​
3,028,312,462​
14.67
17 May 20
222,532,102​
3,052,728,151​
14.72
18 May 20
223,202,318​
3,073,506,156​
14.77
19 May 20
223,937,690​
3,098,667,334​
14.84
20 May 20
224,519,211​
3,117,740,186​
14.89
21 May 20
226,897,895​
3,130,192,980​
14.80
22 May 20
227,254,980​
3,154,711,333​
14.88
23 May 20
227,727,278​
3,182,143,692​
14.97
24 May 20
228,205,152​
3,204,580,768​
15.04
25 May 20
228,953,796​
3,227,145,497​
15.10
26 May 20
229,542,409​
3,242,737,436​
15.13
27 May 20
230,165,806​
3,261,135,135​
15.17
28 May 20
230,809,907​
3,278,672,787​
15.21
29 May 20
231,683,382​
3,297,754,652​
15.23
30 May 20
232,393,004​
3,310,275,874​
15.24
31 May 20
232,810,404​
3,334,432,339​
15.32
1 Jun 20
233,492,555​
3,357,038,037​
15.38
2 Jun 20
234,154,928​
3,383,991,075​
15.45
3 Jun 20
234,731,979​
3,406,062,294​
15.51
4 Jun 20
235,420,514​
3,421,351,639​
15.53
5 Jun 20
236,096,655​
3,438,555,439​
15.56
6 Jun 20
236,613,518​
3,462,776,739​
15.63
7 Jun 20
237,166,604​
3,489,487,401​
15.71
8 Jun 20
237,885,571​
3,505,048,622​
15.73
9 Jun 20
238,855,266​
3,517,550,070​
15.73
10 Jun 20
239,485,067​
3,540,997,355​
15.79
11 Jun 20
240,141,739​
3,566,419,459​
15.85
12 Jun 20
240,799,957​
3,595,376,755​
15.93
13 Jun 20
241,635,113​
3,612,891,321​
15.95
14 Jun 20
242,232,473​
3,632,832,725​
16.00
15 Jun 20
243,439,706​
3,653,449,898​
16.01
16 Jun 20
246,220,239​
3,667,957,573​
15.90
 

Lucretia19

@NamZIX: Yes, your ssd has it bad, like mine did. Also, note that those WAF numbers are cumulative numbers, each covering a period of time that goes all the way back to when the drive was new. If you calculate WAF over a more recent period, it's larger. For example, here's the calculation of WAF from May 16 to June 16 that shows your WAF has been nearly 27 during the most recent 31 days:
Date         F7              F8               Increase of F7   Increase of F8   WAF
05/16/2020   221,583,395     3,028,312,462
06/16/2020   246,220,239     3,667,957,573    24,636,844       639,645,111      26.96
Your ssd's WAF is getting worse over time, just as my ssd's WAF was getting worse until I tamed it using selftests. (Day-to-day fluctuations of my ssd's WAF were large. In February before I began experimenting with selftests, one day it was over 90; another day it was slightly below 10.)

You're using the correct formula for WAF, since 1 + F8/F7 equals (F7+F8)/F7.

If your SMART log also includes the Average Block Erase Count attribute, you'll probably see that the number of days between increments of ABEC has been decreasing over time, indicating the problem is getting worse. Each 15 increments of ABEC corresponds to a 1% decrease of Remaining Life.

If your SMART log includes Remaining Life, you'll probably see that its rate of decrease has been accelerating.

I say "probably" because I haven't analyzed your F7 attribute to see whether your host pc's rate of writing to the ssd has been fairly constant. If for some reason the rate of host writing has been decreasing, that would mitigate the effect that the increasing WAF has on ABEC and Remaining Life. But there's obviously a limit to how much you could reduce the writing and still use the ssd productively. You're already writing very little to it (like me).

I congratulate you on having the foresight or good fortune to have logs going back several months. I didn't start tracking any ssd data until I came to believe there was a problem, in December when my ssd was about 4 months old, and it took a couple more months of assistance from the internet community and analysis before I began including in my logs all the relevant attributes.

By the way, the Crucial ssd has another bug, but I think it's unimportant: its "extended" S.M.A.R.T. attribute that's supposed to provide the total number of sectors read by the host pc is only a 32 bit number, and it rolls back to zero (like a car odometer does) each time it reaches its maximum number (a little over 4 billion). Perhaps using your logs you could reconstruct how many times it's rolled back to zero, if for some reason you want to know the true total sectors read.
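If you do want to reconstruct the true total, the arithmetic is simply:

true total sectors read = (number of rollovers) x 4,294,967,296 + the current raw value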
 
fzabkar

By the way, the Crucial ssd has another bug, but I think it's unimportant: its "extended" S.M.A.R.T. attribute that's supposed to provide the total number of sectors read by the host pc is only a 32 bit number, and it rolls back to zero (like a car odometer does) each time it reaches its maximum number (a little over 4 billion). Perhaps using your logs you could reconstruct how many times it's rolled back to zero, if for some reason you want to know the true total sectors read.

Does the same 32-bit figure appear in the Device Statistics log? You can see this log in GSmartControl.
 
You can see the same info with smartctl (GP Log 0x04, page 1):

Code:
smartctl -l gplog,4,1 /dev/ice
smartctl -l smartlog,4,1 /dev/ice

Offset 0x28 is the number of Logical Sectors Read. The data size is 6 bytes.

This is what GSmartControl produces:

Code:
Device Statistics (GP Log 0x04)
Page  Offset Size        Value Flags Description
0x01  =====  =               =  ===  == General Statistics (rev 1) ==
0x01  0x008  4              22  ---  Lifetime Power-On Resets
0x01  0x010  4             542  ---  Power-on Hours
0x01  0x018  6       555153494  ---  Logical Sectors Written
0x01  0x020  6         6759607  ---  Number of Write Commands
0x01  0x028  6      2276610747  ---  Logical Sectors Read
0x01  0x030  6        15562928  ---  Number of Read Commands
0x01  0x038  6          730000  ---  Date and Time TimeStamp
0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
0x05  0x008  1              25  ---  Current Temperature
0x05  0x020  1              40  ---  Highest Temperature
0x05  0x028  1              21  ---  Lowest Temperature
0x05  0x058  1              70  ---  Specified Maximum Operating Temperature
0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
0x06  0x008  4             328  ---  Number of Hardware Resets
0x06  0x010  4               0  ---  Number of ASR Events
0x06  0x018  4               0  ---  Number of Interface CRC Errors
0x07  =====  =               =  ===  == Solid State Device Statistics (rev 1) ==
0x07  0x008  1               0  N--  Percentage Used Endurance Indicator
                                |||_ C monitored condition met
                                ||__ D supports DSN
                                |___ N normalized value
 

Lucretia19

@fzabkar: The smartctl -x command includes the Total Sectors Read 32-bit value in its results, which is how my logging .bat obtains it and all the other SMART attributes of interest. I don't understand why you asked me to check GSmartControl, if you're satisfied that it reports the same value that smartctl does.


That user got his/her money's worth: approximately 230 TB written to an MX300 ssd that has a 220 TB endurance rating. If my arithmetic is correct, the host pc wrote about 198 TB and the FTL controller wrote about 32 TB. An excellent WAF, perhaps because that ssd had a high rate of host writing: 198 TB in about 3 years. I don't know whether the MX300 can enter a low power mode the way the MX500 can, but assuming so, that user's 25130 PowerOnHours suggests the ssd was being written and/or read nearly non-stop.