Question: Crucial MX500 500GB SATA SSD - Remaining Life decreasing fast despite only a few bytes being written to it?


Lucretia19

The Remaining Life (RL) of my Crucial MX500 ssd has been decreasing rapidly, even though the pc doesn't write much to it. Below is the log I began keeping after I noticed RL reached 95% after about 6 months of use.

Assuming RL truly depends on bytes written, the decrease in RL is accelerating and something is very wrong. The latest decrease in RL, from 94% to 93%, occurred after writing only 138 GB in 20 days.

Notes:
1. After RL reached 95%, I took some steps to reduce "unnecessary" writes to the ssd by moving some frequently written files to a hard drive, for example the Firefox profile folder. That's why only 528 GB have been written to the ssd since Dec 23rd, even though the pc is set to Never Sleep and is always powered on.
2. After the pc and ssd were about 2 months old, around September, I changed the pc's power profile so it would Never Sleep.
3. The ssd still has a lot of free space; only 111 GB of its 500 GB capacity is occupied.
4. Three different software utilities agree on the numbers: Crucial's Storage Executive, HWiNFO64, and CrystalDiskInfo.
5. Storage Executive also shows that Total Bytes Written isn't much greater than Total Host Writes, implying write amplification hasn't been a significant factor.

My understanding is that Remaining Life is supposed to depend on bytes written, but it looks more like the drive reports a value that depends mainly on its powered-on hours. Can someone explain what's happening? Am I misinterpreting the meaning of Remaining Life? Isn't it essentially a synonym for endurance?


Crucial MX500 500GB SSD in desktop pc since summer 2019

Date          Remaining Life    Total Host Writes (GB)    Host Writes (GB) Since Previous Drop
12/23/2019    95%               5,782                     -
01/15/2020    94%               6,172                     390
02/04/2020    93%               6,310                     138
 

Lucretia19

Exactly. 5 years old.
Purchased Aug 6 2016, running 24/7 as the OS drive since then.

Yes, your Samsung ssd compares very favorably to what my Crucial MX500 ssd was doing before I intervened with the selftests regime. Your Samsung has dropped to 94% while your pc has written 68 TB to it. My MX500 dropped to 93% in its first 6 months, during which the pc wrote about 6.2 TB. The bigger issue was that, during the last 6 weeks of those 6 months, it dropped from 95% to 93% while the pc wrote only 528 GB... and the problem was getting worse: during the last 3 weeks of those 6 months it dropped from 94% to 93% while the pc wrote only 138 GB.

Your question about ssds that "actually died early" is overly narrow. MX500 ssds have been on the market for only a few years. My MX500 is too new to fit your narrow category, but it's obvious that at the rate it was losing life (before I intervened with selftests) it would eventually suffer a very early death, far short of its 180 TB endurance spec.
 

Lucretia19

Delta F8 on its own is indeed sufficient as an indicator of the FTL write bursts, but when I first created the code for plotting the data I was interested in assessing the impact this issue had on write amplification. At the moment these spikes occur only occasionally, but over time, with no power cycling, they can become very frequent.

Yep, this seemed to be my case too. On the graph from data logged in 2019 that I posted in comment #162, where I converted program pages to MiB written, it can be seen that 1 GiB spikes were much more frequent than 2 GiB ones, and so on.

I didn't want to flood the forum with CSV data, but here is a selection according to such criteria (tab-separated values):

Code:
    datetime    current_pending_ecc_count    ftl_program_page_count_delta
30360    2021-07-28 05:36:22.942370    0    9.0
30361    2021-07-28 05:36:23.988706    0    3.0
30362    2021-07-28 05:36:25.032582    0    3.0
30363    2021-07-28 05:36:26.064069    1    323.0
30364    2021-07-28 05:36:27.095279    1    3.0
30365    2021-07-28 05:36:28.128490    1    1362.0
30366    2021-07-28 05:36:29.166417    1    339.0
30367    2021-07-28 05:36:30.204974    1    675.0
30368    2021-07-28 05:36:31.250364    1    339.0
30369    2021-07-28 05:36:32.296960    1    339.0
30370    2021-07-28 05:36:33.328545    1    2381.0
30371    2021-07-28 05:36:34.366672    1    339.0
30372    2021-07-28 05:36:35.413855    1    339.0
30373    2021-07-28 05:36:36.447333    1    3.0
30374    2021-07-28 05:36:37.485689    1    339.0
30375    2021-07-28 05:36:38.519172    1    1011.0
-snip-

I'm shocked by the very long durations of those four write bursts: 77 seconds, 69 seconds, 66 seconds, and 117 seconds (totaling about 330 seconds). I didn't realize that what I was asking for would result in a "flood" of csv data, because on my ssd the durations are much shorter, typically 5 seconds. On my ssd, I think I've never seen a burst that lasted longer than about 40 seconds.

I'm also surprised by the intermittent "deltaF8=3" rows while C5=1. Perhaps the write bursts were occasionally interrupted by a higher priority process, such as the pc requesting to read a lot of data.

I pasted those rows into a spreadsheet and summed the deltaF8 column. The sum of the four bursts is 186649 NAND pages, written over 330 seconds. That implies the data rate of your write bursts is much slower than mine... mine writes about 37000 pages in 5 seconds, and would take only about 25 seconds to write that amount. Why the big difference? Maybe we should also be looking at the host pc's reads and writes during the bursts, to try to determine whether higher priority processes are slowing your bursts' writing speed. Or perhaps the frequent smartctl requests (one per second) are having a bigger effect than just 3 extra NAND pages written; you could try reducing the logging rate to once every 5 seconds to see if that greatly increases the bursts' writing speed.
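
For anyone who wants to repeat that tally without a spreadsheet, below is a rough Python/pandas sketch of the same arithmetic. It's only an illustration: the filename is a placeholder, and it assumes each burst can be identified as a run of consecutive rows with current_pending_ecc_count = 1, which matches the excerpt above but may not hold in general.

Code:
import pandas as pd

# Load the tab-separated excerpt; the first (unnamed) column is the sample index.
df = pd.read_csv("ftl_log.tsv", sep="\t", index_col=0, parse_dates=["datetime"])

# Treat each run of consecutive C5=1 rows as one burst.
c5 = df["current_pending_ecc_count"] == 1
burst_id = (c5 != c5.shift()).cumsum()

for _, burst in df[c5].groupby(burst_id[c5]):
    pages = burst["ftl_program_page_count_delta"].sum()
    seconds = (burst["datetime"].iloc[-1] - burst["datetime"].iloc[0]).total_seconds()
    print(f"{pages:.0f} pages over {seconds:.0f} s  (~{pages / max(seconds, 1):.0f} pages/s)")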
 

solidstatebrain

I'm shocked by the very long durations of those four write bursts: 77 seconds, 69 seconds, 66 seconds, and 117 seconds (totaling about 330 seconds). I didn't realize that what I was asking for would result in a "flood" of csv data, because on my ssd the durations are much shorter, typically 5 seconds. On my ssd, I think I've never seen a burst that lasted longer than about 40 seconds.
Possibly their duration will depend on background/SSD activity.

I'm also surprised by the intermittent "deltaF8=3" rows while C5=1. Perhaps the write bursts were occasionally interrupted by a higher priority process, such as the pc requesting to read a lot of data.
I noticed that every time smartctl is invoked, the F8 attribute increases by 3 units. This also occurs when no partitions from the SSD are mounted (thus no host writes). So the logging process itself is also affecting the write amplification calculation somewhat. At a rate of 3 FTL pages written per second, that amounts to about 8 GB/day.
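
As a rough cross-check of that figure (using the approximate conversion of about 37,000 NAND pages per GiB that comes up elsewhere in this thread, which is an assumption on my part):

Code:
pages_per_second = 3        # extra F8 pages per smartctl call, one call per second
pages_per_gib = 37_000      # approximate pages-per-GiB conversion (assumption)

gib_per_day = pages_per_second * 86_400 / pages_per_gib
print(f"{gib_per_day:.1f} GiB/day")   # ~7 GiB/day, i.e. roughly 7.5 GB/day, close to the 8 GB/day above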

I pasted those rows into a spreadsheet and summed the deltaF8 column. [...] perhaps the frequent smartctl requests (one per second) are having a bigger effect than just 3 extra NAND pages written; you could try reducing the logging rate to once every 5 seconds to see if that greatly increases the bursts' writing speed.
I too believe that the frequent smartctl requests are having a negative impact. I could create a new log file with them reduced to one every 5 seconds. I'm also confident this will decrease the measured write amplification.
 

Lucretia19

-snip-
I noticed that every time smartctl is invoked, the F8 attribute increases by 3 units. This also occurs when no partitions from the SSD are mounted (thus no host writes). So the logging process itself is also affecting the write amplification calculation somewhat. At a rate of 3 FTL pages written per second, that amounts to about 8 GB/day.

I wasn't commenting about the particular value=3; I was commenting about the tiny amount of writing that appears to happen intermittently while C5=1, when I would expect the bug to cause a reasonably constant high rate of writing (based on my observations of my own ssd).

8 GB/day is small but not negligible. I wouldn't want to log at that rate forever. (The only time I log once per second is during the 30 second pauses between selftests, which occur only once every 20 minutes. My other invocations of smartctl have much longer periods: every 20 minutes, every 2 hours, and daily.)

For the last 3+ hours, I logged once per second. Below are several lines of the log, which include a write burst that lasted about 5.5 seconds. (Note: 999999.99 represents infinite WAF, where deltaF7=0). DeltaF8 is the next-to-last column, and C5 is the last column. During the write burst, deltaF8 averaged about 6700 NAND pages per second, totaling 37190 pages. When it wasn't write-bursting, DeltaF8 averaged about 13 pages per second. (I don't know whether 3 of those ~13 pages were triggered by the logger's use of smartctl.)
Code:
Date,Time,TotalHostSectorsRd,TotalHostSectorsWr,TotalHostWrGB,TotalHostWrPages,TotalFTLPages,PowerOnHours,ABEC,PowerCycles, WAF, HostReadsMB,HostWritesMB,HostPages,FTLPages,C5
07/28/2021,9:03:09.37,1146777380,19406648286,9253,338796807,1675315478,13269,148,185, 4.25, 0.00,0.09,4,13,0
07/28/2021,9:03:10.40,1146777380,19406648518,9253,338796813,1675315494,13269,148,185, 3.66, 0.00,0.11,6,16,0
07/28/2021,9:03:11.33,1146777380,19406648518,9253,338796813,1675315508,13269,148,185, 999999.99, 0.00,0.00,0,14,0
07/28/2021,9:03:12.37,1146777380,19406648798,9253,338796818,1675315524,13269,148,185, 4.20, 0.00,0.13,5,16,0
07/28/2021,9:03:13.41,1146777380,19406648870,9253,338796820,1675315539,13269,148,185, 8.50, 0.00,0.03,2,15,0
07/28/2021,9:03:14.29,1146777380,19406648870,9253,338796820,1675315552,13269,148,185, 999999.99, 0.00,0.00,0,13,0
07/28/2021,9:03:15.27,1146777380,19406649558,9253,338796834,1675322316,13269,148,185, 484.14, 0.00,0.33,14,6764,1
07/28/2021,9:03:16.31,1146777460,19406650030,9253,338796843,1675329064,13269,148,185, 750.77, 0.03,0.23,9,6748,1
07/28/2021,9:03:17.33,1146777492,19406650582,9253,338796854,1675334796,13269,148,185, 522.09, 0.01,0.26,11,5732,1
07/28/2021,9:03:18.34,1146777492,19406650582,9253,338796854,1675342552,13269,148,185, 999999.99, 0.00,0.00,0,7756,1
07/28/2021,9:03:19.37,1146777492,19406650950,9253,338796861,1675349636,13269,148,185, 1013.00, 0.00,0.17,7,7084,1
07/28/2021,9:03:20.35,1146777492,19406651350,9253,338796869,1675352742,13269,148,185, 389.25, 0.00,0.19,8,3106,0
07/28/2021,9:03:21.33,1146777492,19406651814,9253,338796878,1675352753,13269,148,185, 2.22, 0.00,0.22,9,11,0
07/28/2021,9:03:22.31,1146777492,19406651854,9253,338796881,1675352764,13269,148,185, 4.66, 0.00,0.01,3,11,0
07/28/2021,9:03:23.28,1146777492,19406651965,9253,338796883,1675352775,13269,148,185, 6.50, 0.00,0.05,2,11,0
07/28/2021,9:03:24.36,1146777492,19406652165,9253,338796887,1675352786,13269,148,185, 3.75, 0.00,0.09,4,11,0
07/28/2021,9:03:25.33,1146777524,19406652753,9253,338796898,1675352800,13269,148,185, 2.27, 0.01,0.28,11,14,0
07/28/2021,9:03:26.31,1146777524,19406652753,9253,338796898,1675352811,13269,148,185, 999999.99, 0.00,0.00,0,11,0
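
For clarity, the WAF column in each row works out to (HostPages + FTLPages) / HostPages, with 999999.99 substituted when HostPages is zero. In Python terms (a sketch of the calculation, not my actual logger code):

Code:
def sample_waf(delta_host_pages: int, delta_ftl_pages: int) -> float:
    # Per-sample write amplification: total NAND pages written / host pages written.
    if delta_host_pages == 0:
        return 999999.99      # sentinel standing in for "infinite" WAF
    return (delta_host_pages + delta_ftl_pages) / delta_host_pages

print(round(sample_waf(14, 6764), 2))   # 484.14, matching the 9:03:15 row
print(round(sample_waf(4, 13), 2))      # 4.25, matching the 9:03:09 row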
 

solidstatebrain

I wasn't commenting about the particular value=3; I was commenting about the tiny amount of writing that appears to happen intermittently while C5=1, when I would expect the bug to cause a reasonably constant high rate of writing (based on my observations of my own ssd).

Gotcha. I don't know how to explain that. It might depend on a number of factors, but I would need the SSD to be in the "right" state again to be sure.

For the last 3+ hours, I logged once per second. Below are several lines of the log, which include a write burst that lasted about 5.5 seconds. (Note: 999999.99 represents infinite WAF, where deltaF7=0). DeltaF8 is the next-to-last column, and C5 is the last column. During the write burst, deltaF8 averaged about 6700 NAND pages per second, totaling 37190 pages. When it wasn't write-bursting, DeltaF8 averaged about 13 pages per second. (I don't know whether 3 of those ~13 pages were triggered by the logger's use of smartctl.)

If your SSD is not a system storage drive and does not contain a pagefile, you can remove its drive letter(s) in Disk Management (if you're using Windows) so that host writes will most likely not occur at all.

By the way, Crucial calls the 197/C5 attribute "Current Pending ECC Count" in Crucial Storage Executive; I did not come up with the name on my own.

[screenshot: Crucial Storage Executive listing SMART attribute 197 as "Current Pending ECC Count"]


As for logging at one sample every 5 seconds, at the moment the situation is like this. I'm not sure if the burst after 18:45 was due to the issue of this thread (EDIT: but upon checking, about 37600 FTL pages were written in that period):

[graph: FTL program page deltas logged every 5 seconds, including the burst after 18:45]


EDIT: from the corresponding data, those 37600 FTL pages took about 2 minutes 10 seconds to write.

Code:
index	datetime	current_pending_ecc_count	host_program_page_count_delta	ftl_program_page_count_delta	waf_delta
1777	2021-07-28 18:46:08	0	83	33	1.398
1778	2021-07-28 18:46:13	0	31	14	1.452
1779	2021-07-28 18:46:18	0	16	3	1.188
1780	2021-07-28 18:46:24	0	31	14	1.452
1781	2021-07-28 18:46:29	0	23	3	1.130
1782	2021-07-28 18:46:34	0	6	13	3.167
1783	2021-07-28 18:46:39	0	20	3	1.150
1784	2021-07-28 18:46:44	0	30	11	1.367
1785	2021-07-28 18:46:49	0	9	12	2.333
1786	2021-07-28 18:46:54	0	57	10	1.175
1787	2021-07-28 18:46:59	0	32	719	23.469
1788	2021-07-28 18:47:04	0	5	955	192.000
1789	2021-07-28 18:47:09	0	16	963	61.188
1790	2021-07-28 18:47:14	0	49	3131	64.898
1791	2021-07-28 18:47:19	0	20	1529	77.450
1792	2021-07-28 18:47:24	0	17	1985	117.765
1793	2021-07-28 18:47:29	0	11	1251	114.727
1794	2021-07-28 18:47:34	0	13	1662	128.846
1795	2021-07-28 18:47:39	0	2	587	294.500
1796	2021-07-28 18:47:44	0	9	1130	126.556
1797	2021-07-28 18:47:49	0	34	1923	57.559
1798	2021-07-28 18:47:54	0	10	1045	105.500
1799	2021-07-28 18:47:59	0	6	595	100.167
1800	2021-07-28 18:48:04	0	8	874	110.250
1801	2021-07-28 18:48:09	0	23	1209	53.565
1802	2021-07-28 18:48:14	0	7	853	122.857
1803	2021-07-28 18:48:19	0	19	973	52.211
1804	2021-07-28 18:48:24	0	15	1171	79.067
1805	2021-07-28 18:48:30	0	14	884	64.143
1806	2021-07-28 18:48:35	0	11	1171	107.455
1807	2021-07-28 18:48:40	0	30	2207	74.567
1808	2021-07-28 18:48:45	0	1520	2512	2.653
1809	2021-07-28 18:48:50	0	11	1755	160.545
1810	2021-07-28 18:48:55	0	18	2065	115.722
1811	2021-07-28 18:49:00	0	8	1469	184.625
1812	2021-07-28 18:49:05	0	15	2722	182.467
1813	2021-07-28 18:49:10	0	23	253	12.000
1814	2021-07-28 18:49:15	0	17	12	1.706
1815	2021-07-28 18:49:20	0	6	9	2.500
1816	2021-07-28 18:49:25	0	8	10	2.250
1817	2021-07-28 18:49:30	0	24	9	1.375
1818	2021-07-28 18:49:35	0	6	11	2.833
1819	2021-07-28 18:49:40	0	13	9	1.692
1820	2021-07-28 18:49:45	0	128	9	1.070
1821	2021-07-28 18:49:50	0	91	14	1.154
1822	2021-07-28 18:49:55	0	35	18	1.514
 

solidstatebrain

Here are three other "ECC events" (to go by the name Crucial implies with the corresponding SMART attribute) from this morning, each taking about 60 seconds to complete and causing about 1 GiB of internal writes (about 37000 FTL program pages). This is with logging at a rate of 1 sample per 5 seconds.

Perhaps as more of these events occur, their speed will increase.

[graphs: the three C5=1 "ECC events" from this morning, logged at 1 sample per 5 seconds]


Code:
index	datetime	current_pending_ecc_count	ftl_program_page_count_delta
9322	2021-07-29 05:20:18.674943	0	10.0
9323	2021-07-29 05:20:23.720197	0	9.0
9324	2021-07-29 05:20:28.757800	0	11.0
9325	2021-07-29 05:20:33.789992	0	9.0
9326	2021-07-29 05:20:38.822793	0	3.0
9327	2021-07-29 05:20:43.863629	0	9.0
9328	2021-07-29 05:20:48.901142	0	15.0
9329	2021-07-29 05:20:53.933639	0	18.0
9330	2021-07-29 05:20:58.976850	0	3.0
9331	2021-07-29 05:21:04.013313	1	3048.0
9332	2021-07-29 05:21:09.059462	1	2736.0
9333	2021-07-29 05:21:14.106161	1	4069.0
9334	2021-07-29 05:21:19.154728	1	6088.0
9335	2021-07-29 05:21:24.197825	1	1379.0
9336	2021-07-29 05:21:29.230252	1	1347.0
9337	2021-07-29 05:21:34.276133	1	1371.0
9338	2021-07-29 05:21:39.318592	1	2027.0
9339	2021-07-29 05:21:44.432909	1	3684.0
9340	2021-07-29 05:21:49.479238	1	2395.0
9341	2021-07-29 05:21:54.524467	1	1038.0
9342	2021-07-29 05:21:59.568778	1	699.0
9343	2021-07-29 05:22:04.610600	1	1011.0
9344	2021-07-29 05:22:09.657194	1	2179.0
9345	2021-07-29 05:22:14.702403	1	1060.0
9346	2021-07-29 05:22:19.739706	1	2704.0
9347	2021-07-29 05:22:24.783571	0	549.0
9348	2021-07-29 05:22:29.820968	0	9.0
9349	2021-07-29 05:22:34.863895	0	3.0
9350	2021-07-29 05:22:39.897676	0	10.0
9351	2021-07-29 05:22:44.929653	0	9.0
9352	2021-07-29 05:22:49.966075	0	9.0
9353	2021-07-29 05:22:55.008279	0	10.0
9354	2021-07-29 05:23:00.058335	0	9.0
9355	2021-07-29 05:23:05.102490	0	3.0
9356	2021-07-29 05:23:10.150694	0	3.0
9848	2021-07-29 06:04:30.173203	0	11.0
9849	2021-07-29 06:04:35.211869	0	9.0
9850	2021-07-29 06:04:40.248076	0	9.0
9851	2021-07-29 06:04:45.295060	0	10.0
9852	2021-07-29 06:04:50.341941	0	3.0
9853	2021-07-29 06:04:55.386083	0	9.0
9854	2021-07-29 06:05:00.423477	0	10.0
9855	2021-07-29 06:05:05.463413	0	9.0
9856	2021-07-29 06:05:10.504414	0	9.0
9857	2021-07-29 06:05:15.721547	1	2019.0
9858	2021-07-29 06:05:20.771842	1	3056.0
9859	2021-07-29 06:05:25.808706	1	690.0
9860	2021-07-29 06:05:30.852673	1	2716.0
9861	2021-07-29 06:05:35.892506	1	4399.0
9862	2021-07-29 06:05:40.935454	1	1702.0
9863	2021-07-29 06:05:46.037926	1	1700.0
9864	2021-07-29 06:05:51.078292	1	2699.0
9865	2021-07-29 06:05:56.119795	1	1033.0
9866	2021-07-29 06:06:01.162430	1	1699.0
9867	2021-07-29 06:06:06.443340	1	6416.0
9868	2021-07-29 06:06:11.488292	1	6091.0
9869	2021-07-29 06:06:16.525745	0	3037.0
9870	2021-07-29 06:06:21.566963	0	9.0
9871	2021-07-29 06:06:26.605087	0	12.0
9872	2021-07-29 06:06:31.638833	0	9.0
9873	2021-07-29 06:06:36.680787	0	9.0
9874	2021-07-29 06:06:41.721689	0	11.0
9875	2021-07-29 06:06:46.760465	0	9.0
9876	2021-07-29 06:06:51.801929	0	9.0
9877	2021-07-29 06:06:56.843108	0	10.0
9878	2021-07-29 06:07:01.891540	0	9.0
10374	2021-07-29 06:48:42.455427	0	13.0
10375	2021-07-29 06:48:47.494758	0	11.0
10376	2021-07-29 06:48:52.542967	0	13.0
10377	2021-07-29 06:48:57.589558	0	10.0
10378	2021-07-29 06:49:02.640418	0	10.0
10379	2021-07-29 06:49:07.683762	0	9.0
10380	2021-07-29 06:49:12.723522	0	11.0
10381	2021-07-29 06:49:17.768311	0	9.0
10382	2021-07-29 06:49:22.805127	0	9.0
10383	2021-07-29 06:49:27.845880	1	1695.0
10384	2021-07-29 06:49:32.888948	1	2373.0
10385	2021-07-29 06:49:37.930995	1	1371.0
10386	2021-07-29 06:49:42.975365	1	2038.0
10387	2021-07-29 06:49:48.014407	1	1708.0
10388	2021-07-29 06:49:53.046129	1	2355.0
10389	2021-07-29 06:49:58.097670	1	3062.0
10390	2021-07-29 06:50:03.142722	1	691.0
10391	2021-07-29 06:50:08.186289	1	1708.0
10392	2021-07-29 06:50:13.236390	1	1699.0
10393	2021-07-29 06:50:18.356709	1	3047.0
10394	2021-07-29 06:50:23.405962	1	2375.0
10395	2021-07-29 06:50:28.452447	1	3391.0
10396	2021-07-29 06:50:33.485415	1	1369.0
10397	2021-07-29 06:50:38.517519	1	3057.0
10398	2021-07-29 06:50:43.549746	1	4050.0
10399	2021-07-29 06:50:48.583866	0	1318.0
10400	2021-07-29 06:50:53.620672	0	3.0
10401	2021-07-29 06:50:58.665097	0	12.0
10402	2021-07-29 06:51:03.703553	0	12.0
10403	2021-07-29 06:51:08.743690	0	10.0
10404	2021-07-29 06:51:13.783770	0	13.0
10405	2021-07-29 06:51:18.819859	0	10.0
10406	2021-07-29 06:51:23.870072	0	11.0
10407	2021-07-29 06:51:28.901860	0	9.0
10408	2021-07-29 06:51:33.940063	0	9.0
 

Lucretia19

-snip-
Here are three other "ECC events" (to go by the name Crucial implies with the corresponding SMART attribute) from this morning, each taking about 60 seconds to complete and causing about 1 GiB of internal writes (about 37000 FTL program pages). This is with logging at a rate of 1 sample per 5 seconds.
-snip-
Code:
index    datetime    current_pending_ecc_count    ftl_program_page_count_delta
-snip-
9328    2021-07-29 05:20:48.901142    0    15.0
9329    2021-07-29 05:20:53.933639    0    18.0
9330    2021-07-29 05:20:58.976850    0    3.0
9331    2021-07-29 05:21:04.013313    1    3048.0
9332    2021-07-29 05:21:09.059462    1    2736.0
9333    2021-07-29 05:21:14.106161    1    4069.0
9334    2021-07-29 05:21:19.154728    1    6088.0
9335    2021-07-29 05:21:24.197825    1    1379.0
9336    2021-07-29 05:21:29.230252    1    1347.0
9337    2021-07-29 05:21:34.276133    1    1371.0
9338    2021-07-29 05:21:39.318592    1    2027.0
9339    2021-07-29 05:21:44.432909    1    3684.0
9340    2021-07-29 05:21:49.479238    1    2395.0
9341    2021-07-29 05:21:54.524467    1    1038.0
9342    2021-07-29 05:21:59.568778    1    699.0
9343    2021-07-29 05:22:04.610600    1    1011.0
9344    2021-07-29 05:22:09.657194    1    2179.0
9345    2021-07-29 05:22:14.702403    1    1060.0
9346    2021-07-29 05:22:19.739706    1    2704.0
9347    2021-07-29 05:22:24.783571    0    549.0
9348    2021-07-29 05:22:29.820968    0    9.0
9349    2021-07-29 05:22:34.863895    0    3.0
9350    2021-07-29 05:22:39.897676    0    10.0
-snip-

The fastest speed logged during your three F8 bursts is 6416 NAND pages per 5 seconds. (And the average speed while C5=1 is much less: 2433 per 5s.) 6416 per 5 seconds is only about 1/6th of the fastest F8 speed on my ssd, 7756/second in my previous post. 6416/5s is also only about 1/4th of my ssd's average C5=1 F8 speed, and your 2433/5s average C5=1 speed is only about 1/10th my ssd's average C5=1 speed.

It would be nice to understand why your ssd's C5=1 F8 speed is so much slower than mine. Could you also enable simultaneous logging of host pc reads & writes to/from the ssd -- perhaps using a logging-capable drive monitor app such as HWiNFO or HDSentinel -- so that you could check whether (higher priority) host activity is greatly slowing down the C5=1 F8 speed? (Perhaps a high sample rate log of deltaF7 would be a reasonable way to log the host writes, but perhaps not... I presume writing to NAND in SLC mode is a high priority task that would delay F8 writing, but does F7 include writes to NAND in SLC mode or does it only count writes to NAND in TLC mode?)
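
(If you'd rather not run another monitoring app, a small script could log the host I/O counters next to your smartctl samples. Below is a minimal sketch using Python's psutil package; the device key is a placeholder, e.g. "PhysicalDrive0" on Windows or "sda" on Linux, and the 1-second interval is arbitrary.)

Code:
import time
import psutil

DISK = "sda"   # placeholder: the key for the SSD in psutil's per-disk counters

prev = psutil.disk_io_counters(perdisk=True)[DISK]
while True:
    time.sleep(1)
    cur = psutil.disk_io_counters(perdisk=True)[DISK]
    print(time.strftime("%H:%M:%S"),
          "read_bytes_delta:", cur.read_bytes - prev.read_bytes,
          "write_bytes_delta:", cur.write_bytes - prev.write_bytes)
    prev = cur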

[UPDATED ABOUT AN HOUR LATER: Instead of suggesting you simultaneously log host pc ssd activity, I should have made a much simpler suggestion: Just tell us how much reading and writing your pc typically does over a period of about a day. Only if there's a large amount of activity, compared to my pc, would it then make sense to drill deeper with simultaneous logging. (I'll check your earlier posts to see if you already posted any measures of host activity.) SECOND UPDATE A FEW MINUTES AFTER THE FIRST: I reviewed your earlier posts, and saw none that indicate how much ssd reading your pc is doing. One of the graphs indicates -- if I'm correctly interpreting it -- your pc is writing a little over 100,000 pages per minute on average, which is about 3 GB/minute, which is 3 orders of magnitude more than my pc writes: 2 MB/minute. (For completeness, my pc is also reading about 9 MB/minute.) Can you confirm your pc is writing about 3GB/minute? That seems incredibly high, but perhaps it will settle down after the recent Windows 10 installation decides it's finished updating itself. End of updates.]

Now that I'm thinking about it, it occurs to me that F8 might include SLC mode writes to NAND. I don't recall seeing any documentation about precisely what gets counted in F7 and F8, and I shouldn't try to guess whether Crucial counts SLC mode writes, and if so, whether Crucial counts them in F7 or F8.

The slowest speed that you logged while C5=1 was 690 NAND pages per 5 seconds. That's much faster than the slowest C5=1 speed when you sampled once per second. (690/5 >> 3.) I guess this could be explained by assuming it will be rare for deltaF8 (while C5=1) to be tiny for 5 consecutive seconds.
 

solidstatebrain

@Lucretia19
Usually my average is about 15-25 GB written/day; sometimes much more depending on activity which can include virtual machines, handling large media and 3D files, Steam game updates, etc.

The bottommost graph here, made using data from HWInfo logged every second, is representative of the typical background read-write rate when I'm not actively using the PC:

[graph: HWiNFO read/write rates for the SSD, logged every second during typical background activity]


Not many reads occurring; a more or less constant write background activity due to (presumably) mail and rss clients, opened browsers and other OS activity.
 

Lucretia19

@Lucretia19
Usually my average is about 15-25 GB written/day; sometimes much more depending on activity which can include virtual machines, handling large media and 3D files, Steam game updates, etc.

The bottommost graph here, made using data from HWInfo logged every second, is representative of the typical background read-write rate when I'm not actively using the PC:
<-image snipped->
Not many reads occurring; a more or less constant write background activity due to (presumably) mail and rss clients, opened browsers and other OS activity.

I guess you saw my update edits above... good.

I'll assume the moderate level of read+write activity shown in your HWiNFO graph is comparable to the activity that took place during your earlier csv data that showed "slow" F8 write bursts each lasting a minute or more. (I can't tell much from the plots of deltaF7 and deltaF8. Too many dots jumping up and down, and the C5=1 dots, if any exist, are too rare and color-blended for my eyes to see. Perhaps a curve showing the deltaF8 rolling average would more clearly reveal the durations of the write bursts... rolling average is a kind of low-pass filter. Or perhaps your future plots relevant to this mystery, if you produce more, could suppress the points where C5=0, or suppress the points where deltaF8 is small, so they'd show only the points of interest. My other request would be to plot the deltaF8 data explicitly instead of expecting our brains' optical processing centers to subtract deltaF7 from the deltaF7+deltaF8 data, since the deltaF8 data is what matters at the moment.)
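
(To illustrate the kind of presentation I mean, here's a rough pandas/matplotlib sketch. The column names are taken from your csv excerpts, the file path and the 5-sample window are placeholders, and it's not meant to replace your plotting code.)

Code:
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ssd_log.csv", index_col=0, parse_dates=["datetime"])

f8 = df["ftl_program_page_count_delta"]
smooth = f8.rolling(window=5, center=True).mean()   # short rolling average = low-pass filter

plt.plot(df["datetime"], f8, color="0.7", linewidth=0.5, label="deltaF8")
plt.plot(df["datetime"], smooth, color="black", linewidth=1.5, label="deltaF8 (rolling avg)")

bursts = df[df["current_pending_ecc_count"] == 1]   # show only the samples of interest
plt.scatter(bursts["datetime"], bursts["ftl_program_page_count_delta"],
            color="red", zorder=3, label="samples with C5=1")

plt.ylabel("FTL program pages per sample")
plt.legend()
plt.show()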

Your HWiNFO graph appears to show an average of about 0.1 MB/s of host read+write activity. That's smaller than my pc's 0.036 MB/s write rate + 0.146 MB/s read rate (which are HWiNFO numbers too).

(Unfortunately I don't know whether HWiNFO counts the bytes read from the pc's drive cache in system ram as if they're bytes read again from the drive -- I see no setting in HWiNFO to count or not count reads from system cache -- so it's possible that my true ssd read rate is actually much smaller than 0.146 MB/s. Also, I don't know whether reading from the ssd's internal cache affects the ssd's NAND writing; reading from ssd cache and FTL writing to NAND ought to operate in parallel in a well-designed drive... but this is Crucial.)

My hunch is that your 0.1 MB/s pc activity is small enough that it does NOT explain why your F8 write bursts are spread out to a minute or more. Still a mystery.

Regarding why your pc writes more than reads, while mine reads more than writes... I will guess that my pc writes less than yours writes because I redirected many Windows logs and some app data caches to a hard drive. But I have no idea why my pc reads so much more than yours reads. (I guess it's possible we're comparing apples and oranges. The HWiNFO stats I'm reporting are the rows that HWiNFO originally labeled "Read Rate" and "Write Rate" in one of the two sections for the ssd, and I would bet yours are too... but it's good to verify.)
 

solidstatebrain

I'll assume the moderate level of read+write activity shown in your HWiNFO graph is comparable to the activity that took place during your earlier csv data that showed "slow" F8 write bursts each lasting a minute or more.

I think it should be about the same.

(I can't tell much from the plots of deltaF7 and deltaF8. Too many dots jumping up and down, and the C5=1 dots, if any exist, are too rare and color-blended for my eyes to see. Perhaps a curve showing the deltaF8 rolling average would more clearly reveal the durations of the write bursts... rolling average is a kind of low-pass filter. Or perhaps your future plots relevant to this mystery, if you produce more, could suppress the points where C5=0, or suppress the points where deltaF8 is small, so they'd show only the points of interest. My other request would be to plot the deltaF8 data explicitly instead of expecting our brains' optical processing centers to subtract deltaF7 from the deltaF7+deltaF8 data, since the deltaF8 data is what matters at the moment.)

C5 dots are plotted on top of everything else, so if they are not visible, it means that none were recorded. I could also suppress points with C5=0, but at the moment they're the majority, and C5=1 ones stand out clearly from them. It looks like the FTL write bursts are not always associated with them, at least presently.

I can add a deltaF8 rolling average and/or just the F8 points.

Here are 3 graphs with some such modifications. The bursty read activity was due to a 3D game.

[graphs: deltaF7, deltaF8 and the rolling average over the past several hours; the bursty read activity was due to a 3D game]



(Unfortunately I don't know whether HWiNFO counts the bytes read from the pc's drive cache in system ram as if they're bytes read again from the drive -- I see no setting in HWiNFO to count or not count reads from system cache -- so it's possible that my true ssd read rate is actually much smaller than 0.146 MB/s. Also, I don't know whether reading from the ssd's internal cache affects the ssd's NAND writing; reading from ssd cache and FTL writing to NAND ought to operate in parallel in a well-designed drive... but this is Crucial.)

I think they are actual bytes transferred to/from the drive. They should be the same values observed in Windows Task Manager under 'disk' properties, or from related performance counters in "Performance Monitor" under "Logical Disk" or "Physical Disk".

Regarding why your pc writes more than reads, while mine reads more than writes... I will guess that my pc writes less than yours writes because I redirected many Windows logs and some app data caches to a hard drive. But I have no idea why my pc reads so much more than yours reads. (I guess it's possible we're comparing apples and oranges. The HWiNFO stats I'm reporting are the rows that HWiNFO originally labeled "Read Rate" and "Write Rate" in one of the two sections for the ssd, and I would bet yours are too... but it's good to verify.)

I have not redirected any writes elsewhere and I am not making any effort to limit writes to the SSD. However, to avoid affecting the results ("observer effect"), I am logging SMART and activity data onto a different drive.
 

solidstatebrain

@Lucretia19
Here is another "C5" event in the morning while I was not using my PC. This time it was much faster and probably more similar to what you have been reporting. Drive activity was low but not significantly different than usual. As I don't think that my read/write activity is affecting this to any serious extent, I might be removing read/write rate logging with HWInfo.

[graph: FTL program page deltas around the morning C5 event]


Code:
index	datetime	current_pending_ecc_count	ftl_program_page_count_delta
26613	2021-07-30 05:59:29.194078	0	3.0
26614	2021-07-30 05:59:34.227734	0	11.0
26615	2021-07-30 05:59:39.264113	0	9.0
26616	2021-07-30 05:59:44.309867	0	3.0
26617	2021-07-30 05:59:49.352767	0	10.0
26618	2021-07-30 05:59:54.393409	0	9.0
26619	2021-07-30 05:59:59.437704	0	9.0
26620	2021-07-30 06:00:04.483593	0	11.0
26621	2021-07-30 06:00:09.527462	0	11.0
26622	2021-07-30 06:00:14.682700	1	23669.0
26623	2021-07-30 06:00:19.733325	0	13478.0
26624	2021-07-30 06:00:24.767278	0	10.0
26625	2021-07-30 06:00:29.813492	0	3.0
26626	2021-07-30 06:00:34.852073	0	9.0
26627	2021-07-30 06:00:39.891036	0	16.0
26628	2021-07-30 06:00:44.933597	0	9.0
26629	2021-07-30 06:00:49.970292	0	9.0
26630	2021-07-30 06:00:55.011198	0	11.0
26631	2021-07-30 06:01:00.052741	0	3.0
26632	2021-07-30 06:01:05.085171	0	10.0

Overall, I think the characteristics of these spikes will depend on the internal state of the SSD. I think the issue will worsen with time as more "stale" static data exists, although there should be some other trigger like [lack of] power cycling and so on. Right now my data on this SSD is only a few days old at most.

I also suspect that this process and other oddities (like slowly-advancing power-on hours) are initiated as soon as host writes start occurring to some extent, which might explain why the drive seemed to behave properly when I had it connected to the system (using Windows) with no mounted partition. However, at the moment I cannot test whether this is true.
 

Lucretia19

-snip-
C5 dots are plotted on top of everything else, so if they are not visible, it means that none were recorded. I could also suppress points with C5=0, but at the moment they're the majority, and C5=1 ones stand out clearly from them. It looks like the FTL write bursts are not always associated with them, at least presently.

Dots can be too hard to see, depending on how they're plotted and what other data is plotted in the same area. When I try to see C5 in your plot, I see only a solid bar at the height labeled "1" on the vertical axis. I don't know how to interpret that because it suggests C5 is always (or almost always) 1 yet we know C5 is usually 0.

I can add a deltaF8 rolling average and/or just the F8 points.

Here are 3 graphs with some such modifications. The bursty read activity was due to a 3D game.

It looks like the blue deltaF7 ("host writes") data has been mislabeled as deltaF8 ("FTL").

In the future, could you swap colors so that the F8 data is a dark (high contrast) color? F8 is the data of most interest, and the pale orange (low contrast) strains my eyes where the dots are sparse.

I should have been more specific about the rolling average. I had in mind averaging a few samples (maybe 3 to 5 samples) at the high sample rate of one per second. I think that would have smoothed out the high frequency (one second) variations in F8 that your earlier one-sample-per-second plots showed. The longer sample period that you chose, 5 seconds, has an effect similar to a rolling average for the F8 data without having to combine it with a rolling average function.

I think they are actual bytes transferred to/from the drive. They should be the same values observed in Windows Task Manager under 'disk' properties, or from related performance counters in "Performance Monitor" under "Logical Disk" or "Physical Disk".
-snip-

Has it been established that those other measures count only bytes actually transferred to/from the drive, and don't count bytes read from cache in system memory?

I assume none of those measures distinguish "bytes read from the drive's fast internal cache" from "bytes read from the drive's slower mass storage."

Here is another "C5" event in the morning while I was not using my PC. This time it was much faster and probably more similar to what you have been reporting. Drive activity was low but not significantly different than usual. As I don't think that my read/write activity is affecting this to any serious extent, I might be removing read/write rate logging with HWInfo.

[csv data]

Overall, I think the characteristics of these spikes will depend on the internal state of the SSD. I think the issue will worsen with time as more "stale" static data exists, although there should be some other trigger like [lack of] power cycling and so on. Right now my data on this SSD is only a few days old at most.

I also suspect that this process and other oddities (like slowly-advancing power-on hours) are initiated as soon as host writes start occurring to some extent, which might explain why the drive seemed to behave properly when I had it connected to the system (using Windows) with no mounted partition. However, at the moment I cannot test whether this is true.

Your csv data is consistent with my conclusion that C5=1 while and only while a write burst is occurring. The write burst wrote 37000-ish pages and appears to have lasted around 5 seconds. It began at around 6:00:11, and 23000 pages had been written by the time of the next sampling at 6:00:14. It appears the burst ended around 6:00:16. The sampling of C5 shows it was 0 at 6:00:09 before the burst began, was 1 at 6:00:14 during the burst, and was 0 at 6:00:19 after the burst had ended. Do you have any csv data that refutes "C5=1 while and only while a write burst is occurring?"
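
(If it helps, a quick way to scan a whole csv log for candidate counterexamples is sketched below. The 1000-page threshold is arbitrary, and with 5-second sampling a burst's tail can legitimately land on a C5=0 row, so any hits are only rows worth a closer look, not proof of a violation.)

Code:
import pandas as pd

df = pd.read_csv("ssd_log.csv", index_col=0, parse_dates=["datetime"])
c5 = df["current_pending_ecc_count"]
f8 = df["ftl_program_page_count_delta"]

# C5=1 but hardly any FTL writing in this sample or its neighbors
quiet_c5 = df[(c5 == 1) & (f8.rolling(3, center=True).max() < 1000)]

# heavy FTL writing while C5=0 in this sample and its neighbors
silent_burst = df[(f8 > 1000) & (c5.rolling(3, center=True).max() == 0)]

print(quiet_c5)
print(silent_burst)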

I think you're probably right that the bug manifests after host writes. Host writing will eventually cause the ssd's wear-leveling algorithm to trigger. It would also eventually trigger the algorithm that copies from SLC NAND to TLC NAND, if the ssd had been forced to write data in SLC mode. I don't know how much host writing is needed to trigger the algorithms, or how long the delay is before the algorithms start to run.

I don't think there's any unexpected relation between host writing and the slow rate that Power On Hours advances. POH doesn't advance while the ssd is in low power state, and I think that's an "oddity" only in the sense that other drive manufacturers choose to count "time spent in low power state" as time powered on. An oddity, maybe, but not a mystery.
 

solidstatebrain

Dots can be too hard to see, depending on how they're plotted and what other data is plotted in the same area. When I try to see C5 in your plot, I see only a solid bar at the height labeled "1" on the vertical axis. I don't know how to interpret that because it suggests C5 is always (or almost always) 1 yet we know C5 is usually 0.
Now I see the issue. The C5 points are aligned to the secondary (right) Y axis labeled "Pending ECC Count", not the primary one (left).

It looks like the blue deltaF7 ("host writes") data has been mislabeled as deltaF8 ("FTL").

In the future, could you swap colors so that the F8 data is a dark (high contrast) color? F8 is the data of most interest, and the pale orange (low contrast) strains my eyes where the dots are sparse.
There's no mislabeling; the blue line there is indeed the average of the delta of the FTL program page count. Colors can be changed.

I should have been more specific about the rolling average. I had in mind averaging a few samples (maybe 3 to 5 samples) at the high sample rate of one per second. I think that would have smoothed out the high frequency (one second) variations in F8 that your earlier one-sample-per-second plots showed. The longer sample period that you chose, 5 seconds, has an effect similar to a rolling average for the F8 data without having to combine it with a rolling average function.
Then the rolling average can be removed for the 5-second data.

Here is a new graph from the past several hours with some of the above changes.

[graph: deltaF8 and C5 over the past several hours, with the plotting changes discussed above]


Has it been established that those other measures count only bytes actually transferred to/from the drive, and don't count bytes read from cache in system memory?

I assume none of those measures distinguish "bytes read from the drive's fast internal cache" from "bytes read from the drive's slower mass storage."
I didn't see speeds in the order of GB/s, so I think that's to/from the actual drive and not system memory.

Your csv data is consistent with my conclusion that C5=1 while and only while a write burst is occurring. The write burst wrote 37000-ish pages and appears to have lasted around 5 seconds. It began at around 6:00:11, and 23000 pages had been written by the time of the next sampling at 6:00:14. It appears the burst ended around 6:00:16. The sampling of C5 shows it was 0 at 6:00:09 before the burst began, was 1 at 6:00:14 during the burst, and was 0 at 6:00:19 after the burst had ended. Do you have any csv data that refutes "C5=1 while and only while a write burst is occurring?"
So far I've only seen C5=1 associated with the FTL write bursts. When the internal conditions of the SSD (supposedly) worsen, perhaps this will become clearer. I will keep logging at the current 5 seconds/sample rate, which seems a good compromise that doesn't affect the measurements too much.

I think you're probably right that the bug manifests after host writes. Host writing will eventually cause the ssd's wear-leveling algorithm to trigger. It would also eventually trigger the algorithm that copies from SLC NAND to TLC NAND, if the ssd had been forced to write data in SLC mode. I don't know how much host writing is needed to trigger the algorithms, or how long the delay is before the algorithms start to run.
I don't know either. However, Anandtech reported in their Crucial MX500 review that:

[...] As usual for Crucial, the SLC write cache is dynamically sized based on how full the drive is [...]

So, possibly keeping the drive full and/or with untrimmed space would reduce or even remove the effect of SLC cache flushing.

I don't think there's any unexpected relation between host writing and the slow rate that Power On Hours advances. POH doesn't advance while the ssd is in low power state, and I think that's an "oddity" only in the sense that other drive manufacturers choose to count "time spent in low power state" as time powered on. An oddity, maybe, but not a mystery.
I can't figure out why the SSD would go in a low power state when actually used as a system drive (Windows installation, etc), but wouldn't when (mostly) connected to SATA power with zero host writes. That seems a very strange behavior.
 

Lucretia19

Now I see the issue. The C5 points are aligned to the secondary (right) Y axis labeled "Pending ECC Count", not the primary one (left).

Yes, my mistake for not seeing the label on the axis on the right.

There's no mislabeling; the blue line there is indeed the average of the delta of the FTL program page count. Colors can be changed.

My mistake for not realizing the label occupied 2 lines. Its first line duplicated the label above.

Here is a new graph from the past several hours with some of the above changes.
-snip-

As I briefly mentioned earlier, since each burst (on my ssd) appears to last a multiple of about 5 seconds, a sample rate faster than 5 seconds is needed to observe some key details. In particular, you wouldn't detect C5=1 if it changes to 1 a moment after a sampling and changes back to 0 a moment before the next sampling.

This morning I reviewed about 3 months of my Burst Details log. The logger observes C5 once per second during the pauses between selftests, and logs when C5=1. I found that a small fraction of the bursts have C5=1 for only 4 consecutive samples. Two have C5=1 for only 3 consecutive samples. (Most are 5 samples.)

So I believe sampling C5 and F8 every 3 seconds -- for a few days -- should be pretty effective at confirming or refuting "C5=1 while and only while a write burst is occurring."

I don't see a reason why you'd need to continue logging at high speed after a few days of studying your ssd's behavior. Except for my Burst Details logger, which samples every second for about 30 seconds once every 20 minutes, my other loggers sample no more frequently than once every 20 minutes. The only data I still copy to a spreadsheet for analysis is my once-per-day log (plus an occasional row of my once-every-2-hours log, when Average Block Erase Count increments once every few weeks).

It's possible that the rare 3-second bursts aren't bugs. If the selftests weren't running, there might not be enough 3-second bursts to have a big impact on the ssd's lifetime, and they might just be desirable wear-leveling. I have an urge to modify my Burst Details logger so it would also log deltaF8, and would also log the sample immediately before and the sample immediately after the C5=1 samples. This would help me see what's going on during the rare cases where C5=1 for only 3 consecutive samples. I'd be able to see which of the following is happening: either it writes less than 37000 pages, or it writes 37000 pages faster than typical bursts.
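
(Roughly what I have in mind, as a sketch rather than my actual logger: poll smartctl every few seconds, remember the previous sample, and whenever C5 goes to 1, record the sample just before the run, the run itself, and the first sample after it, along with deltaF8. The device path and 3-second interval are placeholders, 197 and 248 are the decimal IDs of attributes C5 and F8, and the output parsing is deliberately simplistic.)

Code:
import re
import subprocess
import time
from datetime import datetime

DEV = "/dev/sda"   # placeholder
INTERVAL = 3       # seconds between samples

def read_attrs():
    out = subprocess.run(["smartctl", "-A", DEV], capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+\S+.*\s(\d+)\s*$", line)
        if m:
            attrs[int(m.group(1))] = int(m.group(2))   # attribute id -> raw value
    return attrs

prev = read_attrs()
last_row = None
run = []
while True:
    time.sleep(INTERVAL)
    cur = read_attrs()
    row = (datetime.now().isoformat(timespec="seconds"), cur[197], cur[248] - prev[248])
    if cur[197] == 1:
        if not run and last_row:
            run.append(last_row)    # the sample just before C5 went to 1
        run.append(row)
    elif run:
        run.append(row)             # the first sample after C5 returned to 0
        for r in run:
            print(*r, sep="\t")
        run = []
    last_row = row
    prev = cur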

I didn't see speeds in the order of GB/s, so I think that's to/from the actual drive and not system memory.

Good observation, assuming you're referring to a log of speeds, and not just occasional glances at a monitoring app.

So far I've only seen C5=1 associated with the FTL write bursts. When the internal conditions of the SSD (supposedly) worsen, perhaps this will become clearer. I will keep logging at the current 5 seconds/sample rate, which seems a good compromise that doesn't affect the measurements too much.

The association you mention is the "only while" half of my "C5=1 while and only while a write burst is occurring" conclusion.

I think the current conditions should suffice to make the relationship clear, given a sample rate that's fast enough.

I don't know either. However, Anandtech reported in their Crucial MX500 review that: "As usual for Crucial, the SLC write cache is dynamically sized based on how full the drive is." So, possibly keeping the drive full and/or with untrimmed space would reduce or even remove the effect of SLC cache flushing.

Keeping the drive full sounds like a way to eliminate ALL write problems, except the obvious one that nothing more can be written.

Why do you think TRIM could be relevant? My understanding is that the TRIM command is used by the OS to tell the drive that some stored data should be considered deleted, so that those areas become available for erasing / wear leveling / etc whenever that's convenient for the ssd (which the ssd can do using a low priority background routine, to improve performance and lifetime). Assuming I'm right that the write bursts are actually reading & writing (copying or moving) data, it suggests the bursts are triggered by something that appears wrong about the cells from which the data is read, not something that appears wrong about the cells to which the data is written. By definition of TRIM, it shouldn't be set for cells from which data is read, because there's no need to copy or move data marked trimmed, so I think TRIM could not explain what appears wrong about the cells from which data is read.

I can't figure out why the SSD would go in a low power state when actually used as a system drive (Windows installation, etc), but wouldn't when (mostly) connected to SATA power with zero host writes. That seems a very strange behavior.

Yes, "POH=realtime when there are zero host writes" (and zero host reads?) is strange. (Slow POH in normal use, on the other hand, has been explained.) When you say the ssd was connected to SATA power, do you also mean it was NOT connected to SATA data? If the data cable wasn't plugged in, perhaps the absence of a signal is a condition that Crucial didn't anticipate or test, so its power management behaves unexpectedly. Or maybe the ssd considers the absence of a data signal a sign of an error condition, and keeps running a selftest in response, which would cause POH=realtime. This could probably be verified, assuming the ssd keeps a log of its most recent selftest that can be inspected after the data cable is plugged in.
 

solidstatebrain

[...] Good observation, assuming you're referring to a log of speeds, and not just occasional glances at a monitoring app.

Log and also in real-time from HWInfo64.

[screenshot: HWiNFO64 showing current and maximum read/write rates for the SSD]


The association you mention is the "only while" half of my "C5=1 while and only while a write burst is occurring" conclusion.

I think the current conditions should suffice to make the relationship clear, given a sample rate that's fast enough.

I cannot rule out that C5=1 will occur in other situations as well, but I've just not observed them so far.


Keeping the drive full sounds like a way to eliminate ALL write problems, except the obvious one that nothing more can be written.

Why do you think TRIM could be relevant? [...]
Because I've observed differences with/without it in 2019, and in the past several hours I've been testing this again. I disabled TRIM with "fsutil behavior set DisableDeleteNotify 1" in a terminal window with administrative privileges, filled all the free space with random dummy data and deleted the data. The filling was made in order to dirty up all free space previously marked as trimmed.
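
(For reference, the filling step can be done with a short script like the sketch below. It's only an illustration of the idea, not the exact method I used; the target directory and sizes are placeholders. It writes 1 GiB files of random data until the volume runs out of space, then deletes them all.)

Code:
import os

TARGET_DIR = r"D:\fill_tmp"      # placeholder: a directory on the SSD being filled
CHUNK = 16 * 1024 * 1024         # 16 MiB of random data per write
FILE_SIZE = 1024 * 1024 * 1024   # 1 GiB per dummy file

os.makedirs(TARGET_DIR, exist_ok=True)
created = []
try:
    i = 0
    while True:
        path = os.path.join(TARGET_DIR, f"dummy_{i:04d}.bin")
        created.append(path)
        with open(path, "wb") as f:
            for _ in range(FILE_SIZE // CHUNK):
                f.write(os.urandom(CHUNK))
        i += 1
except OSError:
    pass   # disk full (or another I/O error): stop filling
finally:
    for path in created:
        if os.path.exists(path):
            os.remove(path)      # delete the dummy data afterwards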

An immediate effect of disabling TRIM commands is that the short-term write amplification drops to very low levels (about 1.005), which makes me think that normal static wear leveling gets disabled.

I now get far fewer of the 37000-page FTL write spikes (not always associated with C5=1, as before), but they still occur occasionally. An interesting side effect is that power-on hours now advance faster, though still not as quickly as real time. The background FTL writes also have a different pattern.

[graph: background FTL writes and power-on hours after disabling TRIM and filling the free space]



Yes, "POH=realtime when there are zero host writes" (and zero host reads?) is strange. (Slow POH in normal use, on the other hand, has been explained.) When you say the ssd was connected to SATA power, do you also mean it was NOT connected to SATA data? If the data cable wasn't plugged in, perhaps the absence of a signal is a condition that Crucial didn't anticipate or test, so its power management behaves unexpectedly. Or maybe the ssd considers the absence of a data signal a sign of an error condition, and keeps running a selftest in response, which would cause POH=realtime. This could probably be verified, assuming the ssd keeps a log of its most recent selftest that can be inspected after the data cable is plugged in.

I did have the SSD connected to both SATA data and SATA power cables, but as the SSD previously contained only Linux partitions that Windows couldn't read nor mount, no writes and no reads could be performed on them (at least by ordinary means).
 

Lucretia19

Log and also in real-time from HWInfo64.
-HWiNFO screenshot deleted-

The HWiNFO screenshot of the max read rate is convincing that the stat doesn't include reads from cache in the pc's system ram. But I can't say the same about reads from a cache internal to the ssd, which would be bottlenecked by the speed of the SATA channel. And I don't believe HWiNFO is capable of distinguishing between "ssd reads that come from the ssd's internal cache" and "ssd reads that come from ssd NAND."

Interesting that your ssd max read rate is so much faster than mine, 560 MB/s versus 164 MB/s. Your max write rate is much faster too: 449 MB/s versus 36 MB/s. Your max speeds are similar to the speeds I observed when I ran CrystalDiskMark (while a selftest was running) over a year ago. I wonder whether my slower max speeds now are significant, or just a result of other pc settings and load.

-snip-
Because I've observed differences with/without it in 2019, and in the past several hours I've been testing this again. I disabled TRIM with "fsutil behavior set DisableDeleteNotify 1" in a terminal window with administrative privileges, filled all the free space with random dummy data and deleted the data. The filling was made in order to dirty up all free space previously marked as trimmed.

An immediate effect of disabling TRIM commands is that the short-term write amplification drops to very low levels (about 1.005), which makes me think that normal static wear leveling gets disabled.

My interpretation is a little different. Writing to all the free space, and then not marking it trimmed when it's deleted, causes the ssd to believe the entire drive is filled with valid data. The ssd can now write only when the pc commands it to overwrite. The wear-leveling algorithm has no free space where it may work, so deltaF8 should be expected to be near-zero. But I wouldn't call that "disabling" of wear-leveling, and I see no reason to believe the condition that triggers the bug is eliminated.

I now get far fewer of the 37000-page FTL write spikes (not always associated with C5=1, as before), but they still occur occasionally. An interesting side effect is that power-on hours now advance faster, though still not as quickly as real time. The background FTL writes also have a different pattern.

Are you sure the write bursts are still 37000 pages? I have trouble trying to measure them by eyeballing the graph. <Insert endorsement of csv here.>

With a 5-seconds sampling interval, I don't think any valid conclusions about C5=1 can be reached, for the reason I explained in my previous post. Also, I would trust raw csv data, but am reluctant to trust an image produced by graphical software, particularly without knowing how its algorithm handles data points that are much skinnier than its x-axis unit... perhaps it would omit or distort some of them.

Your POH appears to now be advancing about 2 hours for each 6 realtime hours. That 1:3 ratio is a little faster than my ssd prior to the selftests regime: it was about 1:4 during its first 5.5 months of service.

Based on the temperature spike at approximately 16:00pm on 7/30, I presume that's when you filled the free space.

I would tentatively assume the new F8 pattern is a temporary effect. The large amount of fast F7 writing to fill the free space might require a lot of SLC-to-TLC conversion, in addition to triggering some wear-leveling. How much free space was there before the filling?

Differential WAF is always meaningless on short time scales, due to the large delays between host writes and the eventual FTL writes that the host writes trigger much later.

I did have the SSD connected to both SATA data and SATA power cables, but as the SSD previously contained only Linux partitions that Windows couldn't read nor mount, no writes and no reads could be performed on them (at least by ordinary means).

Well, learning that the SATA data cable was connected shot down a fine theory.

We can't be certain that Windows mostly ignores an ssd that has no Windows partitions mounted, since the hardware doesn't prevent Windows from reading (or writing) the drive's partition table whenever Windows wants to. But I wouldn't expect Windows to keep re-reading the partition table at a rate that would prevent the ssd from spending most of its time in low power mode. Were there any third party drive monitoring apps running, or drivers, that might foolishly keep polling an unmounted drive?

Although it's odd behavior, I don't think it's important to understand it, since I can't think of a reason why anyone would need or want to keep an unmounted drive attached and powered for a long time.
 

solidstatebrain

The HWiNFO screenshot of the max read rate is convincing that the stat doesn't include reads from cache in the pc's system ram. But I can't say the same about reads from a cache internal to the ssd, which would be bottlenecked by the speed of the SATA channel. And I don't believe HWiNFO is capable of distinguishing between "ssd reads that come from the ssd's internal cache" and "ssd reads that come from ssd NAND."

Interesting that your ssd max read rate is so much faster than mine, 560 MB/s versus 164 MB/s. Your max write rate is much faster too: 449 MB/s versus 36 MB/s. Your max speeds are similar to the speeds I observed when I ran CrystalDiskMark (while a selftest was running) over a year ago. I wonder whether my slower max speeds now are significant, or just a result of other pc settings and load.

I don't think any software can measure the speed from the SSD's internal cache to the NAND. The SSD in general should be seen as a black box by the operating system.

HWInfo is showing max speeds similar to CrystalDiskMark because I was indeed using CrystalDiskMark. The values are close to specifications, although I don't remember ever seeing write speeds close to 500 MB/s. https://www.crucial.com/content/dam.../crucial-mx500-ssd-productflyer-letter-en.pdf

When using benchmarking applications that can be set not to bypass the system/OS cache, speeds on the order of a few GB/s can easily be observed. I think HWInfo gets its values from internal Windows performance counters, though, which measure speeds directly to/from the hardware.

My interpretation is a little different. Writing to all the free space, and then not marking it trimmed when it's deleted, causes the ssd to believe the entire drive is filled with valid data. The ssd can now write only when the pc commands it to overwrite. The wear-leveling algorithm has no free space where it may work, so deltaF8 should be expected to be near-zero. But I wouldn't call that "disabling" of wear-leveling, and I see no reason to believe the condition that triggers the bug is eliminated.

I was implying that by doing what I did (although merely stopping the TRIM commands immediately triggers a change), the SSD's wear leveling algorithms likely changed to a much less aggressive behavior where shuffling data around is only done when strictly necessary, not that wear leveling is entirely disabled.

The excessive wear observed in this thread could possibly be mitigated if this was also associated with overly aggressive algorithms originally intended by the manufacturer to keep the SSD always fast and responsive. In the past I observed a reduction of that behavior by disabling TRIM, but it was still not eliminated.

Are you sure the write bursts are still 37000 pages? I have trouble trying to measure them by eyeballing the graph. <Insert endorsement of csv here.>

They are still about 37000 pages. If you prefer csv data:

Code:
index	datetime	current_pending_ecc_count	ftl_program_page_count_delta
46464	2021-07-31 09:55:33.659625	0	3.0
46465	2021-07-31 09:55:38.699179	0	3.0
46466	2021-07-31 09:55:43.748569	0	3.0
46467	2021-07-31 09:55:48.791961	0	3.0
46468	2021-07-31 09:55:53.832507	0	3.0
46469	2021-07-31 09:55:58.867746	0	3.0
46470	2021-07-31 09:56:03.913877	0	3.0
46471	2021-07-31 09:56:08.959375	0	3.0
46472	2021-07-31 09:56:14.002185	0	3.0
46473	2021-07-31 09:56:19.044614	1	10491.0
46474	2021-07-31 09:56:24.085816	0	26600.0
46475	2021-07-31 09:56:29.177205	0	11.0
46476	2021-07-31 09:56:34.223919	0	3.0
46477	2021-07-31 09:56:39.251216	0	3.0
46478	2021-07-31 09:56:44.287192	0	3.0
46479	2021-07-31 09:56:49.325756	0	3.0
46480	2021-07-31 09:56:54.353025	0	10.0
46481	2021-07-31 09:56:59.390885	0	3.0
46482	2021-07-31 09:57:04.428862	0	3.0
46483	2021-07-31 09:57:09.465638	0	3.0
48164	2021-07-31 12:25:15.676940	0	3.0
48165	2021-07-31 12:25:20.715241	0	3.0
48166	2021-07-31 12:25:25.754034	0	3.0
48167	2021-07-31 12:25:30.795506	0	3.0
48168	2021-07-31 12:25:35.845605	0	3.0
48169	2021-07-31 12:25:40.892361	0	3.0
48170	2021-07-31 12:25:45.935677	0	7.0
48171	2021-07-31 12:25:50.978291	0	3.0
48172	2021-07-31 12:25:56.017182	0	3.0
48173	2021-07-31 12:26:01.073939	1	8739.0
48174	2021-07-31 12:26:06.113414	0	28457.0
48175	2021-07-31 12:26:11.153230	0	3.0
48176	2021-07-31 12:26:16.192037	0	3.0
48177	2021-07-31 12:26:21.227730	0	3.0
48178	2021-07-31 12:26:26.254204	0	3.0
48179	2021-07-31 12:26:31.290857	0	3.0
48180	2021-07-31 12:26:36.341709	0	3.0
48181	2021-07-31 12:26:41.365228	0	3.0
48182	2021-07-31 12:26:46.388324	0	3.0
48183	2021-07-31 12:26:51.423303	0	3.0
49371	2021-07-31 14:06:42.803107	0	3.0
49372	2021-07-31 14:06:47.839001	0	3.0
49373	2021-07-31 14:06:52.882228	0	3.0
49374	2021-07-31 14:06:57.931943	0	3.0
49375	2021-07-31 14:07:02.976690	0	3.0
49376	2021-07-31 14:07:08.015250	0	3.0
49377	2021-07-31 14:07:13.057501	0	7.0
49378	2021-07-31 14:07:18.099481	0	3.0
49379	2021-07-31 14:07:23.140963	0	3.0
49380	2021-07-31 14:07:28.200603	1	17352.0
49381	2021-07-31 14:07:33.247455	0	19952.0
49382	2021-07-31 14:07:38.285425	0	3.0
49383	2021-07-31 14:07:43.329233	0	3.0
49384	2021-07-31 14:07:48.371501	0	3.0
49385	2021-07-31 14:07:53.415585	0	3.0
49386	2021-07-31 14:07:58.461824	0	3.0
49387	2021-07-31 14:08:03.506877	0	3.0
49388	2021-07-31 14:08:08.547535	0	3.0
49389	2021-07-31 14:08:13.584581	0	3.0
49390	2021-07-31 14:08:18.624162	0	3.0
54064	2021-07-31 20:41:18.724144	0	3.0
54065	2021-07-31 20:41:23.760043	0	3.0
54066	2021-07-31 20:41:28.808343	0	3.0
54067	2021-07-31 20:41:33.852681	0	3.0
54068	2021-07-31 20:41:38.896697	0	3.0
54069	2021-07-31 20:41:43.942949	0	3.0
54070	2021-07-31 20:41:48.988826	0	3.0
54071	2021-07-31 20:41:54.036532	0	101.0
54072	2021-07-31 20:41:59.082011	0	3.0
54073	2021-07-31 20:42:04.318441	1	14887.0
54074	2021-07-31 20:42:09.355599	0	22212.0
54075	2021-07-31 20:42:14.399566	0	3.0
54076	2021-07-31 20:42:19.441975	0	3.0
54077	2021-07-31 20:42:24.490862	0	3.0
54078	2021-07-31 20:42:29.529205	0	3.0
54079	2021-07-31 20:42:34.570573	0	3.0
54080	2021-07-31 20:42:39.614961	0	3.0
54081	2021-07-31 20:42:44.649093	0	3.0
54082	2021-07-31 20:42:49.686644	0	3.0
54083	2021-07-31 20:42:54.731378	0	3.0
54163	2021-07-31 20:49:38.159437	0	3.0
54164	2021-07-31 20:49:43.197533	0	3.0
54165	2021-07-31 20:49:48.246682	0	3.0
54166	2021-07-31 20:49:53.292404	0	3.0
54167	2021-07-31 20:49:58.328956	0	3.0
54168	2021-07-31 20:50:03.379277	0	3.0
54169	2021-07-31 20:50:08.430528	0	3.0
54170	2021-07-31 20:50:13.477263	0	3.0
54171	2021-07-31 20:50:18.513714	0	3.0
54172	2021-07-31 20:50:23.577234	1	27310.0
54173	2021-07-31 20:50:28.625374	0	10039.0
54174	2021-07-31 20:50:33.666448	0	3.0
54175	2021-07-31 20:50:38.703454	0	3.0
54176	2021-07-31 20:50:43.744886	0	3.0
54177	2021-07-31 20:50:48.795874	0	3.0
54178	2021-07-31 20:50:53.835146	0	3.0
54179	2021-07-31 20:50:58.874545	0	3.0
54180	2021-07-31 20:51:03.925490	0	7.0
54181	2021-07-31 20:51:08.968132	0	3.0
54182	2021-07-31 20:51:14.003991	0	3.0
54229	2021-07-31 20:55:11.082082	0	3.0
54230	2021-07-31 20:55:16.120688	0	54.0
54231	2021-07-31 20:55:21.168795	0	101.0
54232	2021-07-31 20:55:26.212868	0	3.0
54233	2021-07-31 20:55:31.254342	0	3.0
54234	2021-07-31 20:55:36.301053	0	3.0
54235	2021-07-31 20:55:41.345556	0	3.0
54236	2021-07-31 20:55:46.381884	0	3.0
54237	2021-07-31 20:55:51.421372	0	3.0
54238	2021-07-31 20:55:56.463297	1	7051.0
54239	2021-07-31 20:56:01.509665	0	30145.0
54240	2021-07-31 20:56:06.556246	0	3.0
54241	2021-07-31 20:56:11.597015	0	3.0
54242	2021-07-31 20:56:16.645200	0	3.0
54243	2021-07-31 20:56:21.683593	0	3.0
54244	2021-07-31 20:56:26.717693	0	7.0
54245	2021-07-31 20:56:31.758470	0	3.0
54246	2021-07-31 20:56:36.803571	0	3.0
54247	2021-07-31 20:56:41.854796	0	3.0
54248	2021-07-31 20:56:46.900429	0	3.0
54805	2021-07-31 21:43:36.108037	0	3.0
54806	2021-07-31 21:43:41.144378	0	3.0
54807	2021-07-31 21:43:46.191140	0	3.0
54808	2021-07-31 21:43:51.233143	0	3.0
54809	2021-07-31 21:43:56.280837	0	3.0
54810	2021-07-31 21:44:01.328191	0	3.0
54811	2021-07-31 21:44:06.364126	0	3.0
54812	2021-07-31 21:44:11.411790	0	3.0
54813	2021-07-31 21:44:16.453672	0	7.0
54814	2021-07-31 21:44:21.517933	1	4778.0
54815	2021-07-31 21:44:26.562516	0	32300.0
54816	2021-07-31 21:44:31.607145	0	3.0
54817	2021-07-31 21:44:36.653283	0	3.0
54818	2021-07-31 21:44:41.698590	0	3.0
54819	2021-07-31 21:44:46.737863	0	3.0
54820	2021-07-31 21:44:51.776606	0	3.0
54821	2021-07-31 21:44:56.815330	0	3.0
54822	2021-07-31 21:45:01.855800	0	3.0
54823	2021-07-31 21:45:06.900930	0	3.0
54824	2021-07-31 21:45:11.941525	0	3.0
55493	2021-07-31 22:41:25.846076	0	7.0
55494	2021-07-31 22:41:30.879824	0	3.0
55495	2021-07-31 22:41:35.922495	0	3.0
55496	2021-07-31 22:41:40.963051	0	3.0
55497	2021-07-31 22:41:46.005861	0	3.0
55498	2021-07-31 22:41:51.044725	0	3.0
55499	2021-07-31 22:41:56.093734	0	3.0
55500	2021-07-31 22:42:01.140088	0	3.0
55501	2021-07-31 22:42:06.184680	0	3.0
55502	2021-07-31 22:42:11.388801	1	16139.0
55503	2021-07-31 22:42:16.423016	0	20999.0
55504	2021-07-31 22:42:21.461962	0	3.0
55505	2021-07-31 22:42:26.504479	0	3.0
55506	2021-07-31 22:42:31.540985	0	3.0
55507	2021-07-31 22:42:36.585460	0	3.0
55508	2021-07-31 22:42:41.623962	0	3.0
55509	2021-07-31 22:42:46.664808	0	3.0
55510	2021-07-31 22:42:51.707006	0	3.0
55511	2021-07-31 22:42:56.749631	0	3.0
55512	2021-07-31 22:43:01.799842	0	3.0
55527	2021-07-31 22:44:17.431452	0	3.0
55528	2021-07-31 22:44:22.477773	0	3.0
55529	2021-07-31 22:44:27.515042	0	3.0
55530	2021-07-31 22:44:32.555409	0	92.0
55531	2021-07-31 22:44:37.595447	0	3.0
55532	2021-07-31 22:44:42.630896	0	3.0
55533	2021-07-31 22:44:47.671727	0	3.0
55534	2021-07-31 22:44:52.705442	0	3.0
55535	2021-07-31 22:44:57.746825	0	3.0
55536	2021-07-31 22:45:02.837932	1	24899.0
55537	2021-07-31 22:45:07.883842	0	12281.0
55538	2021-07-31 22:45:12.932422	0	3.0
55539	2021-07-31 22:45:17.972353	0	3.0
55540	2021-07-31 22:45:23.017463	0	3.0
55541	2021-07-31 22:45:28.055985	0	3.0
55542	2021-07-31 22:45:33.092223	0	3.0
55543	2021-07-31 22:45:38.133869	0	3.0
55544	2021-07-31 22:45:43.174664	0	3.0
55545	2021-07-31 22:45:48.219382	0	3.0
55546	2021-07-31 22:45:53.259908	0	3.0
55762	2021-07-31 23:04:02.593045	0	3.0
55763	2021-07-31 23:04:07.645118	0	31.0
55764	2021-07-31 23:04:12.681523	0	3.0
55765	2021-07-31 23:04:17.732697	0	7.0
55766	2021-07-31 23:04:22.775204	0	3.0
55767	2021-07-31 23:04:27.826120	0	3.0
55768	2021-07-31 23:04:32.866376	0	3.0
55769	2021-07-31 23:04:37.907918	0	3.0
55770	2021-07-31 23:04:42.946307	0	3.0
55771	2021-07-31 23:04:47.989096	1	6715.0
55772	2021-07-31 23:04:53.030472	0	30557.0
55773	2021-07-31 23:04:58.078943	0	3.0
55774	2021-07-31 23:05:03.116994	0	3.0
55775	2021-07-31 23:05:08.153560	0	3.0
55776	2021-07-31 23:05:13.199355	0	3.0
55777	2021-07-31 23:05:18.235006	0	3.0
55778	2021-07-31 23:05:23.283992	0	3.0
55779	2021-07-31 23:05:28.318225	0	3.0
55780	2021-07-31 23:05:33.355363	0	3.0
55781	2021-07-31 23:05:38.404439	0	3.0
55796	2021-07-31 23:06:54.063642	0	3.0
55797	2021-07-31 23:06:59.107126	0	3.0
55798	2021-07-31 23:07:04.150371	0	3.0
55799	2021-07-31 23:07:09.187833	0	3.0
55800	2021-07-31 23:07:14.232987	0	3.0
55801	2021-07-31 23:07:19.278012	0	3.0
55802	2021-07-31 23:07:24.314279	0	3.0
55803	2021-07-31 23:07:29.360625	0	3.0
55804	2021-07-31 23:07:34.402832	0	3.0
55805	2021-07-31 23:07:39.445704	1	23315.0
55806	2021-07-31 23:07:44.493962	0	13790.0
55807	2021-07-31 23:07:49.527448	0	3.0
55808	2021-07-31 23:07:54.574021	0	3.0
55809	2021-07-31 23:07:59.616592	0	3.0
55810	2021-07-31 23:08:04.664614	0	7.0
55811	2021-07-31 23:08:09.711407	0	3.0
55812	2021-07-31 23:08:14.760718	0	3.0
55813	2021-07-31 23:08:19.802708	0	3.0
55814	2021-07-31 23:08:24.842855	0	3.0
55815	2021-07-31 23:08:29.878147	0	3.0
56930	2021-08-01 00:42:12.552768	0	3.0
56931	2021-08-01 00:42:17.592857	0	3.0
56932	2021-08-01 00:42:22.641272	0	3.0
56933	2021-08-01 00:42:27.685310	0	3.0
56934	2021-08-01 00:42:32.722482	0	3.0
56935	2021-08-01 00:42:37.766597	0	3.0
56936	2021-08-01 00:42:42.815659	0	3.0
56937	2021-08-01 00:42:47.859255	0	3.0
56938	2021-08-01 00:42:52.905672	0	3.0
56939	2021-08-01 00:42:57.966511	1	29364.0
56940	2021-08-01 00:43:03.015176	0	7744.0
56941	2021-08-01 00:43:08.065409	0	3.0
56942	2021-08-01 00:43:13.105372	0	3.0
56943	2021-08-01 00:43:18.163878	0	3.0
56944	2021-08-01 00:43:23.214387	0	3.0
56945	2021-08-01 00:43:28.260664	0	3.0
56946	2021-08-01 00:43:33.308118	0	3.0
56947	2021-08-01 00:43:38.348360	0	3.0
56948	2021-08-01 00:43:43.395688	0	3.0
56949	2021-08-01 00:43:48.445725	0	7.0
57856	2021-08-01 02:00:02.187246	0	3.0
57857	2021-08-01 02:00:07.222643	0	3.0
57858	2021-08-01 02:00:12.270008	0	3.0
57859	2021-08-01 02:00:17.313371	0	3.0
57860	2021-08-01 02:00:22.347654	0	3.0
57861	2021-08-01 02:00:27.387005	0	3.0
57862	2021-08-01 02:00:32.436976	0	3.0
57863	2021-08-01 02:00:37.477495	0	3.0
57864	2021-08-01 02:00:42.524639	0	3.0
57865	2021-08-01 02:00:47.574503	1	24038.0
57866	2021-08-01 02:00:52.613833	0	13331.0
57867	2021-08-01 02:00:57.663166	0	3.0
57868	2021-08-01 02:01:02.713499	0	3.0
57869	2021-08-01 02:01:07.761851	0	3.0
57870	2021-08-01 02:01:12.808233	0	3.0
57871	2021-08-01 02:01:17.846159	0	32.0
57872	2021-08-01 02:01:22.896133	0	3.0
57873	2021-08-01 02:01:27.945104	0	7.0
57874	2021-08-01 02:01:32.982871	0	3.0
57875	2021-08-01 02:01:38.031133	0	3.0
59124	2021-08-01 03:46:36.648883	0	3.0
59125	2021-08-01 03:46:41.687220	0	3.0
59126	2021-08-01 03:46:46.725321	0	3.0
59127	2021-08-01 03:46:51.760440	0	3.0
59128	2021-08-01 03:46:56.805514	0	3.0
59129	2021-08-01 03:47:01.843455	0	3.0
59130	2021-08-01 03:47:06.891701	0	3.0
59131	2021-08-01 03:47:11.940860	0	3.0
59132	2021-08-01 03:47:16.982683	0	3.0
59133	2021-08-01 03:47:22.027226	1	23351.0
59134	2021-08-01 03:47:27.062377	0	13881.0
59135	2021-08-01 03:47:32.102810	0	3.0
59136	2021-08-01 03:47:37.142782	0	3.0
59137	2021-08-01 03:47:42.185726	0	3.0
59138	2021-08-01 03:47:47.233275	0	3.0
59139	2021-08-01 03:47:52.269241	0	3.0
59140	2021-08-01 03:47:57.317056	0	3.0
59141	2021-08-01 03:48:02.367300	0	3.0
59142	2021-08-01 03:48:07.404062	0	3.0
59143	2021-08-01 03:48:12.448436	0	3.0
61155	2021-08-01 06:37:18.596586	0	3.0
61156	2021-08-01 06:37:23.633768	0	3.0
61157	2021-08-01 06:37:28.674879	0	3.0
61158	2021-08-01 06:37:33.717516	0	3.0
61159	2021-08-01 06:37:38.754070	0	3.0
61160	2021-08-01 06:37:43.798196	0	3.0
61161	2021-08-01 06:37:48.843256	0	3.0
61162	2021-08-01 06:37:53.886789	0	3.0
61163	2021-08-01 06:37:58.927984	0	3.0
61164	2021-08-01 06:38:03.989786	1	18094.0
61165	2021-08-01 06:38:09.026568	0	19287.0
61166	2021-08-01 06:38:14.067805	0	3.0
61167	2021-08-01 06:38:19.104398	0	3.0
61168	2021-08-01 06:38:24.153140	0	3.0
61169	2021-08-01 06:38:29.186093	0	3.0
61170	2021-08-01 06:38:34.233853	0	3.0
61171	2021-08-01 06:38:39.274004	0	3.0
61172	2021-08-01 06:38:44.316561	0	3.0
61173	2021-08-01 06:38:49.355131	0	3.0
61174	2021-08-01 06:38:54.399621	0	3.0

[graph attachment]


Your POH appears to now be advancing about 2 hours for each 6 realtime hours. That 1:3 ratio is a little faster than my ssd prior to the selftests regime: it was about 1:4 during its first 5.5 months of service.

Here is actual data.

Code:
index	datetime	power_on_hours_count
1846	2021-07-28 18:51:56.676728	11279
3937	2021-07-28 21:47:46.006299	11280
9222	2021-07-29 05:11:54.526332	11281
15950	2021-07-29 14:37:26.621586	11282
19205	2021-07-29 19:37:00.372970	11283
23404	2021-07-30 01:29:54.641829	11284
28449	2021-07-30 08:33:45.949258	11285
32496	2021-07-30 14:13:47.230012	11286
35113	2021-07-30 17:55:33.783826	11287
37475	2021-07-30 21:14:09.098337	11288
39856	2021-07-31 00:34:18.243345	11289
42389	2021-07-31 04:07:12.861572	11290
44415	2021-07-31 06:57:34.134723	11291
46231	2021-07-31 09:30:13.814657	11292
47382	2021-07-31 11:19:31.685747	11293
49651	2021-07-31 14:30:14.861469	11294
52537	2021-07-31 18:32:58.399887	11295
54668	2021-07-31 21:32:05.291792	11296
56620	2021-08-01 00:16:09.301351	11297
58419	2021-08-01 02:47:21.513517	11298
60828	2021-08-01 06:09:49.571591	11299

Based on the temperature spike at approximately 16:00 on 7/30, I presume that's when you filled the free space.
Intense read/write activity is indeed associated with a temperature increase.

I would tentatively assume the new F8 pattern is a temporary effect. The large amount of fast F7 writing to fill the free space might require a lot of SLC-to-TLC conversion, in addition to triggering some wear-leveling. How much free space was there before the filling?

Differential WAF is always meaningless on short time scales, due to the large delays between host writes and the eventual FTL writes that the host writes trigger much later.
Before the filling there were about 120 GB free. I filled the free space until 0 bytes were left. I find it unlikely that there is still SLC-to-TLC flushing to be performed.

The average WAF since I deleted the dummy random data after disabling TRIM is currently 1.13, and it has increased solely due to the FTL write spikes that are still occurring to some extent. However, for now this is still well within normal ranges, and if this rate were maintained over the long term (or even rose to values on the order of 1.5, which would still be quite normal), the wear rate would no longer be a concern.

However, due to the lack of TRIM (which I don't know how to selectively disable on Windows), sustained write performance may suffer, and NAND wear may become uneven and lead to possible issues over the long term.

Well, learning that the SATA data cable was connected shot down a fine theory.

We can't be certain that Windows mostly ignores an ssd that has no Windows partitions mounted, since the hardware doesn't prevent Windows from reading (or writing) the drive's partition table whenever Windows wants to. But I wouldn't expect Windows to keep re-reading the partition table at a rate that would prevent the ssd from spending most of its time in low power mode. Were there any third party drive monitoring apps running, or drivers, that might foolishly keep polling an unmounted drive?

Linux partitions were seen as "raw" by the OS. Windows generally doesn't try to touch those partitions, as far as I am aware.

I didn't have any program that was actively and frequently polling the drives.

Although it's odd behavior, I don't think it's important to understand it, since I can't think of a reason why anyone would need or want to keep an unmounted drive attached and powered for a long time.

Having a completely different operating system ready for dual booting can be one reason. The power consumption penalty would be minimal anyway.
 

Lucretia19

Reputable
Feb 5, 2020
192
14
5,245
I don't think any software can measure the speed from the SSD's internal cache to the NAND. The SSD in general should be seen as a black box by the operating system.

HWInfo is showing max speeds similar to CrystalDiskMark because I was indeed using CrystalDiskMark. The values are close to specifications, although I don't remember ever seeing write speeds close to 500 MB/s. https://www.crucial.com/content/dam.../crucial-mx500-ssd-productflyer-letter-en.pdf

When using benchmarking applications that don't bypass the system/OS cache, speeds on the order of a few GB/s can easily be observed. I think HWInfo gets its values from internal Windows performance counters, though, which measure speeds directly to/from the hardware.

Yes, "black box" is the point I was trying to make: we can't tell which reads from ssd would be expected to postpone or suppress F8 writing. Some of the reads might not involve reading from ssd NAND because the data is in ssd cache, so the ssd might be able to service reads from ssd cache simultaneously with F8 writing to NAND. Since the transfer rate is bottlenecked by the SATA interface, the transfer rate isn't a clue that would allow reads from ssd cache to be distinguished from reads from ssd NAND... the black box is truly black.

Here's my CrystalDiskMark result with selftest running, on 4/18/2020. The sequential read & write speeds (with Q=8), 564 MB/s & 520 MB/s, slightly exceeded Crucial's specs, 560 MB/s & 510 MB/s:
Code:
CrystalDiskMark 7.0.0 x64 (C) 2007-2019 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

[Read]
Sequential 1MiB (Q= 8, T= 1):   564.091 MB/s [   538.0 IOPS] < 14842.31 us>
Sequential 1MiB (Q= 1, T= 1):   541.069 MB/s [   516.0 IOPS] <  1936.56 us>
Random 4KiB   (Q= 32, T=16):    406.167 MB/s [ 99161.9 IOPS] <  5137.78 us>
Random 4KiB   (Q= 1, T= 1):      35.740 MB/s [  8725.6 IOPS] <   114.20 us>

[Write]
Sequential 1MiB (Q= 8, T= 1):   520.675 MB/s [   496.6 IOPS] < 16026.44 us>
Sequential 1MiB (Q= 1, T= 1):   499.781 MB/s [   476.6 IOPS] <  2096.14 us>
Random 4KiB   (Q= 32, T=16):    380.489 MB/s [ 92892.8 IOPS] <  5489.37 us>
Random 4KiB   (Q= 1, T= 1):      90.045 MB/s [ 21983.6 IOPS] <   45.20 us>

Profile: Default
Test: 256 MiB (x3) [Interval: 5 sec] <DefaultAffinity=DISABLED>
Date: 2020/04/18 9:39:21
OS: Windows 10 [10.0 Build 18362] (x64)

I was implying that by doing what I did (although merely stopping the TRIM commands immediately triggers a change), the SSD's wear leveling algorithms likely changed to a much less aggressive behavior where shuffling data around is only done when strictly necessary, not that wear leveling is entirely disabled.

The excessive wear observed in this thread could possibly be mitigated if this was also associated with overly aggressive algorithms originally intended by the manufacturer to keep the SSD always fast and responsive. In the past I observed a reduction of that behavior by disabling TRIM, but it was still not eliminated.

I meant that I wouldn't say the bug has been disabled, only frustrated by the lack of free NAND.

I see now that clarification is needed, because I took you literally when you wrote that you filled all free space... I didn't think about the portion of ssd capacity reserved for "overprovisioning." Did you fill it too, by setting overprovisioning to zero before filling the ssd? How much of your ssd is currently reserved for overprovisioning?

If there's still a lot of NAND reserved for overprovisioning, then my interpretation that "filling and deleting untrimmed" left negligible free space available for F8 writing was based on my misunderstanding of what you did.

They are still about 37000 pages. If you prefer csv data:
Code:
index    datetime    current_pending_ecc_count    ftl_program_page_count_delta
-snip-
46472    2021-07-31 09:56:14.002185    0    3.0
46473    2021-07-31 09:56:19.044614    1    10491.0
46474    2021-07-31 09:56:24.085816    0    26600.0
46475    2021-07-31 09:56:29.177205    0    11.0
-snip-
48172    2021-07-31 12:25:56.017182    0    3.0
48173    2021-07-31 12:26:01.073939    1    8739.0
48174    2021-07-31 12:26:06.113414    0    28457.0
48175    2021-07-31 12:26:11.153230    0    3.0
-snip-
49379    2021-07-31 14:07:23.140963    0    3.0
49380    2021-07-31 14:07:28.200603    1    17352.0
49381    2021-07-31 14:07:33.247455    0    19952.0
49382    2021-07-31 14:07:38.285425    0    3.0
-snip-
54072    2021-07-31 20:41:59.082011    0    3.0
54073    2021-07-31 20:42:04.318441    1    14887.0
54074    2021-07-31 20:42:09.355599    0    22212.0
54075    2021-07-31 20:42:14.399566    0    3.0
-snip-
54171    2021-07-31 20:50:18.513714    0    3.0
54172    2021-07-31 20:50:23.577234    1    27310.0
54173    2021-07-31 20:50:28.625374    0    10039.0
54174    2021-07-31 20:50:33.666448    0    3.0
-snip-
54237    2021-07-31 20:55:51.421372    0    3.0
54238    2021-07-31 20:55:56.463297    1    7051.0
54239    2021-07-31 20:56:01.509665    0    30145.0
54240    2021-07-31 20:56:06.556246    0    3.0
-snip-
54813    2021-07-31 21:44:16.453672    0    7.0
54814    2021-07-31 21:44:21.517933    1    4778.0
54815    2021-07-31 21:44:26.562516    0    32300.0
54816    2021-07-31 21:44:31.607145    0    3.0
-snip-
55501    2021-07-31 22:42:06.184680    0    3.0
55502    2021-07-31 22:42:11.388801    1    16139.0
55503    2021-07-31 22:42:16.423016    0    20999.0
55504    2021-07-31 22:42:21.461962    0    3.0
-snip-
55535    2021-07-31 22:44:57.746825    0    3.0
55536    2021-07-31 22:45:02.837932    1    24899.0
55537    2021-07-31 22:45:07.883842    0    12281.0
55538    2021-07-31 22:45:12.932422    0    3.0
-snip-
55770    2021-07-31 23:04:42.946307    0    3.0
55771    2021-07-31 23:04:47.989096    1    6715.0
55772    2021-07-31 23:04:53.030472    0    30557.0
55773    2021-07-31 23:04:58.078943    0    3.0
-snip-
55804    2021-07-31 23:07:34.402832    0    3.0
55805    2021-07-31 23:07:39.445704    1    23315.0
55806    2021-07-31 23:07:44.493962    0    13790.0
55807    2021-07-31 23:07:49.527448    0    3.0
-snip-
56938    2021-08-01 00:42:52.905672    0    3.0
56939    2021-08-01 00:42:57.966511    1    29364.0
56940    2021-08-01 00:43:03.015176    0    7744.0
56941    2021-08-01 00:43:08.065409    0    3.0
-snip-
57864    2021-08-01 02:00:42.524639    0    3.0
57865    2021-08-01 02:00:47.574503    1    24038.0
57866    2021-08-01 02:00:52.613833    0    13331.0
57867    2021-08-01 02:00:57.663166    0    3.0
-snip-
59132    2021-08-01 03:47:16.982683    0    3.0
59133    2021-08-01 03:47:22.027226    1    23351.0
59134    2021-08-01 03:47:27.062377    0    13881.0
59135    2021-08-01 03:47:32.102810    0    3.0
-snip-
61163    2021-08-01 06:37:58.927984    0    3.0
61164    2021-08-01 06:38:03.989786    1    18094.0
61165    2021-08-01 06:38:09.026568    0    19287.0
61166    2021-08-01 06:38:14.067805    0    3.0

(In your csv above, I deleted the rows where C5=0 and deltaF8 is tiny, except for the single rows immediately before and immediately after deltaF8 bursts.)

Yes, 37000 pages per F8 burst. In that csv data, the sampling split each burst among two consecutive 5-second samples, which explains why C5=1 at the points of splitting and only at those points... those are the points when a burst was ongoing at the moment of sampling. The samples that include the trailing portion of each burst have C5=0 because the burst had ended before the sampling.

Unlike the csv, the graph appears to include a few bursts where all 37000 pages were written during a single sample interval (not split among two consecutive samples like in the csv). Because those bursts started after a sampling and ended before the next sampling, C5 was 0 at the moment of each sampling. In other words, the 5-second sampling failed to capture the corresponding C5=1 that I assume happened between samplings. This is an example where the 5-second sampling rate is too slow and can be misleading, and forces us to make an assumption about what happened between samplings.
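
If anyone wants to check this against the raw log rather than by eye, here's a sketch (assuming the same tab-separated layout as your csv above, saved as ftl_deltas.csv) that groups consecutive high-delta samples into one burst and sums them, so that bursts split across two samples are totaled correctly:

Code:
import pandas as pd

df = pd.read_csv("ftl_deltas.csv", sep="\t")

THRESHOLD = 1000                       # anything above this counts as part of a burst
is_burst = df["ftl_program_page_count_delta"] > THRESHOLD
burst_id = (is_burst & ~is_burst.shift(fill_value=False)).cumsum()   # new id at each burst start

bursts = (df[is_burst]
          .groupby(burst_id[is_burst])
          .agg(start=("datetime", "first"),
               pages=("ftl_program_page_count_delta", "sum")))
print(bursts)   # e.g. 10491 + 26600 = 37091 pages for the 09:56 burst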

Here is actual data.
Code:
index    datetime    power_on_hours_count
-snip-
52537    2021-07-31 18:32:58.399887    11295
54668    2021-07-31 21:32:05.291792    11296
56620    2021-08-01 00:16:09.301351    11297
58419    2021-08-01 02:47:21.513517    11298
60828    2021-08-01 06:09:49.571591    11299

I infer that the posted data is the subset of samples where POH incremented. The rate of POH increments is very irregular, presumably caused by variations in pc load, and maybe also by variations in F8 writing caused by variations in F7 writing. And also maybe by a side-effect of the filling. The most recent 12 POH increments -- after the filling on 7/30 -- took about 36 hours, a ratio of about 1:3. The 7 increments before the filling took about 44 hours, a ratio slightly less than 1:6.

To try to understand the acceleration of POH after the filling, I think it would help to examine the values of F7 & F8 at several points. In particular, their values at the beginning and ending of the POH samples above, and their values immediately before and after the filling. If F7 and/or F8 accelerated after the filling, that would reduce the time spent in low power mode, and thus accelerate POH.

Do you also sample Sectors Read By Host? It's not in the basic SMART data; it's in the extended section of the "smartctl -x" output. Reading too reduces the time spent in low power mode, so it might be relevant to understanding POH.
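
Here's a sketch of the comparison I have in mind, assuming a single log with columns datetime, power_on_hours_count, host_program_page_count (F7) and ftl_program_page_count (F8); the filename and column names are assumptions, not your actual log format:

Code:
import pandas as pd

log = pd.read_csv("smart_log.csv", parse_dates=["datetime"])

ticks = log[log["power_on_hours_count"].diff() > 0].copy()   # samples where POH ticked up

ticks["realtime_hours"] = ticks["datetime"].diff().dt.total_seconds() / 3600
ticks["poh_to_realtime"] = 1 / ticks["realtime_hours"]       # ~0.33 means a 1:3 ratio
ticks["d_f7"] = ticks["host_program_page_count"].diff()      # host writes between ticks
ticks["d_f8"] = ticks["ftl_program_page_count"].diff()       # FTL writes between ticks

print(ticks[["datetime", "realtime_hours", "poh_to_realtime", "d_f7", "d_f8"]])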

Intense read/write activity is indeed associated with a temperature increase.

Before the filling there were about 120 GB free. I filled the free space until 0 bytes were left. I find it unlikely that there is still SLC-to-TLC flushing to be performed.

Why unlikely? What is the rate of SLC-to-TLC flushing, and how much of the 120 GB do you think was written in SLC mode? I would expect most of the 120 GB to have been written in SLC mode, which would lead to a LOT of flushing.

Thinking about SLC-to-TLC flushing... I assume it needs at least one free block available in order to get started, and overprovisioning would provide some free blocks. Then, after getting started, every 3 SLC blocks that are flushed free up 3 blocks but consume 1 new (TLC) block... a net freeing of 2 blocks. The ssd surely knows those 2 blocks are free. (They wouldn't need a TRIM command by the OS to mark them.) So, the more SLC blocks flushed, the more space is freed.

Did you check the free space again some time after the filling -- after you think the flushing would have completed -- to see whether the ssd was still filled? How much of the ssd became unfilled due to flushing? (Available blocks -- in addition to the overprovision blocks-- for the F8 bug to burn up.)

The average WAF since I deleted the dummy random data after disabling TRIM is currently 1.13, and it has increased solely due to the FTL write spikes that are still occurring to some extent. However, for now this is still well within normal ranges, and if this rate were maintained over the long term (or even rose to values on the order of 1.5, which would still be quite normal), the wear rate would no longer be a concern.

However, due to the lack of TRIM (which I don't know how to selectively disable on Windows), sustained write performance may suffer, and NAND wear may become uneven and lead to possible issues over the long term.

Yes, leaving TRIM disabled doesn't seem like a good way to mitigate the bug.

Do you think there's something wrong with the selftests solution? My assumption is that the 30-second pauses between selftests provide sufficient time for the ssd's low priority routines to keep the ssd healthy and performant. If those routines were being starved of runtime by the selftests, I would expect high deltaF8 during many more of the pauses. (Most of the pauses have tiny deltaF8.)

The unexplained correlation of increased F8 writing when the ssd hasn't been power-cycled for many days is the only observation that gives me doubt about the safety of the selftests regime. When you observed that correlation, was it only while running the background reading load that you use to tame the WAF bug?

Linux partitions were seen as "raw" by the OS. Windows generally doesn't try to touch those partitions, as far as I am aware. I didn't have any program that was actively and frequently polling the drives.

As I wrote earlier, a drive-monitoring app would have to have been written by fools if it polls an unmounted drive at a much higher rate than a mounted drive. Which makes me wonder whether you were running Crucial's Storage Executive while POH was realtime-ish on the unmounted ssd. I don't have a reason to suspect Storage Executive, other than guilt by association. (Lack of respect for Crucial's software developers.)

Having a completely different operating system ready for dual booting can be one reason. The power consumption penalty would be minimal anyway.

Okay, dual booting is a reason. I'm not familiar with the advantages of dual booting. I played with a virtual Linux machine running under Windows Vista on my previous, much slower pc, instead of dual booting. My current pc is probably fast enough for Windows and a virtual Linux to run well simultaneously. (Eventually, after I become adept at Linux, I might want to reverse that hierarchy: Linux as host OS, plus a virtual Windows machine to run any apps that need genuine Windows.)

Can't Linux and Windows share a data partition, if formatted using a format understood by both OSes? For efficiency, if I wanted to dual boot I think that's how I would use storage drives, so that Linux apps and Windows apps could easily share data. I store very little app data in my Windows system partition, and I would want to put a Linux system in a small partition separate from app data. I'd probably place both the Windows system partition and the Linux system partition on the same ssd, and the secondary ssd would have a single large partition that serves both OSes. Unless I learn this scheme is impossible or has a serious disadvantage.

An experiment to try: Use smartctl for a few days to collect data from an unmounted ssd -- deltaSectorsRead, deltaF7, deltaF8, C5, deltaPOH, etc -- to test the assumption that Windows and its apps don't read or write an unmounted ssd.
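
A rough sketch of that experiment (assuming smartmontools is installed and the unmounted drive is /dev/sdb; adjust as needed). It just appends a timestamped "smartctl -x" dump every few minutes, and the deltas can be extracted afterwards:

Code:
import datetime
import subprocess
import time

DEVICE = "/dev/sdb"     # assumption: the unmounted ssd
INTERVAL = 300          # seconds between samples

while True:
    out = subprocess.run(["smartctl", "-x", DEVICE],
                         capture_output=True, text=True).stdout
    with open("unmounted_ssd_log.txt", "a") as f:
        f.write(f"===== {datetime.datetime.now().isoformat()} =====\n{out}\n")
    time.sleep(INTERVAL)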

Another experiment: Repartition the Linux ssd so it will also contain a tiny partition mounted by Windows (with no apps writing to it; just a file system with no files), to see whether that affects its WAF and POH. If it causes the POH rate to slow, indicating reduced power consumption, that small wasted partition might be a reasonable tradeoff.
 

solidstatebrain

Distinguished
Oct 26, 2012
26
0
18,540
I see now that clarification is needed, because I took you literally when you wrote that you filled all free space... I didn't think about the portion of ssd capacity reserved for "overprovisioning." Did you fill it too, by setting overprovisioning to zero before filling the ssd? How much of your ssd is currently reserved for overprovisioning?

If there's still a lot of NAND reserved for overprovisioning, then my interpretation that "filling and deleting untrimmed" left negligible free space available for F8 writing was based on my misunderstanding of what you did.

I filled all the user-available space on the SSD except the ~900 megabytes in the EFI+Recovery partitions.

No additional overprovisioning has been set besides what the SSD inherently has. Assuming 512 GiB of physical NAND and 476.83 GiB of user space (500 GB), that's about 35.16 GiB of overprovisioning space that the SSD has at complete disposal for internal purposes including wear leveling and so on.

Yes, 37000 pages per F8 burst. In that csv data, the sampling split each burst among two consecutive 5-second samples, which explains why C5=1 at the points of splitting and only at those points... those are the points when a burst was ongoing at the moment of sampling. The samples that include the trailing portion of each burst have C5=0 because the burst had ended before the sampling.

Unlike the csv, the graph appears to include a few bursts where all 37000 pages were written during a single sample interval (not split among two consecutive samples like in the csv). Because those bursts started after a sampling and ended before the next sampling, C5 was 0 at the moment of each sampling. In other words, the 5-second sampling failed to capture the corresponding C5=1 that I assume happened between samplings. This is an example where the 5-second sampling rate is too slow and can be misleading, and forces us to make an assumption about what happened between samplings.

It could be that a faster sampling rate helps, but when I tried it at 1 sample/s I did not always see the ~37000-page events associated with C5=1.

I infer that the posted data is the subset of samples where POH incremented. The rate of POH increments is very irregular, presumably caused by variations in pc load, and maybe also by variations in F8 writing caused by variations in F7 writing. And also maybe by a side-effect of the filling. The most recent 12 POH increments -- after the filling on 7/30 -- took about 36 hours, a ratio of about 1:3. The 7 increments before the filling took about 44 hours, a ratio slightly less than 1:6.

Yes, this is a subset of data where POH incremented. I now have more interesting data:

Code:
index	datetime	power_on_hours_count
58419	2021-08-01 02:47:21.513517	11298
60828	2021-08-01 06:09:49.571591	11299
63353	2021-08-01 09:42:03.980420	11300
66881	2021-08-01 14:38:52.917048	11301
67578	2021-08-01 15:38:51.191942	11302
68292	2021-08-01 16:38:51.392299	11303
69006	2021-08-01 17:38:51.061430	11304
69720	2021-08-01 18:38:51.348980	11305

[graph attachment]


The counter is now increasing once every hour. This isn't due to drive activity. I disabled APM (Advanced Power Management) for SATA devices from the Windows Registry. Apparently Windows forces it enabled on the system drive. This might solve one mystery.

Some sources:

xhttps://dries.metrico.be/2019/09/11/control-your-hdds-aam-apm-through-registry/
xhttps://msfn.org/board/topic/140404-control-your-hdds-aamapm-through-registry/
xhttps://www.techpowerup.com/forums...gressive-apm-on-windows-10-build-1809.254023/


EDIT: after more testing it appears it was a different thing I set up which made the POH counter work as intended. See next post.

Do you also sample Sectors Read By Host? It's not in the basic SMART data; it's in the extended section of the "smartctl -x" output. Reading too reduces the time spent in low power mode, so it might be relevant to understanding POH.
No, I am not sampling other data in addition to the basic SMART attributes.

Why unlikely? What is the rate of SLC-to-TLC flushing, and how much of the 120 GB do you think was written in SLC mode? I would expect most of the 120 GB to have been written in SLC mode, which would lead to a LOT of flushing.

Thinking about SLC-to-TLC flushing... I assume it needs at least one free block available in order to get started, and overprovisioning would provide some free blocks. Then, after getting started, every 3 SLC blocks that are flushed free up 3 blocks but consume 1 new (TLC) block... a net freeing of 2 blocks. The ssd surely knows those 2 blocks are free. (They wouldn't need a TRIM command by the OS to mark them.) So, the more SLC blocks flushed, the more space is freed.

Did you check the free space again some time after the filling -- after you think the flushing would have completed -- to see whether the ssd was still filled? How much of the ssd became unfilled due to flushing? (Available blocks -- in addition to the overprovision blocks-- for the F8 bug to burn up.)
If every GB in SLC requires 3GB of native TLC NAND, then the maximum possible amount of SLC data that can be flushed when the drive is completely filled is what can fit in the overprovisioning space, i.e. 35.16 gigabytes (let's say 36) divided by 3 = 12 GB. This is assuming that such entire space can be dedicated to this caching algorithm, which is likely not the case since it will be needed for other purposes too.

More than 12 of the 37000-page FTL events have occurred since I filled the drive. By now any SLC-cached data accumulated during that process will have been flushed.

Yes, leaving TRIM disabled doesn't seem like a good way to mitigate the bug.

Do you think there's something wrong with the selftests solution? My assumption is that the 30-second pauses between selftests provide sufficient time for the ssd's low priority routines to keep the ssd healthy and performant. If those routines were being starved of runtime by the selftests, I would expect high deltaF8 during many more of the pauses. (Most of the pauses have tiny deltaF8.)

Disabling TRIM also just slightly mitigates it. I think my drive is progressively getting worse as observed in the past, with more FTL write events occurring in larger bursts.

As for the self tests, possibly they might increase the response time of the SSD, but I don't think they should be harmful on their own—unless such FTL write spikes are in reality a necessary internal measure for long-term data reliability. No way to know for sure, though.

The self-test (or other user-initiated tasks) could be pausing some sort of internal background analysis which continuously checks whether the data stored on the SSD is accumulating too many errors, triggering a FTL write event if something is out of place. This background analysis would possibly run at low speed for power saving reasons, and therefore allowing 30 seconds between the self-tests might not be enough for the SSD to determine whether something leading to the FTL write bursts needs to be done. This is just a hypothesis, of course.

The unexplained correlation of increased F8 writing when the ssd hasn't been power-cycled for many days is the only observation that gives me doubt about the safety of the selftests regime. When you observed that correlation, was it only while running the background reading load that you use to tame the WAF bug?
I observed it on my PC where I didn't run continuous background reading tasks. I noticed it because I often had uptimes of weeks and the difference before/after power cycling was visible.

As I wrote earlier, a drive-monitoring app would have to have been written by fools if it polls an unmounted drive at a much higher rate than a mounted drive. Which makes me wonder whether you were running Crucial's Storage Executive while POH was realtime-ish on the unmounted ssd. I don't have a reason to suspect Storage Executive, other than guilt by association. (Lack of respect for Crucial's software developers.)
No, I only use smartmontools.

Okay, dual booting is a reason. I'm not familiar with the advantages of dual booting. I played with a virtual Linux machine running under Windows Vista on my previous, much slower pc, instead of dual booting. My current pc is probably fast enough for Windows and a virtual Linux to run well simultaneously. (Eventually, after I become adept at Linux, I might want to reverse that hierarchy: Linux as host OS, plus a virtual Windows machine to run any apps that need genuine Windows.)

Can't Linux and Windows share a data partition, if formatted using a format understood by both OSes? For efficiency, if I wanted to dual boot I think that's how I would use storage drives, so that Linux apps and Windows apps could easily share data. I store very little app data in my Windows system partition, and I would want to put a Linux system in a small partition separate from app data. I'd probably place both the Windows system partition and the Linux system partition on the same ssd, and the secondary ssd would have a single large partition that serves both OSes. Unless I learn this scheme is impossible or has a serious disadvantage.

Dual booting with the operating systems on different drives is much simpler and more reliable, with no risk of either OS somehow preventing the other from booting. This is the main advantage.

Linux can read Windows NTFS partitions, but Windows cannot natively read Linux partitions (Ext4, Btrfs, etc) although third-party drivers exist for this.

An experiment to try: Use smartctl for a few days to collect data from an unmounted ssd -- deltaSectorsRead, deltaF7, deltaF8, C5, deltaPOH, etc -- to test the assumption that Windows and its apps don't read or write an unmounted ssd.
It would be a world-shaking event if Windows wrote on unmounted partitions.

Another experiment: Repartition the Linux ssd so it will also contain a tiny partition mounted by Windows (with no apps writing to it; just a file system with no files), to see whether that affects its WAF and POH. If it causes the POH rate to slow, indicating reduced power consumption, that small wasted partition might be a reasonable tradeoff.
Only the MX500 was affected by the slow POH issue, and currently I'm using it for Windows.

The POH problem might have been solved above. It seems unrelated with the WAF problem, although perhaps more time will be needed to make sure.
 
Last edited:

solidstatebrain

Distinguished
Oct 26, 2012
26
0
18,540
The counter is now increasing once every hour. This isn't due to drive activity. I disabled APM (Advanced Power Management) for SATA devices from the Windows Registry. Apparently Windows forces it enabled on the system drive. This might solve one mystery.

For the record, I also set this on my Crucial MX500, but it shouldn't be maintained across reboots. EDIT: after some more testing it looks like this is what makes the power-on hours counter advance regularly.

Code:
PS C:\windows\system32> smartctl -s standby,250 /dev/sdc
smartctl 7.2 2020-07-11 r5076 [x86_64-w64-mingw32-w10-b19043] (CircleCI)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF ENABLE/DISABLE COMMANDS SECTION ===
Standby timer set to 250 (05:00:00, a vendor-specific minimum applies)

APM appears as disabled with the following command, but the standby time isn't listed here:

Code:
PS C:\windows\system32> smartctl -g all /dev/sdc
smartctl 7.2 2020-07-11 r5076 [x86_64-w64-mingw32-w10-b19043] (CircleCI)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

AAM feature is:   Unavailable
APM feature is:   Disabled
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, frozen [SEC2]
 
Last edited:

Lucretia19

Reputable
Feb 5, 2020
192
14
5,245
I filled all the user-available space on the SSD except the ~900 megabytes in the EFI+Recovery partitions.

No additional overprovisioning has been set besides what the SSD inherently has. Assuming 512 GiB of physical NAND and 476.83 GiB of user space (500 GB), that's about 35.16 GiB of overprovisioning space that the SSD has at complete disposal for internal purposes including wear leveling and so on.

I don't know what you mean by "inherently." I believe the MX500 has some overprovisioning by default -- I don't clearly recall the amount but I think it was about 5% to 10% of the ssd's total capacity -- and the user may increase it or decrease it (even to zero) using Storage Executive. I'll assume you mean your ssd still has the default amount of overprovisioning.

I don't believe the difference between the amount of storage cells and the amount of space available for file storage is entirely due to unused cells. Some of it is occupied and not available for wear-leveling. For example, some is occupied by OS overhead. I saw a debate about this at: https://forums.anandtech.com/thread...-anything-useful-for-typical-ssd-use.2563704/

While googling in the hope of finding the default MX500 overprovisioning, I read about another (low priority) process that could be the buggy one: garbage collection. GC looks for blocks that contain a mix of trimmed and untrimmed pages, and copies their untrimmed pages to blocks that have no trimmed pages, so that the source blocks then consist entirely of trimmed pages. And GC erases source blocks that are entirely trimmed, so the pages of the block will be available for writing later. (Assuming I understand GC properly.)

I don't know how many of the pages of a block need to have been marked trimmed before the Garbage Collector will decide to move their untrimmed pages. Perhaps Crucial is very aggressive about it, and will move all the untrimmed pages from a mixed block even if only one (or a few) of the block's pages is(are) marked trimmed. Perhaps the write bursts are movements of untrimmed pages from mixed blocks. Perhaps this theory could be investigated by setting up a situation where there will be a lot of mixed blocks -- by writing a lot of small files to the ssd, and then deleting a random selection of the files with TRIM enabled -- and logging before-and-after SMART data for a few days. By repeating this experiment while varying the fraction of files randomly deleted, it might reveal how aggressive the Garbage Collector is.
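
Here's a rough sketch of the setup step of that experiment (the directory, file count, file size and delete fraction are arbitrary assumptions):

Code:
import os
import random

TARGET_DIR = r"E:\gc_test"      # assumption: a directory on the ssd under test
FILE_COUNT = 50_000
FILE_SIZE = 16 * 1024           # 16 KiB files, roughly page-sized
DELETE_FRACTION = 0.3           # vary this across runs

os.makedirs(TARGET_DIR, exist_ok=True)
for i in range(FILE_COUNT):
    with open(os.path.join(TARGET_DIR, f"f{i:06d}.bin"), "wb") as f:
        f.write(os.urandom(FILE_SIZE))

victims = random.sample(range(FILE_COUNT), int(FILE_COUNT * DELETE_FRACTION))
for i in victims:
    os.remove(os.path.join(TARGET_DIR, f"f{i:06d}.bin"))
# ...then log SMART F7/F8 for a few days, and repeat with a different DELETE_FRACTION.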

Not that it matters, but the Garbage Collector and the Wear-Leveler might be combined into a single routine. I can imagine how that might be more efficient. For example, the GC could prefer moving untrimmed pages to the available block that has the smallest erase count.

It could be that a faster sampling rate helps, but when I tried it at 1 sample/s I did not always see the ~37000-page events associated with C5=1.

If memory serves, you haven't posted any one-sample-per-second csv that shows a 37000-page event not associated with C5=1.

Yes, this is a subset of data where POH incremented. I now have more interesting data:
Code:
index    datetime    power_on_hours_count
58419    2021-08-01 02:47:21.513517    11298
60828    2021-08-01 06:09:49.571591    11299
63353    2021-08-01 09:42:03.980420    11300
66881    2021-08-01 14:38:52.917048    11301
67578    2021-08-01 15:38:51.191942    11302
68292    2021-08-01 16:38:51.392299    11303
69006    2021-08-01 17:38:51.061430    11304
69720    2021-08-01 18:38:51.348980    11305
The counter is now increasing once every hour. This isn't due to drive activity. I disabled APM (Advanced Power Management) for SATA devices from the Windows Registry. Apparently Windows forces it enabled on the system drive. This might solve one mystery.

Some sources:

By the "mystery" do you mean the fact that POH was 1:3 instead of 1:1? I'm confused about why 1:3 should be considered a mystery, given that low power mode is desirable (all else being equal) and time spent in low power mode isn't counted by POH. What have you gained by disabling APM?

No, I am not sampling other data in addition to the basic SMART attributes.

If every GB in SLC requires 3GB of native TLC NAND, then the maximum possible amount of SLC data that can be flushed when the drive is completely filled is what can fit in the overprovisioning space, i.e. 35.16 gigabytes (let's say 36) divided by 3 = 12 GB. This is assuming that such entire space can be dedicated to this caching algorithm, which is likely not the case since it will be needed for other purposes too.

More than 12 of the 37000-page FTL events have occurred since I filled the drive. By now any SLC-cached data accumulated during that process will have been flushed.

I think you have it backward. Each 1 GB of SLC NAND does NOT require 3 GB of TLC NAND. Each 3 GB of SLC NAND can be flushed to 1 GB of TLC NAND, because TLC mode stores bits more densely than SLC mode does, and this frees up 2 GB of NAND. Freeing NAND is the point of flushing. For each 3 SLC blocks flushed, the 2 freed blocks become available for 6 more SLC blocks to be flushed, which means ALL blocks written in SLC mode can be flushed. There's reason to believe most of the 120 GB you wrote to fill the ssd was written in SLC mode, because it was written at high speed. All of that 120 GB could eventually be flushed... not just 12 GB of it. The question is, how long does it take to flush 120 GB? Is there a reason to believe it finished, and doesn't still have hours or days of flushing remaining?
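
To make that arithmetic concrete, here's a toy model (illustrative numbers only, not real MX500 geometry) showing that a small pool of free blocks is enough to eventually fold every SLC-written block back to TLC:

Code:
slc_blocks = 120     # blocks currently holding data written in SLC mode (~the 120 GB)
free_blocks = 12     # free blocks available at the start (e.g. from overprovisioning)
flushed = 0

while slc_blocks >= 3 and free_blocks >= 1:
    slc_blocks -= 3      # three SLC blocks are folded...
    free_blocks += 2     # ...into one previously-free TLC block, freeing a net 2 blocks
    flushed += 3

print(f"flushed {flushed} SLC blocks; {slc_blocks} left; {free_blocks} blocks now free")
# -> flushed 120 SLC blocks; 0 left; 92 blocks now free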

Disabling TRIM also just slightly mitigates it. I think my drive is progressively getting worse as observed in the past, with more FTL write events occurring in larger bursts.

My advice is to leave TRIM and power management enabled (except during short term experiments), and use selftests to tame the amplification bug. The only downside appears to be the extra watt of power consumption. (The extra watt might actually benefit the ssd in another way, by maintaining a reasonably constant temperature; switching to/from low power mode causes temperature fluctuations.)

If you're concerned that 30 seconds of pause time between selftests might not allow enough runtime to necessary low priority routines, you could use a pause time longer than 30 seconds and observe whether that tames the bug sufficiently for your purposes. (I'm tempted to increase the pause time, because if the ssd continues at the same rate as the last 12 months, its Remaining Life won't reach 0% for 75 years... more than I expect to need. But I remain curious about whether the current selftest duty cycle is safe over the long term, and perhaps my findings will benefit others.)
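
For anyone who wants to experiment with a longer pause, here's a rough sketch of a loop in that spirit (not the exact task I run; assumes smartmontools is installed and the MX500 is /dev/sdc): start an extended selftest, let it run for a while, abort it, pause, repeat.

Code:
import subprocess
import time

DEVICE = "/dev/sdc"       # assumption: the MX500
RUN_SECONDS = 20 * 60     # let each extended selftest run this long
PAUSE_SECONDS = 30        # idle gap for the drive's background routines; raise to experiment

while True:
    subprocess.run(["smartctl", "-t", "long", DEVICE])   # start an extended selftest
    time.sleep(RUN_SECONDS)
    subprocess.run(["smartctl", "-X", DEVICE])           # abort it
    time.sleep(PAUSE_SECONDS)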

As for the self tests, possibly they might increase the response time of the SSD, but I don't think they should be harmful on their own—unless such FTL write spikes are in reality a necessary internal measure for long-term data reliability. No way to know for sure, though.

The self-test (or other user-initiated tasks) could be pausing some sort of internal background analysis which continuously checks whether the data stored on the SSD is accumulating too many errors, triggering a FTL write event if something is out of place. This background analysis would possibly run at low speed for power saving reasons, and therefore allowing 30 seconds between the self-tests might not be enough for the SSD to determine whether something leading to the FTL write bursts needs to be done. This is just a hypothesis, of course.

CrystalDiskMark indicated no performance loss while selftests are running, and in principle I wouldn't expect a performance loss on an ssd because ordinary read and write requests are surely higher priority than the selftest routine and can very quickly interrupt a selftest. (A hard drive, on the other hand, might lose some responsiveness due to extra movement of the read/write heads.)

Thinking about the hypothesis you mentioned... Let's assume the selftests are starving the low priority background analysis... not allowing it enough runtime to keep the ssd healthy, so the ssd accumulates a backlog of unperformed tasks. I think there are three cases to consider:
Case 1: The background analysis process merely pauses during a selftest, and can resume without starting over. In this case, I think my log data would show a lot more write bursts during the pauses between selftests, as the ssd tries to work whenever possible on the unperformed tasks.
Case 2: The analysis must start over each time it's interrupted, but it usually takes less than 30 seconds to identify a block's worth of pages that should be moved to a new block. As in case 1, I think my log data would show a lot more write bursts during the pauses between selftests.
Case 3: The analysis must start over each time it's interrupted, and usually needs more than 30 seconds to identify a block's worth of pages that should be moved to a new block. This is the only case where observations aren't inconsistent with the assumption. So the question is, how plausible is this case? Is there any reason you can think of why the analysis would have been designed to start over and couldn't just resume from the point it was interrupted? That seems like a poor design, if it usually needs more than 30 seconds to find a block to move, because it could often be interrupted by pc reads & writes. And if it typically takes more than 30 seconds, wouldn't it have been designed to maintain an index to speed up its search for pages to move?

You wrote that the background analysis might have been designed to run at slow speed to reduce power consumption, but I don't see why that would necessarily reduce power consumption. It might, but it also might backfire by increasing power consumption. The energy consumed by a calculation is "power x execution time." If T is the factor by which execution time is increased, and f(T) is the factor by which power is reduced, then the energy consumed goes as T/f(T), and it's not obvious to me that there must be at least one possible T>1 for which that ratio is less than 1. For example, if running at one-quarter speed (T=4) only halves the power draw (f(T)=2), the analysis consumes twice the energy overall. Also relevant is the ratio of power during normal power mode to power during low power mode... the designers would expect the longer execution time to postpone the ssd's descent into low power mode, and the correct equation for total power consumption should take that into account too (but I'm not going to).

I observed it on my PC where I didn't run continuous background reading tasks. I noticed it because I often had uptimes of weeks and the difference before/after power cycling was visible.

Good, since that suggests it's unlikely that selftests are to blame for the increase of write bursts when the pc has been running for days.

By the way, sleeping the pc for a moment is another way to power cycle the ssd. There's no need to shut down the pc to combat the increasing write bursts. (On the other hand, Windows or apps might benefit from an occasional restart, since Windows isn't as robust as Linux.)

No, I only use smartmontools.

Dual booting with the operating systems on different drives is much simpler and more reliable, with no risk of either OS somehow preventing the other from booting. This is the main advantage.

Linux can read Windows NTFS partitions, but Windows cannot natively read Linux partitions (Ext4, Btrfs, etc) although third-party drivers exist for this.

CrystalDiskInfo is a drive-monitoring app that can set APM individually for each drive that supports APM. It has several options:
01h: Minimum power consumption with Standby
02h-7Fh: Intermediate power management levels with Standby
80h: Minimum power consumption without Standby
81h-FDh: Intermediate power management levels without Standby
FEh: Maximum performance
If I understand those options correctly, I think I'd choose 7F for the ssd, which allows high speed transfers and also allows the ssd to conserve power when idle. Alternatively, FE, if that's better at mitigating the F8 write bursts bug.
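
As a side note, smartctl can also set the APM level directly (same values as the list above, e.g. 7Fh = 127, FEh = 254), which might be handier than CrystalDiskInfo's auto-adaption for a quick experiment; the setting may not persist across power cycles, so it would need to be reapplied at boot. A minimal sketch, assuming the drive is /dev/sdc:

Code:
import subprocess

DEVICE = "/dev/sdc"   # assumption: the MX500

subprocess.run(["smartctl", "-s", "apm,127", DEVICE])   # 7Fh: intermediate level with Standby
subprocess.run(["smartctl", "-g", "apm", DEVICE])       # read back the current APM level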

For the record, I've been running CrystalDiskInfo for the entire nearly two years that this pc and ssd have been in service, without thinking about whether I should experiment with its APM settings. It's set to FE by default, but I never enabled "Auto AAM/APM Adaption" in its Advanced Feature menu, so I presume CrystalDiskInfo never applied the FE setting each time Windows restarted. I assume Windows' default APM setting is 7F, because POH ran slow before I began the selftests regime.

Today I enabled Auto Adaption in CrystalDiskInfo, and I guess I'll eventually learn whether FE affects the F8 bug. (But I won't be able to determine whether it also affects POH unless I halt the selftests regime, because POH advances realtime-ish with the selftests regime.)

Dual booting sounds "simpler" in the sense that it's easier to set up than a virtual machine. But once a virtual machine is set up, having the two OSes run simultaneously can simplify one's labor... no need to shut one OS down in order to run the other.

It would be a world-shaking event if Windows wrote on unmounted partitions.

Only the MX500 was affected by the slow POH issue, and currently I'm using it for Windows.

The POH problem might have been solved above. It seems unrelated with the WAF problem, although perhaps more time will be needed to make sure.

By "world shaking" I assume you mean someone would surely have noticed it before, if Windows writes to an unmounted drive.

If Windows reads an unmounted drive, or if the unmounted ssd is written to by a buggy FTL controller, that would be much less world-shaking. Also, if everyone presumes someone else would have noticed if Windows writes to an unmounted drive, maybe no one has bothered to actually test it. I imagine the number of users who keep a usually-unmounted drive in their computer is relatively small, and I assume most of them have been too busy to test for that behavior, or even think about testing it.

Using smartctl to log an unmounted MX500 for a few days -- host reads, host writes, FTL writes, C5 -- seems like a way to try to gain insight into the cause of the unmounted MX500's rapid POH.
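To sketch what that logging could look like (the device path and polling interval are placeholders; on the MX500, the hex attribute IDs F6, F7, F8 and C5 correspond to decimal IDs 246, 247, 248 and 197 in smartctl's output):

```python
# Rough sketch: periodically append selected SMART attributes of the (unmounted)
# MX500 to a CSV file, using smartctl from smartmontools.
# /dev/sdX and the 10-minute interval are placeholders.
import csv, os, subprocess, time
from datetime import datetime

DEVICE = "/dev/sdX"
LOGFILE = "mx500_smart_log.csv"
INTERVAL_SECONDS = 600
ATTRIBUTES = {246: "F6_total_host_sector_writes",
              247: "F7_host_program_pages",
              248: "F8_FTL_program_pages",
              197: "C5_pending_sectors"}

def read_attributes():
    """Return the raw values of the attributes of interest from 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         check=True, capture_output=True, text=True).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) in ATTRIBUTES:
            values[ATTRIBUTES[int(fields[0])]] = fields[-1]   # RAW_VALUE column
    return values

new_file = not os.path.exists(LOGFILE)
with open(LOGFILE, "a", newline="") as f:
    writer = csv.writer(f)
    if new_file:
        writer.writerow(["timestamp"] + list(ATTRIBUTES.values()))
    while True:
        row = read_attributes()
        writer.writerow([datetime.now().isoformat()] +
                        [row.get(name, "") for name in ATTRIBUTES.values()])
        f.flush()
        time.sleep(INTERVAL_SECONDS)
```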

I think you're saying slow POH wasn't an "issue" when the ssd was unmounted, since that's when POH was NOT slow.

Is it important to you that POH advance in real time? I don't see why slow POH should be considered a problem. All else being equal, I'd prefer the ssd spend its idle time in low power mode, and I see no reason to care that POH doesn't count that time. In fact, it could be useful that POH distinguishes between normal-power time and low-power time, since it provides a way to measure power efficiency.

If you want to test whether your APM solution that speeds up POH comes at the cost of higher power consumption, I think you could test it indirectly by logging the ssd temperature for a few hours with and without the solution and comparing the average temperatures. Selftests, for example, raise the ssd's average temperature by about 5C, and I would assume your background reading task did that too. Selftests and background reading have a purpose -- mitigating the F8 write bursts bug -- but what's the value of using APM to speed up POH if it doesn't mitigate the bug that matters?
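And if that logger also records the temperature attribute (C2/194), the with/without comparison reduces to averaging a column in each log. A trivial sketch, with made-up file and column names:

```python
# Compare average SSD temperature between two logging runs (e.g. APM tweak on vs. off).
# File names and the "temperature_C" column are hypothetical.
import csv

def average_temperature(path, column="temperature_C"):
    with open(path, newline="") as f:
        temps = [float(row[column]) for row in csv.DictReader(f) if row[column]]
    return sum(temps) / len(temps)

for label, path in [("with APM tweak", "log_apm_on.csv"),
                    ("without APM tweak", "log_apm_off.csv")]:
    print(f"{label}: average temperature = {average_temperature(path):.1f} C")
```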
 

Lucretia19

Reputable
Feb 5, 2020
192
14
5,245
Suffice to say, for longevity the MX500 is inferior to the MX300.

A statement like that, unaccompanied by supporting reasons, should be heavily discounted. What was the basis for your claim? Anecdotes? Statistically significant empirical data?

The MX300 has a worse endurance spec than the MX500: 160 TB for the 525GB MX300, versus 180 TB for the 500GB MX500.

I don't know whether the MX300 has the same write amplification bug that the MX500 has. Assuming Diceman meant the MX300 doesn't have the bug, perhaps Diceman is assuming the user can do nothing to mitigate the bug in the MX500. If so, that's a dubious assumption because the "nearly nonstop selftests" task appears to mitigate the bug very well. During the past 18 months in which I've been running the selftests task on my 500GB MX500, WAF has been 3.35 and Remaining Life has dropped 2%. Here's the relevant data from the beginning and end of the most recent 18 months:
Date | Total Host Writes (GB) | S.M.A.R.T. F7 | S.M.A.R.T. F8 | Average Block Erase Count | ΔF7 | ΔF8 | WAF = 1 + ΔF8/ΔF7 | Years Remaining (estimated)
03/01/2020 | 6,512 | 226,982,040 | 1,417,227,966 | 118 | | | |
09/09/2021 | 9,372 | 344,166,910 | 1,692,576,926 | 149 | 117,184,870 | 275,348,960 | 3.35 | 66

(Every 15 increments of Average Block Erase Count correspond to a 1% decrease of Remaining Life.)

The host wrote about 2.8 TB during those 18 months. Multiply 2.8 TB by 100%/2% and I get an endurance prediction of about 140 TB. That's less than the 180 TB spec, but perhaps the MX300's endurance is less than its 160 TB spec too in the real world.
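For anyone who wants to reproduce that arithmetic, here it is as a few lines of Python (using the 15-erase-counts-per-1% relationship noted above):

```python
# Reproduce the WAF and endurance figures from the table above.
f7_start, f7_end = 226_982_040, 344_166_910        # Host Program Page Count (F7)
f8_start, f8_end = 1_417_227_966, 1_692_576_926    # FTL Program Page Count (F8)
host_writes_tb = (9_372 - 6_512) / 1_000           # host writes over the period, in TB
life_used_pct = (149 - 118) / 15                   # 15 erase counts ~= 1% of life

waf = 1 + (f8_end - f8_start) / (f7_end - f7_start)
endurance_tb = host_writes_tb * 100 / life_used_pct

print(f"WAF over the period: {waf:.2f}")             # ~3.35
print(f"Projected endurance: {endurance_tb:.0f} TB")  # ~138 TB, i.e. roughly 140
```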

Regarding buying recommendations, I plan to avoid Crucial SSDs. Also, the MX300 is no longer available according to: https://www.crucial.com/products/ssd/mx300-ssd
 

Diceman_2037

Distinguished
Dec 19, 2011
53
3
18,535
The MX300 has a worse endurance spec than the MX500: 160 TB for the 525GB MX300, versus 180 TB for the 500GB MX500.

On paper. In reality this means nothing: the MX500 will WAF itself to death in half the lifespan (if not less) of the MX300.

I don't know whether the MX300 has the same write amplification bug that the MX500 has

It does not.

My 275GB and 525GB MX300s, respectively:

ID Attribute Description Threshold Value Worst Data Status
01 Raw Read Error Rate 0 100 100 0 OK: Always passes
05 Reallocated Sector Count 10 100 100 0 OK: Value is normal
09 Power-On Hours Count 0 100 100 40312 OK: Always passes
0C Power Cycle Count 0 100 100 281 OK: Always passes
AB Program Fail Count 0 100 100 0 OK: Always passes
AC Erase Fail Count 0 100 100 0 OK: Always passes
AD Wear Leveling Count 0 74 74 396 OK: Always passes
AE Unexpected Power Loss Count 0 100 100 187 OK: Always passes
B7 SATA Interface Downshift 0 100 100 0 OK: Always passes
B8 Error Correction Count 0 100 100 0 OK: Always passes
BB Reported Uncorrectable Errors 0 100 100 0 OK: Always passes
C2 Enclosure Temperature 0 72 54 46, 18, 28 OK: Always passes
C4 Re-allocation Event Count 0 100 100 0 OK: Always passes
C5 Current Pending Sector Count 0 100 100 0 OK: Always passes
C6 SMART Off-line Scan Uncorrectable Error Count 0 100 100 0 OK: Always passes
C7 SATA/PCIe CRC Error Count 0 100 100 0 OK: Always passes
CA Percentage Of The Rated Lifetime Used 1 74 74 26 OK: Value is normal
CE Write Error Rate 0 100 100 0 OK: Always passes
F6 Total Host Sector Writes 0 100 100 51.89 TB OK: Always passes
F7 Host Program Page Count 0 100 100 3498322410 OK: Always passes
F8 FTL Program Page Count 0 100 100 1384576946 OK: Always passes
B4 Unused Reserve (Spare) NAND Blocks 0 0 0 1261 OK: Always passes
D2 Successful RAIN Recovery Count 0 100 100 0 OK: Always passes

ID Attribute Description Threshold Value Worst Data Status
01 Raw Read Error Rate 0 100 100 0 OK: Always passes
05 Reallocated Sector Count 10 100 100 0 OK: Value is normal
09 Power-On Hours Count 0 100 100 40992 OK: Always passes
0C Power Cycle Count 0 100 100 112 OK: Always passes
AB Program Fail Count 0 100 100 0 OK: Always passes
AC Erase Fail Count 0 100 100 0 OK: Always passes
AD Wear Leveling Count 0 97 97 45 OK: Always passes
AE Unexpected Power Loss Count 0 100 100 66 OK: Always passes
B7 SATA Interface Downshift 0 100 100 0 OK: Always passes
B8 Error Correction Count 0 100 100 0 OK: Always passes
BB Reported Uncorrectable Errors 0 100 100 0 OK: Always passes
C2 Enclosure Temperature 0 72 51 49, 18, 28 OK: Always passes
C4 Re-allocation Event Count 0 100 100 0 OK: Always passes
C5 Current Pending Sector Count 0 100 100 0 OK: Always passes
C6 SMART Off-line Scan Uncorrectable Error Count 0 100 100 0 OK: Always passes
C7 SATA/PCIe CRC Error Count 0 100 100 0 OK: Always passes
CA Percentage Of The Rated Lifetime Used 1 97 97 3 OK: Value is normal
CE Write Error Rate 0 100 100 0 OK: Always passes
F6 Total Host Sector Writes 0 100 100 10.13 TB OK: Always passes
F7 Host Program Page Count 0 100 100 680507201 OK: Always passes
F8 FTL Program Page Count 0 100 100 468021337 OK: Always passes
B4 Unused Reserve (Spare) NAND Blocks 0 0 0 1936 OK: Always passes
D2 Successful RAIN Recovery Count 0 100 100 0 OK: Always passes


vs. my 1TB and 2TB MX500s, respectively:

ID Attribute Description Threshold Value Worst Data Status
01 Raw Read Error Rate 0 100 100 0 OK: Always passes
05 Reallocated Sector Count 10 100 100 0 OK: Value is normal
09 Power-On Hours Count 0 100 100 32035 OK: Always passes
0C Power Cycle Count 0 100 100 58 OK: Always passes
AB Program Fail Count 0 100 100 0 OK: Always passes
AC Erase Fail Count 0 100 100 0 OK: Always passes
AD Wear Leveling Count 0 77 77 345 OK: Always passes
AE Unexpected Power Loss Count 0 100 100 41 OK: Always passes
B4 Unused Reserve (Spare) NAND Blocks 0 0 0 43 OK: Always passes
B7 SATA Interface Downshift 0 100 100 0 OK: Always passes
B8 Error Correction Count 0 100 100 0 OK: Always passes
BB Reported Uncorrectable Errors 0 100 100 0 OK: Always passes
C2 Enclosure Temperature 0 64 39 36 OK: Always passes
C4 Re-allocation Event Count 0 100 100 0 OK: Always passes
C5 Current Pending Sector Count 0 100 100 0 OK: Always passes
C6 SMART Off-line Scan Uncorrectable Error Count 0 100 100 0 OK: Always passes
C7 SATA/PCIe CRC Error Count 0 100 100 0 OK: Always passes
CA Percentage Of The Rated Lifetime Used 1 77 77 23 OK: Value is normal
CE Write Error Rate 0 100 100 0 OK: Always passes
D2 Successful RAIN Recovery Count 0 100 100 0 OK: Always passes
F6 Total Host Sector Writes 0 100 100 23.52 TB OK: Always passes
F7 Host Program Page Count 0 100 100 809192510 OK: Always passes
F8 FTL Program Page Count 0 100 100 9630333345 OK: Always passes


ID Attribute Description Threshold Value Worst Data Status
01 Raw Read Error Rate 0 100 100 0 OK: Always passes
05 Reallocated Sector Count 10 100 100 0 OK: Value is normal
09 Power-On Hours Count 0 100 100 22077 OK: Always passes
0C Power Cycle Count 0 100 100 36 OK: Always passes
AB Program Fail Count 0 100 100 0 OK: Always passes
AC Erase Fail Count 0 100 100 0 OK: Always passes
AD Wear Leveling Count 0 94 94 91 OK: Always passes
AE Unexpected Power Loss Count 0 100 100 18 OK: Always passes
B4 Unused Reserve (Spare) NAND Blocks 0 0 0 95 OK: Always passes
B7 SATA Interface Downshift 0 100 100 0 OK: Always passes
B8 Error Correction Count 0 100 100 0 OK: Always passes
BB Reported Uncorrectable Errors 0 100 100 0 OK: Always passes
C2 Enclosure Temperature 0 62 34 38 OK: Always passes
C4 Re-allocation Event Count 0 100 100 0 OK: Always passes
C5 Current Pending Sector Count 0 100 100 0 OK: Always passes
C6 SMART Off-line Scan Uncorrectable Error Count 0 100 100 0 OK: Always passes
C7 SATA/PCIe CRC Error Count 0 100 100 0 OK: Always passes
CA Percentage Of The Rated Lifetime Used 1 94 94 6 OK: Value is normal
CE Write Error Rate 0 100 100 0 OK: Always passes
D2 Successful RAIN Recovery Count 0 100 100 0 OK: Always passes
F6 Total Host Sector Writes 0 100 100 20.23 TB OK: Always passes
F7 Host Program Page Count 0 100 100 695626050 OK: Always passes
F8 FTL Program Page Count 0 100 100 4391474703 OK: Always passes
 

Lucretia19

Reputable
Feb 5, 2020
192
14
5,245
On paper. In reality this means nothing: the MX500 will WAF itself to death in half the lifespan (if not less) of the MX300.

It does not [have the same WAF bug as the MX500].

My 275GB MX300:
AD Wear Leveling Count 396
F6 Total Host Sector Writes 51.89 TB
F7 Host Program Page Count 3498322410
F8 FTL Program Page Count 1384576946
My 525GB MX300:
AD Wear Leveling Count 45
F6 Total Host Sector Writes 10.13 TB
F7 Host Program Page Count 680507201
F8 FTL Program Page Count 468021337
My 1TB MX500:
AD Wear Leveling Count 345
F6 Total Host Sector Writes 23.52 TB
F7 Host Program Page Count 809192510
F8 FTL Program Page Count 9630333345
My 2TB MX500:
AD Wear Leveling Count 91
F6 Total Host Sector Writes 20.23 TB
F7 Host Program Page Count 695626050
F8 FTL Program Page Count 4391474703

Thanks for providing more data. Below are my calculations of your four drives' WAFs and endurance predictions, using the following formulas:
WAF = 1 + F8/F7
Endurance Prediction = F6 x 1500/AD (Note: This is an oversimplification.)
MX300 275GB: WAF = 1.40 Endurance Prediction = 197 TB
MX300 525GB: WAF = 1.69 Endurance Prediction = 338 TB
MX500 1TB: WAF = 12.90 Endurance Prediction = 102 TB
MX500 2TB: WAF = 7.31 Endurance Prediction = 333 TB
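Those numbers come straight from the two formulas; here's the same arithmetic as a short script (the 1500 is 15 erase counts per 1% of rated life, times 100%):

```python
# Compute WAF and a rough endurance projection for each drive from the SMART data above.
# WAF = 1 + F8/F7; endurance (TB) ~= F6 * 1500 / AD, where 1500 comes from
# 15 erase counts per 1% of rated life, times 100%. An oversimplification, as noted.
drives = {
    "MX300 275GB": {"F6_tb": 51.89, "F7": 3_498_322_410, "F8": 1_384_576_946, "AD": 396},
    "MX300 525GB": {"F6_tb": 10.13, "F7": 680_507_201,   "F8": 468_021_337,   "AD": 45},
    "MX500 1TB":   {"F6_tb": 23.52, "F7": 809_192_510,   "F8": 9_630_333_345, "AD": 345},
    "MX500 2TB":   {"F6_tb": 20.23, "F7": 695_626_050,   "F8": 4_391_474_703, "AD": 91},
}

for name, d in drives.items():
    waf = 1 + d["F8"] / d["F7"]
    endurance_tb = d["F6_tb"] * 1500 / d["AD"]
    print(f"{name}: WAF = {waf:.2f}, projected endurance = {endurance_tb:.0f} TB")
```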

I agree that your MX300 drives appear not to have the WAF bug.

Crucial claims the endurance is 360 TB for the 1TB MX500 and 700 TB for the 2TB MX500, so those two drives do indeed appear headed for premature death, unless you mitigate the problem. If your MX500 drives' high WAFs are caused by the bug, they would probably benefit a lot by running the selftests task (assuming you don't mind the increased power consumption that the selftests would cause by preventing the drives from entering low power mode, approximately one extra watt per drive). Because of the option to run a selftests task, it's NOT inevitable that MX500 drives die radically premature deaths.
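For what it's worth, the selftests task itself can be as simple as a loop that starts an extended self-test, lets it run for most of a cycle, aborts it, and repeats. A sketch of one possible implementation follows; the cycle lengths are example values, not a claim about the best duty cycle, and /dev/sdX is a placeholder:

```python
# One possible "nearly nonstop selftests" loop using smartctl:
# start an extended (long) self-test, let it run for most of the cycle,
# abort it, pause briefly, then repeat. Timings and device path are placeholders.
import subprocess, time

DEVICE = "/dev/sdX"    # placeholder for the MX500's device node
RUN_SECONDS = 1170     # e.g. ~19.5 minutes of self-test per 20-minute cycle
PAUSE_SECONDS = 30     # brief gap before the next self-test

def smartctl(*args):
    subprocess.run(["smartctl", *args, DEVICE], check=True,
                   capture_output=True, text=True)

while True:
    smartctl("-t", "long")   # start an extended self-test (runs inside the drive)
    time.sleep(RUN_SECONDS)
    smartctl("-X")           # abort the self-test so the next cycle starts fresh
    time.sleep(PAUSE_SECONDS)
```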

Because the MX500 spends a lot of time in low power mode in typical use, your MX500 drives' Power-On Hours appear much higher than what I would expect: 32,035 for the 1TB drive and 22,077 for the 2TB drive. Also, their Power Cycle Counts are pretty low: 58 and 36. How have you been using the drives? How long have they been in service? Have you ever updated their firmware?