Question Crucial MX500 500GB sata ssd Remaining Life decreasing fast despite few bytes being written

Page 7 - Tom's Hardware forums
Jan 6, 2021
I analysed in ProcMon which processes produce lots of SSD writes; for me they are Firefox, Discord and wmiprvse.exe. I don't think I can change the way they write logs or caches, and tbh I don't want to.

I also have a 12GB pagefile due to my low 8GB RAM amount (I can't reasonably add more because my free RAM slots got toasted and don't recognise RAM sticks). Moving the pagefile to an HDD would lead to a significant reduction in performance, and HDDs aren't exactly reliable either; I think they're even less reliable.

So I guess I'm stuck with things as they are. I launched selftesting again, now with 900 seconds of tests and 25 seconds of pause between them. I'll see what WAF I get with these parameters.
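For reference, the selftest schedule being described (900-second tests, 25-second pauses) can be sketched as a small host-side loop. This is a hedged sketch, not the actual .bat file from this thread: it assumes smartctl is installed and that the MX500 is at /dev/sda (both assumptions). `smartctl -t long` starts an extended selftest in the background and `smartctl -X` aborts it.

```python
import subprocess
import time

DEVICE = "/dev/sda"   # assumption: adjust to the MX500's actual device path
TEST_SECONDS = 900    # selftest runtime per cycle (from the post)
PAUSE_SECONDS = 25    # pause between selftests (from the post)

def duty_cycle(test_s: float, pause_s: float) -> float:
    """Fraction of wall-clock time the ssd spends busy selftesting."""
    return test_s / (test_s + pause_s)

def run_selftest_cycle() -> None:
    """Start an extended selftest, let it run, abort it, then pause."""
    # -t long starts an extended (background) selftest on the drive
    subprocess.run(["smartctl", "-t", "long", DEVICE], check=False)
    time.sleep(TEST_SECONDS)
    # -X aborts the selftest in progress, giving the firmware a short idle window
    subprocess.run(["smartctl", "-X", DEVICE], check=False)
    time.sleep(PAUSE_SECONDS)

if __name__ == "__main__":
    print(f"duty cycle: {duty_cycle(TEST_SECONDS, PAUSE_SECONDS):.1%}")
    # To actually run the schedule (requires root and smartctl):
    # while True:
    #     run_selftest_cycle()
```

With 900/25 the drive is kept busy about 97% of the time, close to the 19.5-of-20-minutes duty cycle used later in the thread.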
Here is the last update:
| Date | F7 | F8 | WAF | ΔF7 | ΔF8 | ΔWAF (1 + ΔF8/ΔF7) | Notes |
|------|----|----|-----|-----|-----|--------------------|-------|
| 7.01 | 394 578 354 | 2 943 521 015 | 8.459915 | | | | started selftesting |
| 8.01 | 395 887 798 | 2 949 986 622 | 8.451572 | 1 309 444 | 6 465 607 | 5.937674 | |
| 9.01 | 398 680 065 | 2 957 139 291 | 8.417324 | 2 792 267 | 7 152 669 | 3.561599 | |
| 10.01 | 400 055 198 | 2 962 740 964 | 8.40583 | 1 375 133 | 5 601 673 | 5.07355 | |
| 10.01 | 400 839 906 | 2 966 725 089 | 8.401272 | 784 708 | 3 984 125 | 6.077207 | last measure before changing test period to 660 seconds |
| 11.01 | 401 773 067 | 2 971 735 280 | 8.396552 | 933 161 | 5 010 191 | 6.369053 | day after changing test period to 660 seconds |
| 12.01 | 403 701 933 | 2 981 646 172 | 8.385761 | 1 928 866 | 9 910 892 | 6.138196 | last measure before secure erase; life percentage dropped to 71% |
| 13.01 | 413 753 845 | 2 991 460 117 | 8.230048 | 10 051 912 | 9 813 945 | 1.976326 | day after performing secure erase via ATA Sanitize command in Parted Magic; no selftests running |
| 14.01 | 415 567 163 | 3 005 180 615 | 8.231516 | 1 813 318 | 13 720 498 | 8.566515 | |
| 15.01 | 416 538 351 | 3 017 053 153 | 8.243158 | 971 188 | 11 872 538 | 13.22476 | |
| 16.01 | 418 055 289 | 3 027 454 391 | 8.241756 | 1 516 938 | 10 401 238 | 7.856732 | |
| 17.01 | 419 329 203 | 3 040 918 218 | 8.251864 | 1 273 914 | 13 463 827 | 11.56887 | |
| 18.01 | 420 776 559 | 3 054 904 989 | 8.26016 | 1 447 356 | 13 986 771 | 10.66367 | |
| 19.01 | 421 350 800 | 3 060 572 692 | 8.263716 | 574 241 | 5 667 703 | 10.8699 | last measure before resuming selftesting, selftestfseconds=900 |
 
Jan 15, 2021
I corrected the table; it's now the way you asked. I also corrected the WAF values, which were really wrong; I was careless with the calculations, but I believe the values are correct now.

I always record these values at 7:30 am. For example, I started checking on 01/14/2021; on 01/15 at 7:30 am I recorded the data shown in the table for day 15, and so on for the following days.

On the most recent day, 01/18, I copied the data at 00:30 am, but I corrected the table with the values copied at 7:30 am today, as I do on the other days.
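The corrected WAF columns can be reproduced from consecutive F7/F8 readings using the formula used throughout this thread, WAF = 1 + ΔF8/ΔF7. A minimal sketch (the function names are mine):

```python
def deltas(prev_f7: int, prev_f8: int, f7: int, f8: int) -> tuple:
    """Day-over-day change in F7 (host page writes) and F8 (FTL/background page writes)."""
    return f7 - prev_f7, f8 - prev_f8

def waf(delta_f7: int, delta_f8: int) -> float:
    """Write amplification factor over an interval, per the thread's formula:
    total NAND writes per host write = 1 + deltaF8/deltaF7."""
    return 1 + delta_f8 / delta_f7

# Example using the 7.01 and 8.01 rows of the table above:
d7, d8 = deltas(394_578_354, 2_943_521_015, 395_887_798, 2_949_986_622)
print(d7, d8, round(waf(d7, d8), 6))
```

Running this on the table's 7.01/8.01 pair reproduces the ΔF7 = 1 309 444, ΔF8 = 6 465 607 and daily WAF of about 5.94 shown in the corrected table.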

My question about how much time your pc was powered on during the 3 months was NOT about how much time you spent using it. Even when you aren't using the pc, Windows does a lot of writing to the system drive (C:) while the pc is powered on and not sleeping or hibernating. So, my question was about how much time your pc was powered on and either in use or idle, and also whether your pc is set to stay awake while it's idle, or go to sleep, or hibernate.
I'm sorry if I didn't answer your questions correctly; English is not my native language, so much so that I'm using an online translator to help me in this conversation.

If I understood your question well, you wanted to know how long my PC was on, and of that time, how many hours it was in use and how many it was idle, is that it? On a normal day I don't use it for more than 8 hours, idle time included; half of that time is in the morning, the rest at the end of the day, around 6 pm. Sometimes the total extends to 10 ~ 12 hours of use, but those are exceptions. It is not on 24 hours a day, because at the moment that is impossible; it is only on during the hours of use, and then it is completely shut down.

But even of those 8 to 10 hours, I think it is sometimes idle for 2 to 3 hours. As for whether it is configured to stay awake when idle: in the Windows power options, I configured it to never suspend. The monitor is configured to turn off after 10 minutes of idle. Hibernation is disabled; that is something I always do when installing Windows.

Basically, this is my style of use during these three months, but I didn't turn the PC on every day; sometimes I went 2 to 3 days a week without turning it on.
 

Lucretia19

Prominent
Feb 5, 2020
If I understood your question well, you wanted to know how long my PC was on, and of that time, how many hours it was in use and how many it was idle, is that it? On a normal day I don't use it for more than 8 hours, idle time included,

[-snip-]

As for whether it is configured to stay awake when idle: in the Windows power options, I configured it to never suspend.
I should have expressed my question more simply: On average, how many hours per day is the computer powered on and not asleep? (Windows writes a lot while the pc is on, unless the pc is asleep.) If I'm correctly interpreting your answer, the average is approximately 8 hours per day on and not asleep, and about 16 hours per day off.

For comparison, my pc averages nearly 24 hours per day on and not asleep. So, all else being equal, I would expect Windows background services to write about 3 times more on my pc than on yours. The amounts I'm referring to are a large fraction of ΔF7.

Over the 6 days or rows from 1.17.2021 to 1.22.2021, your ΔF7 and ΔF8 values are excellent.

I think the sum ΔF7+ΔF8 is the most useful indicator of the rate that ssd life is being consumed. Your sum has averaged about 166,000 over the last 6 days. If you continue at this excellent rate, when your ssd eventually dies it will be for some other reason, like a lightning strike or meteorite strike.

Whenever you log ssd SMART data, you may want to include the Average Block Erase Count too. It provides a direct measure of ssd Remaining Life. Every 15 increments of ABEC correspond to 1% of life used.

I also log a few other values. Including Power Cycle Count, because there may be a correlation between Daily WAF and the number of days since the most recent power cycle. I haven't tried to do a rigorous analysis of my logs to verify whether that correlation is real, but every few days I sleep my pc for a few seconds because a pc sleep causes the ssd to power off. If I ever do try to verify the correlation, I'll have a lot of data available to analyze.
 

Lucretia19

Firefox may have been improved by a January update so that it writes much less. Three observations support this conclusion:

1. Every night I run a mirror backup process for my internal hard drives, to ensure my external drive has the latest versions of the hard drive files. The mirror process deletes changed versions of files from the external drive and leaves them in my Recycle Bin, so each morning I glance at them before emptying the Recycle Bin. Beginning about a month ago, I noticed that there are fewer files in the Recycle Bin each morning, and all (or most) of the files no longer there are Firefox files. (Note: My Firefox profile and cache are on a hard drive.) Their absence from the Recycle Bin means Firefox stopped changing them (unless Firefox is fooling the mirror backup software, FreeFileSync, by no longer updating file timestamps when writing and leaving their size unchanged).

2. The ssd SMART data that StrikerFX posted shows his writing by pc to ssd significantly decreased: the ΔF7 values on and after January 17 are much lower than the ΔF7 values before January 17. StrikerFX wrote 6 days ago that he uses Firefox a lot: "My use during the 3 months was very simple [...]. Most of it I spent just browsing, more than 90% using firefox [...]."

3. I ran Procmon for awhile, configured to display all write events to any drive when the path includes "firefox" or "mozilla." During that time, I browsed the web using Firefox and Procmon displayed no write events.

An alternative theory is that Mozilla changed the way Firefox stores data. For example, it might now use a Windows service to store Firefox data in a Windows database that apps can access. However, I think this theory is probably false. The ΔF7 of my ssd has remained fairly low, averaging around 20 to 30 kbytes/second, which implies Firefox didn't start writing to ssd the data that it used to write to hard drive.
 

Lucretia19

First Annual Report on the success of the ssd selftests.

For the last year, since 3/01/2020, my pc has been running the selftests .bat file (with a duty cycle of 19.5 minutes of every 20 minutes, except for four days in late March 2020 when I experimented with a duty cycle of 880 seconds of each 900 seconds). The pc runs 24 hours per day, with rare exceptions. Here's the result as of 3/01/2021 (one year later):
| Date | Total Host Writes (GB) | S.M.A.R.T. F7 | S.M.A.R.T. F8 | Power On Hours | Average Block Erase Count | Power Cycle Count | ΔF7 | ΔF8 | WAF = 1 + ΔF8/ΔF7 |
|------|------------------------|---------------|---------------|----------------|---------------------------|-------------------|-----|-----|-------------------|
| 03/01/2020 | 6,512 | 226,982,040 | 1,417,227,966 | 1,223 | 118 | 99 | | | |
| 03/01/2021 | 8,757 | 317,090,210 | 1,607,822,549 | 9,781 | 141 | 157 | 90,108,170 | 190,594,583 | 3.12 |

(ΔF7, ΔF8 and WAF are computed over the year from 3/01/2020 to 3/01/2021.)

The decrease of ssd Remaining Life is derived from the increase of Average Block Erase Count, because each 15 increments of ABEC correspond to 1% of life consumed. ABEC increased by 23 (141-118), and 23/15 = 1.533, which means 1.533% of the ssd life was consumed during the last year. This corresponds to a lifetime of about 65 years, which is very pleasing.
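The arithmetic above can be written out as a small helper. The 15-increments-of-ABEC-per-1%-of-life rule is specific to the MX500, as stated earlier in this thread; the linear projection is naive by construction:

```python
def life_used_percent(abec_start: int, abec_end: int) -> float:
    """Percent of rated ssd life consumed between two Average Block Erase
    Count readings: every 15 ABEC increments = 1% of life (Crucial MX500)."""
    return (abec_end - abec_start) / 15

def projected_lifetime_years(percent_used: float, years_elapsed: float) -> float:
    """Naive linear projection of total drive lifetime from recent wear rate."""
    return 100 / percent_used * years_elapsed

# One year of data from the table above: ABEC went from 118 to 141.
used = life_used_percent(118, 141)
print(round(used, 3), round(projected_lifetime_years(used, 1.0), 1))
```

This reproduces the figures in the paragraph above: about 1.533% of life consumed in a year, implying a lifetime of roughly 65 years at that rate.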

However, this very long expected life shouldn't be attributed solely to the selftests. My pc wrote only about 2.2 TBytes (8,757GB - 6,512GB) to the ssd during the last year. Most users write to their ssds at a much higher rate.

Here's the result over the last 6-ish months, from 8/21/2020 to 3/01/2021, during which I reduced the rate of host pc writing even further (by moving more log files to a hard drive):
| Date | Total Host Writes (GB) | S.M.A.R.T. F7 | S.M.A.R.T. F8 | Power On Hours | Average Block Erase Count | Power Cycle Count | ΔF7 | ΔF8 | WAF = 1 + ΔF8/ΔF7 |
|------|------------------------|---------------|---------------|----------------|---------------------------|-------------------|-----|-----|-------------------|
| 08/21/2020 | 7,938 | 283,519,047 | 1,505,901,288 | 5,283 | 131 | 111 | | | |
| 03/01/2021 | 8,757 | 317,090,210 | 1,607,822,549 | 9,781 | 141 | 157 | 33,571,163 | 101,921,261 | 4.04 |

(ΔF7, ΔF8 and WAF are computed over the period from 8/21/2020 to 3/01/2021.)

Only 0.67% of ssd life ((141-131)/15) was consumed between 8/21/2020 and 3/01/2021, which corresponds to an ssd lifetime of more than 75 years. The pc wrote only 819 GB to the ssd during this period. This rate is much lower than during 3/01/2020 to 8/21/2020, when 1,426 GB were written. The decreased rate of writing corresponds to an increase of WAF (4.04 during 8/21/2020 to 3/01/2021, compared to 2.56 during 3/01/2020 to 8/21/2020). This suggests 819 GB per 6-ish months is less than the optimal rate of writing if the goal is to minimize WAF. (Minimizing WAF is NOT the goal, though.)

A 65 year ssd lifetime is much longer than anyone should need. (A post-apocalyptic immortal is the obvious exception.) So there's no need for anyone to reduce host pc writing to their ssd as much as I have. Unless the pc is writing to the ssd at an abnormally high rate, simply running the selftests .bat should suffice for most users of Crucial ssds.
 

solidstatebrain

Distinguished
Oct 26, 2012
I used to have a huge WAF on my Crucial MX500 purchased in 2019. I did find it to be strongly correlated with SSD activity (if the SSD is busy, WAF is lower) and loosely correlated with TRIM activation (without TRIM enabled, WAF is lower).

It appears that this SSD has a faulty or buggy power management, and FTL writes increase as a result of it. A related side effect is that power on hours would proceed slower than real-time. It's been over 2 years since I installed this SSD in my 24/7 always-on PC, and power on hours are still at about 11250. Keeping the SSD "alive" with a low read load would also mitigate this issue.

However, I recently upgraded to an Intel B560 platform and now all of those problems appear to have gone. Power On Hours now proceed as fast as realtime, and the calculated differential WAF (calculated as "1 + deltaF8/deltaF7") is now very close to 1.

So, you should try checking out if the same behavior is observed on a modern PC. I previously had an Intel H77-based motherboard (Intel Core 3rd gen) and the issue occurred regardless of the OS.
 

solidstatebrain

Distinguished
Oct 26, 2012
26
0
18,540
1
It looks like I may have written the previous post too soon. The observations were with the SSD used as a secondary (unused) storage unit in my new computer configuration. After I tried installing the OS (Windows 10) on top of it, power-on-hours started advancing slowly again. However, the short-term WAF still appears to be just above 1.

I'm monitoring SMART attributes every minute. Relevant data can be plotted like this:

 

Lucretia19

I used to have a huge WAF on my Crucial MX500 purchased in 2019. I did find it to be strongly correlated with SSD activity (if the SSD is busy, WAF is lower) and loosely correlated with TRIM activation (without TRIM enabled, WAF is lower).

It appears that this SSD has a faulty or buggy power management, and FTL writes increase as a result of it. A related side effect is that power on hours would proceed slower than real-time. It's been over 2 years since I installed this SSD in my 24/7 always-on PC, and power on hours are still at about 11250. Keeping the SSD "alive" with a low read load would also mitigate this issue.
I don't see the reason to believe the ssd's power management is the cause of the excess FTL writes. Power On Hours (POH) is a measure of the amount of time that the MX500 spends in normal power mode. It doesn't count the time spent in low power mode. It's normal behavior for the MX500's POH to advance more slowly than real-time, because the ssd puts itself in a low power mode whenever it's not busy... and most of the time it's not busy (not servicing a read or write request, nor rewriting sectors to manage wear-leveling). (In some applications, though, an ssd might be kept very busy, such as an ssd used to record streams of surveillance video.)

I'm unsure what you meant by "keeping the ssd alive with a low read load." I'll assume you meant you added an ssd-reading process to the host pc, and that this reading process ran as often as the pc had time to run it. That sounds similar to my solution "keep the ssd busy with continual selftests (orchestrated by the host pc)." Can you be more specific about what your "low read load" was doing?

My interpretation of the correlation between continual selftests and greatly increased POH is that increased POH is just an unsurprising side effect of keeping the ssd busy... it doesn't spend time in low power mode. That correlation doesn't imply power management is the cause of the excessive NAND writing problem. It's not the only correlation. Keeping the ssd busy consumes the runtime available to the ssd firmware, which starves all lower priority ssd firmware routines of runtime. The reduced runtime of lower priority routines can't be directly observed in SMART data, but it makes sense that the reduction of their runtime must happen when the ssd is kept busy running higher priority routines. That's a correlation. If I'm right, one of those lower priority routines is the culprit, and when it gets starved of runtime the bug doesn't manifest. (I pause the selftests for 30 seconds every 20 minutes, instead of nonstop selftests, because I don't want to totally starve the lower priority processes, since they also do essential work.)

I presume the firmware routine that manages wear-leveling is one of those lower priority routines. Normal behavior of a wear-leveling routine involves writing NAND pages, but I see no reason why a power management routine would include any NAND writing. This is why I believe the correct explanation is more likely to be a bug in the ssd's wear-leveling routine than a bug in the ssd's power management. (Also, I think at least part of a power management routine would need to be high priority -- not starvable -- because it needs to immediately switch the ssd from low power mode to normal power mode when the ssd receives a read or write request from the host pc.)

Another correlation is with the C5 attribute in the SMART data. C5 is the number of "Current Pending Sectors," which refers to sectors that failed to read properly and might need to be rewritten and remapped, depending on whether they can be read okay later. The correlation is that C5 switches from 0 to 1 at the start of each buggy burst of FTL NAND writes, and switches back to 0 at the end of each burst. Trying to identify the cause-and-effect implied by this correlation isn't easy, and I can only speculate. My speculation: the NAND memory that Crucial "buys" from their parent company Micron isn't quite fast enough to keep up with the fast data rate that Crucial chose in order to offer a competitive product, so occasionally there's a "soft" read error, which sets an internal flag (related to C5). Some buggy hardware or firmware then fails to reset the flag in a timely manner, and the firmware interprets the "stuck" flag during other sector reads as indicating those other sectors are suspect too. That quickly adds up to a lot of suspected sectors that the firmware thinks should be rewritten to good sectors... hence a burst of writing later, when the rewriting routine is given some runtime. Assuming this rewriting is done by a routine that's lower priority than reading and selftests, it would be prevented by keeping the ssd busy. (This rewriting isn't necessarily part of the same routine that manages wear-leveling, but if I were designing the firmware I think I'd design a single routine to manage all rewriting.)

However, I recently upgraded to an Intel B560 platform and now all of those problems appear to have gone. Power On Hours now proceed as fast as realtime, and the calculated differential WAF (calculated as "1 + deltaF8/deltaF7") is now very close to 1.

So, you should try checking out if the same behavior is observed on a modern PC. I previously had an Intel H77-based motherboard (Intel Core 3rd gen) and the issue occurred regardless of the OS.
I consider my pc to be reasonably "modern." It was built in August 2019 using an MSI X470 Gaming Plus motherboard and a Ryzen 3200G.

It looks like I may have written the previous post too soon. The observations were with the SSD used as a secondary (unused) storage unit in my new computer configuration. After I tried installing the OS (Windows 10) on top of it, power-on-hours started advancing slowly again. However, the short-term WAF still appears to be just above 1.
I don't understand why your ssd's POH indicated the ssd avoided low power mode while the ssd was unused secondary storage. I would expect the opposite behavior -- spending nearly all its time in low power mode -- if it wasn't being used. Are you certain that POH advanced like realtime when the ssd was "on" but unused?

Did you erase the ssd before you transferred it to your new B560 computer? If so, did you use the ssd's Secure Erase routine?

For how many days (hours?) after the Windows 10 installation has WAF remained close to 1?

If F7 is large, the effect of the bug on F8 might be relatively small compared to the large F7's normal effect on F8. In other words, differential WAF would be expected to remain low when delta F7 is high. If I'm correctly interpreting your upper graph, the data covers about 8 hours, and your host pc is writing about 100,000 NAND pages per hour to the ssd. That's approximately 3 GB/hour. That may be large enough that you might not notice the bug's effect on F8 and WAF. For comparison, my pc has averaged 0.129 GB/hour of writing to the ssd -- according to HWiNFO, which shows 36 KB/second averaged over the last 547 hours -- which is a MUCH lower rate than your pc appears to be writing.
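For reference, the HWiNFO rate conversion above works out as follows (assuming decimal units, 1 GB = 10^6 KB, which matches the 0.129 GB/hour figure quoted):

```python
def kb_per_s_to_gb_per_hour(kb_s: float) -> float:
    """Convert an average write rate from KB/s to GB/hour.
    Uses decimal units (1 GB = 1e6 KB), as HWiNFO's figures appear to."""
    return kb_s * 3600 / 1e6

# 36 KB/s averaged over 547 hours, as reported by HWiNFO in the post:
print(kb_per_s_to_gb_per_hour(36))
```

36 KB/s works out to about 0.13 GB/hour, versus roughly 3 GB/hour estimated from the graph, a factor of about 23.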

As an experiment, can you set your pc to write much less than 3GB/hour to the ssd, for at least a few days? (If the ssd isn't the only drive in your pc, perhaps you could redirect most of the writing to a different drive.)

Your lower graph appears to show two bursts of NAND writing. (The taller burst might have been a long burst or multiple bursts. On my ssd, the duration of each burst appears to be a multiple of about 5 seconds, with 5 seconds being the most common duration.) Do you not interpret those two bursts as indications of the buggy behavior?

I hope you will continue to log ssd stats, and that you will post stats accumulated over months. Stats posted as graphs are useful, but I also like text that I can quickly paste into a spreadsheet and rigorously analyze.

Can you also log the C5 attribute, to detect when it briefly changes to 1? My experience is that it correlates perfectly with the buggy NAND write bursts... 1 while each burst is in progress. (There was a time when I logged ssd SMART data every second, which established the perfect correlation. I also established that the write bursts occur only during the pauses between selftests, and I now log C5 every second only during the pauses.) If you log C5 at a high rate of logging, that should provide a precise measure of the portion of F8 writing caused by the bug. Given that each burst lasts a multiple of 5 seconds, logging C5 more frequently than once every 5 seconds should allow you to detect each write burst, and calculate their durations and number of NAND pages written.
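High-rate C5 logging like this could be sketched as follows. This is a hedged illustration, not the thread's actual logging script: it assumes the tabular output of `smartctl -A`, where attribute 197 (C5, Current_Pending_Sector) appears as a numbered row with the raw value in the last column, and the device path and polling loop are placeholder assumptions.

```python
import subprocess
import time

def parse_c5(smartctl_output: str) -> int:
    """Extract the raw value of attribute 197 (C5, Current Pending Sectors)
    from the tabular text output of `smartctl -A`."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "197":
            return int(fields[-1])  # raw value is the last column
    raise ValueError("attribute 197 not found in smartctl output")

def poll_c5(device: str = "/dev/sda", interval_s: float = 1.0):
    """Generator yielding (timestamp, C5) once per interval.
    Illustrative only; requires root and smartctl on the host."""
    while True:
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        yield time.time(), parse_c5(out)
        time.sleep(interval_s)
```

A 0-to-1 transition in the yielded C5 values would mark the start of a write burst, and the 1-to-0 transition its end, from which burst durations can be computed.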

Below is a summary of my C5 (write burst) data recorded every second during the pauses between selftests, over the last 24 hours. Most of the pauses have no write bursts. If you compare the pauses before 7/26 15:03 with the pauses after 15:03, you'll see that far fewer of the pauses after 15:03 had write bursts... 9 bursts in 17 pauses before 15:03, compared to only 4 bursts in 57 pauses after 15:03. The explanation is that I put the pc to sleep for about 10 seconds between 14:24 and 14:44 because the ssd had been on for 15 days without being power-cycled, and I've observed that write bursts tend to be more frequent when the ssd has been on for many days without being power-cycled.

Date, Time, PauseDuration, BurstStartOffset, BurstDuration
07/26/2021,9:04:31.12, 26, 9, 5
07/26/2021,9:24:39.10, 25, 15, 6
07/26/2021,9:44:57.13, 24, , none
07/26/2021,10:04:45.17, 24, 21, 4
07/26/2021,10:24:52.15, 29, , none
07/26/2021,10:44:20.15, 29, 1, 5
07/26/2021,11:04:23.14, 28, 4, 4
07/26/2021,11:24:55.19, 27, , none
07/26/2021,11:44:24.20, 26, 2, 5
07/26/2021,12:04:23.11, 25, 1, 4
07/26/2021,12:24:31.18, 24, 2, 10
07/26/2021,12:44:52.13, 23, , none
07/26/2021,13:04:20.13, 29, 1, 5
07/26/2021,13:24:53.18, 29, , none
07/26/2021,13:44:54.18, 28, , none
07/26/2021,14:04:55.17, 27, , none
07/26/2021,14:24:56.17, 26, , none
07/26/2021,15:03:17.14, 29, 4, 4
07/26/2021,15:23:48.12, 29, , none
07/26/2021,15:43:49.15, 28, , none
07/26/2021,16:03:50.12, 27, , none
07/26/2021,16:23:51.18, 26, , none
07/26/2021,16:43:52.14, 25, , none
07/26/2021,17:03:52.14, 24, , none
07/26/2021,17:23:47.18, 23, , none
07/26/2021,17:43:47.11, 29, , none
07/26/2021,18:03:48.15, 28, , none
07/26/2021,18:23:18.17, 28, 3, 5
07/26/2021,18:43:50.11, 27, , none
07/26/2021,19:03:51.15, 26, , none
07/26/2021,19:23:52.17, 25, , none
07/26/2021,19:43:53.14, 24, , none
07/26/2021,20:03:47.14, 23, , none
07/26/2021,20:23:47.12, 29, , none
07/26/2021,20:43:48.15, 29, , none
07/26/2021,21:03:49.15, 28, , none
07/26/2021,21:23:50.12, 27, , none
07/26/2021,21:43:51.16, 26, , none
07/26/2021,22:03:52.11, 25, , none
07/26/2021,22:23:53.16, 24, , none
07/26/2021,22:43:47.11, 23, , none
07/26/2021,23:03:47.11, 29, , none
07/26/2021,23:23:48.18, 29, , none
07/26/2021,23:43:49.16, 28, , none
07/27/2021,0:03:50.19, 27, , none
07/27/2021,0:23:51.11, 26, , none
07/27/2021,0:43:52.15, 25, , none
07/27/2021,1:03:53.18, 24, , none
07/27/2021,1:23:47.17, 23, , none
07/27/2021,1:43:47.13, 29, , none
07/27/2021,2:03:49.11, 29, , none
07/27/2021,2:23:49.13, 28, , none
07/27/2021,2:43:50.14, 27, , none
07/27/2021,3:03:51.13, 26, , none
07/27/2021,3:23:52.12, 25, , none
07/27/2021,3:43:53.12, 24, , none
07/27/2021,4:03:47.14, 23, , none
07/27/2021,4:23:47.16, 29, , none
07/27/2021,4:43:48.19, 29, , none
07/27/2021,5:03:49.12, 28, , none
07/27/2021,5:23:50.17, 27, , none
07/27/2021,5:43:51.12, 26, , none
07/27/2021,6:03:52.12, 25, , none
07/27/2021,6:23:24.15, 24, 5, 5
07/27/2021,6:43:47.11, 23, , none
07/27/2021,7:03:47.13, 29, , none
07/27/2021,7:23:48.13, 29, , none
07/27/2021,7:43:49.11, 28, , none
07/27/2021,8:03:17.17, 27, 1, 5
07/27/2021,8:23:51.17, 26, , none
07/27/2021,8:43:52.17, 25, , none
07/27/2021,9:03:53.19, 24, , none
07/27/2021,9:23:47.12, 23, , none
07/27/2021,9:43:47.18, 29, , none


It's convenient that sleeping the pc for a moment causes the ssd to power-cycle, so it's not necessary to shutdown the pc. The only inconvenience is that power-cycling the ssd interrupts the selftest in progress, so I also restart my selftests .bat task after I wake the pc.
 

solidstatebrain

I don't see the reason to believe the ssd's power management is the cause of the excess FTL writes. Power On Hours (POH) is a measure of the amount of time that the MX500 spends in normal power mode. It doesn't count the time spent in low power mode. It's normal behavior for the MX500's POH to advance more slowly than real-time, because the ssd puts itself in a low power mode whenever it's not busy... and most of the time it's not busy (not servicing a read or write request, nor rewriting sectors to manage wear-leveling). (In some applications, an ssd might be kept very busy, though, such as an ssd used to record streams of surveillance video.)
It's not the normal behavior of any other SSD I've owned so far. If the SSD is connected to SATA power, POH should advance regardless of its sleep state. Keeping the SSD busy enough so that POH advance at a normal rate (presumably, never making it sleep) appears to greatly reduce or remove the extreme WAF effect.

I'm unsure what you meant by "keeping the ssd alive with a low read load." I'll assume you meant you added an ssd-reading process to the host pc, and that this reading process ran as often as the pc had time to run it. That sounds similar to my solution "keep the ssd busy with continual selftests (orchestrated by the host pc)." Can you be more specific about what your "low read load" was doing?
I wrote 'alive' but probably a better term would have been 'awake'. Yes, in practice at some point I set up a low-priority, low-impact constant background read task using the program called Flexible I/O tester, but I don't remember what settings I used at the time.

My interpretation of the correlation between continual selftests and greatly increased POH is that increased POH is just an unsurprising side effect of keeping the ssd busy. [....]

I presume the firmware routine that manages wear-leveling is one of those lower priority routines. Normal behavior of a wear-leveling routine involves writing NAND pages, but I see no reason why a power management routine would include any NAND writing. This is why I believe the correct explanation is more likely to be a bug in the ssd's wear-leveling routine than a bug in the ssd's power management.
Yes, this could be a valid explanation. It's possible that the SSD is aggressively shuffling the data to avoid performance degradation on static data as seen on the first-gen Samsung 840.

However, it's also likely that such performance degradation occurs quicker than it should if the SSD is sleeping even when it's not supposed to (e.g. DEVSLP option explicitly disabled in the SATA controller). This is not even taking into account whether some additional bug is making the problem worse when the SSD frequently cycles through such sleeping state.

Another correlation is with the C5 attribute in the SMART data. C5 is the number of "Current Pending Sectors," which refers to sectors that failed to read properly, which might need to be rewritten and remapped depending on whether they can be read okay later. [...]
I did notice that too. Pending sectors appeared more when write amplification was high. Possibly I didn't sample SMART attributes frequently enough to catch them all. This is from a few days of data from last year, again sampled every minute.



[...]

I don't understand why your ssd's POH indicated the ssd avoided low power mode while the ssd was unused secondary storage. I would expect the opposite behavior -- spending nearly all its time in low power mode -- if it wasn't being used. Are you certain that POH advanced like realtime when the ssd was "on" but unused?
I'm not sure either. I thought that was the effect of the new computer and SATA controller. I sampled power-on-hours data manually. Although somewhat imprecise, the data seemed promising.
After this, I decided to reinstall the OS on this SSD.



Did you erase the ssd before you transferred it to your new B560 computer? If so, did you use the ssd's Secure Erase routine?
No secure erase, but the previously existing data (an unused Linux installation with a copy-on-write file system [btrfs] that likely filled the main partition with data) was removed and the new system cloned over from another SSD using Macrium Reflect. Free space has been trimmed.

For how many days (hours?) after the Windows 10 installation has WAF remained close to 1?
I started logging data almost as soon as the OS was installed and ready for use. You can see the results below. Unfortunately the cumulative WAF after less than a day since the OS was reinstalled has now increased to about 2 due to the WAF spikes issue. Otherwise, it would be about 1.2 on the short term.



[...]
As an experiment, can you set your pc to write much less than 3GB/hour to the ssd, for at least a few days? (If the ssd isn't the only drive in the pc, perhaps you could redirect most of the writing to a different drive.)
Unfortunately it's not possible unless I transfer the system again to a different SSD.

Your lower graph appears to show two bursts of NAND writing. (The taller burst might have been a long burst or multiple bursts. On my ssd, the duration of each burst appears to be a multiple of about 5 seconds, with 5 seconds being the most common duration.) Do you not interpret those two bursts as indications of the buggy behavior?
Yes, it's definitely anomalous. Most user-initiated writes have very low amplification, close to the minimum of 1. Higher values are unrelated internal writes.

In 2019 when I set up a more sophisticated logging and graphing system I noticed that the internal writes were about 1 GiB large. This is a graph I made at the time and posted on the Crucial support forum (which got discontinued soon after):



I hope you will continue to log ssd stats, and that you will post stats accumulated over months. Stats posted as graphs are useful, but I also like text that I can quickly paste into a spreadsheet and rigorously analyze.
I was hoping not to have to log and monitor SSD stats in detail again, but I'll see what I can do. I'll probably need to anyway, since SSD wear is relatively fast when the WAF is exceedingly high.

Can you also log the C5 attribute, to detect when it briefly changes to 1? My experience is that it correlates perfectly with the buggy NAND write bursts... 1 while each burst is in progress. (There was a time when I logged ssd SMART data every second, which established the perfect correlation. I also established that the write bursts occur only during the pauses between selftests, and I now log C5 every second only during the pauses.) If you log C5 at a high rate of logging, that should provide a precise measure of the portion of F8 writing caused by the bug. Given that each burst lasts a multiple of 5 seconds, logging C5 more frequently than once every 5 seconds should allow you to detect each write burst, and calculate their durations and number of NAND pages written.
I am logging all SMART attributes, but so far only at 1 minute intervals. This can be changed, of course.

This is a sample of the logged data. It's just the output of smartctl turned into a new CSV line every minute:

Code:
2021-07-27T18:52:04.889350,0,0,11273,162,0,0,393,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59130389624,1029675508,3989336267,
2021-07-27T18:53:04.943823,0,0,11273,162,0,0,393,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59130401376,1029675756,3989336349,
2021-07-27T18:54:04.993306,0,0,11273,162,0,0,393,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59130405776,1029675863,3989336421,
2021-07-27T18:55:05.036226,0,0,11273,162,0,0,393,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59130413240,1029676031,3989336498,
2021-07-27T18:56:05.076777,0,0,11273,162,0,0,393,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59130418344,1029676149,3989336547,
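For what it's worth, a minimal sketch of how the two attributes of interest could be pulled out of `smartctl -A` output by a script. This assumes the standard 10-column attribute table, with names patterned on MX500 output; field positions can differ between smartctl versions:

```python
# Hypothetical parser for the SMART attribute table printed by `smartctl -A`.
# C5 (197) = Current Pending ECC Count, F8 (248) = FTL Program Page Count.
def parse_attrs(smartctl_text, wanted=(0xC5, 0xF8)):
    """Return {attribute_id: raw_value} for the requested attribute IDs."""
    out = {}
    for line in smartctl_text.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) in wanted:
            out[int(fields[0])] = int(fields[9])  # RAW_VALUE, 10th column
    return out

# Canned sample rows (raw values taken from the log excerpts in this thread):
sample = """\
197 Current_Pending_ECC_Cnt 0x0032 100 100 000 Old_age Always - 1
248 FTL_Program_Page_Count  0x0032 100 100 000 Old_age Always - 3991199873
"""
vals = parse_attrs(sample)
print(vals[0xC5], vals[0xF8])  # 1 3991199873
```

In a polling loop, the same function could simply be fed the stdout of a once-per-second `smartctl -A /dev/sda` invocation.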
Here's a summary of my C5 (write burst) data recorded every second during the pauses between selftests, over the last 24 hours. Most of the pauses have no write bursts. If you compare the pauses before 7/26 15:03 with the pauses after 15:03, you'll see that far fewer of the pauses after 15:03 had write bursts... 9 bursts in 17 pauses before 15:03, and only 4 bursts in 57 pauses after 15:03. I put the pc to sleep for about 10 seconds between 14:24 and 14:44, because the ssd had been on for 15 days without being power-cycled, and I've observed that write bursts tend to be more frequent when the ssd has been on for many days without being power-cycled.
I think I recall seeing a correlation with how infrequently the SSD was power-cycled, but unfortunately I might not be able to keep the PC on for very prolonged periods without interruptions at this time.
 

solidstatebrain

Distinguished
Oct 26, 2012
26
0
18,540
1
@Lucretia19
By the way, I found some data I logged in 2019 and Python code used to graph it.




Below are the events associated with the vertical dashed lines that I took note of.

Code:
        ['2019-03-30 01:36', 'Disabled TRIM (fsutil behavior set DisableDeleteNotify 1) and disable drive optimization on SSD'],
        ['2019-03-30 10:45', 'Pre-conditioned free space to remove non-dirty areas'],
        ['2019-04-02 17:03', 'Deleted big dummy data, but did not reenable TRIM yet'],
        ['2019-04-02 18:43', 'Re-enabled TRIM (fsutil behavior set DisableDeleteNotify 0), drive optimization on SSD, and emptied trash bin'],
        ['2019-04-03 15:16', 'File system defragmentation started'],
        ['2019-04-11 12:27', 'Changed SATA port and controller (ASMedia)'],
        ['2019-04-11 13:00', 'Cycled PC on/off, disabled Intel SATA'],
        ['2019-04-12 03:47', 'Power cycled SSD'],
        ['2019-04-12 12:08', 'Changed SATA port and controller back to Intel native'],
        ['2019-04-19 09:30', 'Power cycled SSD (Hibernation)']
Here is the data if you're interested (link will expire in 30 days): https://ufile.io/afsy417w


EDIT: by the way, every smartctl query "costs" 3 FTL program pages, so querying every second may have a measurable impact on the calculated WAF if host write activity is low.
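A quick back-of-the-envelope estimate of that overhead (the 3 pages per query comes from the observation above; the roughly 29,400-byte page size is an estimate that comes up later in this thread, so treat the result as rough):

```python
# Rough estimate only: FTL write overhead of polling smartctl once per second.
PAGES_PER_QUERY = 3        # observed cost per smartctl invocation
PAGE_BYTES = 29_400        # estimated NAND page size (estimate from this thread)
queries_per_day = 24 * 3600

extra_pages = PAGES_PER_QUERY * queries_per_day
extra_bytes = extra_pages * PAGE_BYTES
print(f"{extra_pages} pages/day, about {extra_bytes / 1e9:.1f} GB/day of FTL writes")
```

At low host write rates, ~7-8 GB/day of logging-induced FTL writes would indeed visibly inflate the measured WAF.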
 

Lucretia19

Prominent
Feb 5, 2020
143
11
595
2
It's not the normal behavior of any other SSD I've owned so far. If the SSD is connected to SATA power, POH should advance regardless of its sleep state.
-snip-
It's well documented that the MX500's POH doesn't count time spent in low power mode, and I don't believe Crucial acknowledges this as a bug. Perhaps the POH behaves that way by design, but I suspect it was a bug they didn't want to acknowledge and fix.

Another MX500 bug is the Total Sectors Read attribute, which rolls back to zero every time it reaches 4G (2^32) sectors, which is about 2 TBytes. Fortunately, Total Sectors Read isn't relevant to ssd lifespan. But if it were of interest, one could track the number of times N that it rolled back to zero, and add N x 2^32 sectors to the value. (One would need to start tracking the rollovers soon after the ssd is put into service.) According to my log, my ssd has recently been taking about 3.5 months to read 2 TB.
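The rollover bookkeeping could look something like this (a sketch, not tied to any particular logging tool; it only assumes successive raw readings are taken often enough that the counter can't wrap twice between samples):

```python
# Reconstruct the true Total Sectors Read from a raw counter that wraps at 2^32.
WRAP = 2**32

class SectorsReadTracker:
    def __init__(self):
        self.wraps = 0
        self.last = None

    def update(self, raw):
        """Feed successive raw readings; returns the corrected running total."""
        if self.last is not None and raw < self.last:
            self.wraps += 1  # the counter rolled over since the last sample
        self.last = raw
        return self.wraps * WRAP + raw

t = SectorsReadTracker()
t.update(4_294_000_000)          # close to the 2^32 wrap point
total = t.update(1_500_000)      # raw value dropped -> one rollover occurred
print(total)                     # 2**32 + 1_500_000
```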

I wrote 'alive' but probably a better term would have been 'awake'. Yes, in practice at some point I set up a low-priority, low-impact constant background read task using the program called Flexible I/O tester, but I don't remember what settings I used at the time.
I think selftests have less impact on pc performance than running a low priority read routine on the pc. Inside the ssd, selftests are lower priority than servicing the host pc's read or write requests, so read and write requests pause a selftest and are serviced immediately; they won't be queued behind read requests issued by a host pc read routine. CrystalDiskMark measured no degradation of read/write performance while a selftest was running. (In fact, CrystalDiskMark showed the ssd running a selftest slightly exceeded Crucial's published speed specs for a brand new MX500. But this speed gain was small, and might not be real.) Also, selftests don't consume sata power or pc power or pc cpu cycles.

Selftests aren't free, though. By not entering low power mode, I think it costs about a watt, and the ssd temperature runs about 5C warmer. And I can't guarantee that the pauses between selftests allow enough runtime to the lower priority ssd routines to keep the ssd healthy. But I've seen no indications of any problems caused by the selftests regime, and the large fraction of pauses with no write bursts suggests there isn't a growing backlog of undone work that the lower priority routines aren't being allowed to complete.

Yes, this could be a valid explanation. It's possible that the SSD is aggressively shuffling the data to avoid performance degradation on static data as seen on the first-gen Samsung 840.
Yes... but I don't think it explains the C5 behavior. I don't see why a wear-leveling algorithm would need to set C5 to 1 to accomplish its task. Still, I agree that wear-leveling is a plausible theory.

There's another rewriting task that the MX500 occasionally needs to do. When its write cache gets full, it writes to NAND inefficiently in fast SLC mode for the sake of speed, and eventually that data needs to be rewritten in TLC mode to efficiently use the NAND storage space. I presume this rewriting algorithm is in a low priority routine, and perhaps a bug in this algorithm causes the write bursts.

However, it's also likely that such performance degradation occurs more quickly than it should if the SSD is sleeping even when it's not supposed to (e.g. with the DEVSLP option explicitly disabled in the SATA controller). That's not even taking into account whether some additional bug makes the problem worse when the SSD frequently cycles through such a sleep state.
You've made me google "DEVSLP" and my understanding of it is only rudimentary. A review of the MX500 says it supports DEVSLP. However, I haven't gotten the impression that power state management might be the cause of any issues, because I see no systematic correlation there. For example, only a portion of the pauses between selftests have FTL write bursts; why not all of the pauses if buggy power management is triggering the bursts?

That data contains a mystery. POH Difference should never exceed Hours Delta, unless it only appears to exceed due to a rounding error. But in the row where POH Difference is 26 hours, Hours Delta is only 24.98 hours, and the 1.02 excess looks like more than a rounding error.

It looks even less like a rounding error if we combine two consecutive rows: POH Difference 17+26 (43 hours) exceeds Hours Delta 16.17+24.98 (41.15 hours) by 1.85 hours.

Was the pc traveling at a significant fraction of the speed of light?

No secure erase, but the previously existing data (an unused Linux installation with a copy-on-write file system [btrfs] that likely filled the main partition with data) was removed and the new system cloned over from another SSD using Macrium Reflect. Free space has been trimmed.
How was the previously existing data "removed?"

I started logging data almost as soon as the OS was installed and ready for use. You can see the results below. Unfortunately the cumulative WAF after less than a day since the OS was reinstalled has now increased to about 2 due to the WAF spikes issue. Otherwise, it would be about 1.2 on the short term.
-snip-
The growth of the cumulative WAF to 2 since Windows was reinstalled (about 18 hours ago?) implies the recent short-term WAF (the last 3-ish hours) has grown to more than 2.

Please let us know if it grows worse when the pc has been on for days.

In 2019 when I set up a more sophisticated logging and graphing system I noticed that the internal writes were about 1 GiB large.
-snip-
That matches my measurement: Each 5 second burst writes about 1 GB. (A burst that lasts a multiple of 5 seconds writes that multiple of 1 GB, of course.)

It's pretty clear that the ssd is copying data during the bursts, not just writing. 5 seconds is the amount of time it would take an MX500 to read 1 GB and write 1 GB if it's performing at its maximum spec speeds for reading and writing. Copying is another way of saying rewriting, which is what happens during wear leveling, during bad sector remapping, and during SLC-to-TLC conversion.

I think I recall seeing a correlation with how infrequently the SSD was power-cycled, but unfortunately I might not be able to keep the PC on for very prolonged periods without interruptions at this time.
I've written about the apparent correlation with "days since ssd power-cycled" once or twice in this thread. I still haven't yet tried to rigorously analyze my accumulating data to prove the correlation is real and measure how strong it is. Do you recall whether you saw it written about by someone else, and not just here by me?
 

solidstatebrain

It's well documented that the MX500's POH doesn't count time spent in low power mode, and I don't believe Crucial acknowledges this as a bug. Perhaps the POH behaves that way by design, but I suspect it was a bug they didn't want to acknowledge and fix. [...]
Could be. There have also been some oddities with firmware updates for the MX500. Apparently earlier units like the one I have couldn't install the latest one available (M3CR033); for those, M3CR023 is the last one released. I wonder if all these small but annoying issues are also related to these differences.

I think selftests have less impact on pc performance than running a low priority read routine on the pc. Inside the ssd, selftests are lower priority than servicing the host pc's read or write requests, so read and write requests pause a selftest and are serviced immediately; they won't be queued behind read requests that were issued by a host pc read routine. [...]
I haven't specifically tested this. In any case, with the program I mentioned it's possible to set a read bandwidth limit, as well as the block size and so on. I/O and process priority can be set too. See:

https://fio.readthedocs.io/en/latest/fio_doc.html#i-o-rate

Selftests aren't free, though. By not entering low power mode, I think it costs about a watt, and the ssd temperature runs about 5C warmer. [...]
With DevSleep the SSD should consume a few tens of milliwatts (depending on factors like nand capacity, controller etc), but it can be 0.5-1.0 watts or more without power management, so that sounds about right. Anandtech tests this aspect:

https://www.anandtech.com/show/12263/the-crucial-mx500-500gb-review/8

Yes... but I don't think it explains the C5 behavior. I don't see why a wear-leveling algorithm would need to set C5 to 1 to accomplish its task. Still, I agree that wear-leveling is a plausible theory.
A possibility is that the SSD controller is continuously monitoring the read error rate on static data and if it determines it's too high it copies it to a new location, keeping it fresh and fast to read. When this is about to happen, the C5 attribute temporarily increases.

There's another rewriting task that the MX500 occasionally needs to do. When its write cache gets full, it writes to NAND inefficiently in fast SLC mode for the sake of speed, and eventually that data needs to be rewritten in TLC mode to efficiently use the NAND storage space. I presume this rewriting algorithm is in a low priority routine, and perhaps a bug in this algorithm causes the write bursts.
I don't know. I've seen it happening almost continuously over many weeks of logging 24/7 without doing that many writes.

You've made me google "DEVSLP" and my understanding of it is only rudimentary. A review of the MX500 says it supports DEVSLP. However, I haven't gotten the impression that power state management might be the cause of any issues, because I see no systematic correlation there. For example, only a portion of the pauses between selftests have FTL write bursts; why not all of the pauses if buggy power management is triggering the bursts?
Because it might be a kind of poorly reproducible issue that depends on the type of internal drive activity occurring during the pauses. Perhaps the controller occasionally decides to sleep at the "wrong" moment, and when it wakes up again it finds itself in an inconsistent state which it then tries to repair.

It's just a possible hypothesis that I rank lower than the one where this is really just the drive trying to keep the data fresh to avoid a Samsung 840-type fiasco, though.

That data contains a mystery. [...]
As I mentioned, the data was sampled manually: I just occasionally (when I remembered to) executed smartctl and noted the power-on hours along with a timestamp. It wasn't a very accurate test, but it was accurate enough over a couple of days to make me think the attribute was advancing as intended.

How was the previously existing data "removed?"
Macrium Reflect erases the existing partition table on the destination drive, copies partitions from the source drive (only the data, not also the free space) and then executes trim on the destination drive.

The growth of cumulative WAF to 2 since Windows reinstalled (about 18 hours ago?) implies the recent short term WAF (the last 3-ish hours) has grown to more than 2.

Please let us know if it grows worse when the pc has been on for days.
A few hours ago I started logging data at 1 second interval, but I think this shorter interval is negatively affecting the WAF since every time smartctl is invoked, the FTL program page count increases by 3 units. So the WAF since I started this new log file is already about 2.



That matches my measurement: Each 5 second burst writes about 1 GB. (A burst that lasts a multiple of 5 seconds writes that multiple of 1 GB, of course.)

It's pretty clear that the ssd is copying data during the bursts, not just writing. 5 seconds is the amount of time it would take an MX500 to read 1 GB and write 1 GB if it's performing at its maximum spec speeds for reading and writing. Copying is another way of saying rewriting, which is what happens during wear leveling, during bad sector remapping, and during SLC-to-TLC conversion.
I never thought of this. I'll check whether I can spot such events with the faster logging I set up earlier.

I've written about the apparent correlation with "days since ssd power-cycled" once or twice in this thread. I still haven't yet tried to rigorously analyze my accumulating data to prove the correlation is real and measure how strong it is. Do you recall whether you saw it written about by someone else, and not just here by me?
I meant that I probably saw that in my data in 2019. If you check post #160, I posted some there.

I only power cycled the SSD twice during that period, but each event seemed associated with a temporary reduction of the extreme WAF issue.

Code:
        ['2019-04-12 03:47', 'Power cycled SSD'],
        ['2019-04-19 09:30', 'Power cycled SSD (Hibernation)']

EDIT: better yet, here is another graph with power-cycle-count (thick black line) superimposed on the data. It looks like I power cycled it more than I cared to manually note:

 
With DevSleep the SSD should consume a few tens of milliwatts (depending on factors like nand capacity, controller etc), but it can be 0.5-1.0 watts or more without power management, so that sounds about right. Anandtech tests this aspect:

https://www.anandtech.com/show/12263/the-crucial-mx500-500gb-review/8
DevSleep shouldn't depend on any chips other than the switching component. If this feature is implemented properly, then it simply switches off the power to the entire PCB. That is, there is a single load switch or MOSFET sitting on the interface, and its control/gate pin is driven by the DevSleep pin. It should be almost as efficient as a physical on/off switch.

However, as Anandtech says, "our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state". That's because the Power Disable pin (SATA pin #3) is hardwired to the PSU.
 

solidstatebrain

I'm curious...has anyone had, or known anyone that has, their MX500 die from too many write cycles, in semi-normal consumer use.
No, but I guess most people don't leave their computer on 24/7 or otherwise long enough to notice this possible issue. I lost 26% endurance in 2.5 years with just about 27.5 TB written, and over the past year I haven't even used this SSD much. The 500GB model is supposed to have an endurance of 180 TB.

https://www.crucial.com/content/dam/crucial/ssd-products/mx500/flyer/crucial-mx500-ssd-productflyer-letter-en.pdf

DevSleep shouldn't depend on any chips other than the switching component. If this feature is implemented properly, then it simply switches off the power to the entire PCB. That is, there is a single load switch or MOSFET sitting on the interface, and its control/gate pin is driven by the DevSleep pin. It should be almost as efficient as a physical on/off switch.

However, as Anandtech says, "our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state". That's because the Power Disable pin (SATA pin #3) is hardwired to the PSU.
The BIOS/UEFI on my motherboard has a toggle for enabling or disabling DEVSLP under the SATA controller options, similar to the ones in the image below, if that's what you meant. However, I don't know whether it's just a "hint" that the SSD may or may not follow.

 

USAFRet

Titan
Moderator
Mar 16, 2013
146,003
8,998
175,340
22,736
No, but I guess most people do not leave their computer on 24/7 or otherwise long enough to notice this possible issue.
While I don't have an MX500 in this system, I DO leave my system on 24/7.

And have for years.

27.5TB in 2.5 years, with a warranty of 180TBW....
Approx 11TB per year.

You'll run past that warranty number in 2036 at your current rate.
Problem?


I asked the question because we see many, many worries about the TBW number versus actual use.
I've asked that question many times. To date, I've not heard a single report of a drive dying like that.
 

solidstatebrain

[...] 27.5TB in 2.5 years, with a warranty of 180TBW....
Approx 11TB per year.

You'll run past that warranty number in 2036 at your current rate.
Problem?
The problem is 26% wear with 27.5 TB written. This means that 100% wear will be reached at 105.8 TB, slightly above half the rated endurance, and indications were that the write amplification issue gets worse with time.

A 2012 Samsung 840 250GB I still have and use for mid-term storage has 30% wear with 60.9 TB written.
 

USAFRet

The problem is 26% wear with 27.5 TB written. This means that 100% wear will be reached at 105.8 TB, slightly above half the rated endurance, and indications were that the write amplification issue gets worse with time.

A 2012 Samsung 840 250GB I still have and use for mid-term storage has 30% wear with 60.9 TB written.
My current OS drive:
500GB 850 EVO, 42k POH (running basically 24/7), 68TBW, CrystalDiskInfo and Samsung Magician both report 100%.

I'm still waiting for someone, anyone, to show evidence that their drive has actually died from too many write cycles.
 

solidstatebrain

My current OS drive:
500GB 850 EVO, 42k POH (running basically 24/7), 68TBW, CrystalDiskInfo and Samsung Magician both report 100%.
Look for Attribute 177 "Wear Leveling Count". The "Current" normalized value indicates the residual life%. The "Raw" value indicates the average number of block erase operations (program/erase cycles) performed so far.

I'm still waiting for someone, anyone, to show evidence that their drive has actually died from too many write cycles.
Most users replace their SSDs or entire computers before that happens.

To clarify, I'm not really concerned about wearing it out, and I'm not limiting my write activity because of this. For the most part I've only been reporting my experience with the unusually high wear rate of the MX500 compared to every other SSD I've owned, after wrongly thinking that my new PC had solved the issues I'd had with it when I used it more heavily, until about a year ago (so it's really more like 25 TB in 1.5 years).

I do find it somewhat irritating, however, that I probably won't be able to keep reliably using this SSD much past the warranty period if the wear rate continues at this level or gets even worse. It feels like planned obsolescence.
 

solidstatebrain

Can you also log the C5 attribute, to detect when it briefly changes to 1? My experience is that it correlates perfectly with the buggy NAND write bursts... 1 while each burst is in progress. (There was a time when I logged ssd SMART data every second, which established the perfect correlation. I also established that the write bursts occur only during the pauses between selftests, and I now log C5 every second only during the pauses.) If you log C5 at a high rate of logging, that should provide a precise measure of the portion of F8 writing caused by the bug. Given that each burst lasts a multiple of 5 seconds, logging C5 more frequently than once every 5 seconds should allow you to detect each write burst, and calculate their durations and number of NAND pages written.
Luckily (or not?) it didn't take too long.

Here is the data logged so far at 1 sample/s. "Pending ECC counts" can be seen where some of the WAF spikes occur, but not always.



More in detail:



Even more in detail (Event 1):



Even more in detail (Event 2):



In this last image the total program page count (FTL + host) increased by about 74,000 pages during the high-WAF event, or about 2 GB if the page size is ~29,400 bytes (estimated by dividing cumulative host sectors written, converted to bytes, by host_program_page_count).
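That arithmetic can be reproduced from the two CSV rows below; note that the column interpretation (host sectors written, host program pages, FTL program pages) and the 512-byte sector size are my assumptions:

```python
# Page-size and burst-size estimate from the before/after SMART snapshot.
# Column interpretation (host sectors written, host program pages) is assumed.
host_sectors = 59_244_451_902   # cumulative host sectors written
host_pages = 1_031_602_643      # host program page count (F7)
page_bytes = host_sectors * 512 / host_pages
burst_pages = 74_000            # total pages written during the high-WAF event
burst_bytes = burst_pages * page_bytes
print(f"page size ~ {page_bytes:.0f} B, burst ~ {burst_bytes / 1e9:.2f} GB")
```

This comes out to roughly 29,400 bytes per page and about 2.2 GB for the 74,000-page event.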

SMART data before and after that event (CSV format):

Code:
2021-07-28T07:35:35.499413,0,0,11276,162,0,0,394,28,38,1,0,0,46 (Min/Max 0/63),0,1,0,146,26,0,0,59244451902,1031602643,3991339111
2021-07-28T07:37:35.411217,0,0,11276,162,0,0,394,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59244469270,1031603041,3991412757
 

Lucretia19

Luckily (or not?) it didn't take too long.

Here is the data logged so far at 1 sample/s. "Pending ECC counts" can be seen where some of the WAF spikes occur, but not always.
-snip-
When I correlated C5 (Current Pending Sectors) with the FTL write bursts (over a year ago), I paid attention to two attributes: C5 and F8. In particular, F8 increased rapidly while and only while C5=1.

I think large deltaF8 is a simpler and better indicator of when an FTL write burst is occurring than a WAF spike is.
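A minimal sketch of that deltaF8 approach (the threshold is an arbitrary assumption to tune: the idle baseline in the logs here is about 3 pages per second, while bursts add hundreds or more per second):

```python
# Flag FTL write bursts by looking for large jumps in the F8 counter
# between consecutive ~1-second samples. Threshold is a tunable assumption.
def find_bursts(f8_samples, threshold=100):
    """f8_samples: list of (timestamp, f8_value) pairs, in time order.
    Returns (timestamp, delta) for each interval whose delta exceeds threshold."""
    return [(t1, b - a)
            for (t0, a), (t1, b) in zip(f8_samples, f8_samples[1:])
            if b - a > threshold]

# Illustrative values patterned on the 1-second F8 log in this thread:
samples = [(0, 3991199544), (1, 3991199547), (2, 3991199550),
           (3, 3991199873), (4, 3991201238)]
print(find_bursts(samples))  # [(3, 323), (4, 1365)]
```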

In this last image the total program page count (FTL+Host) increased by about 74000 pages during the high WAF event, or about 2GB if the page size is 29400 bytes (calculated by dividing cumulative_host_sectors_written (bytes) by host_program_page_count)
That 74000-ish burst is an example where the multiplier is 2x. My casual (non-rigorous) observations of my data show that the larger the multiplier, the less common are the corresponding bursts, so 74000 (= 2 x 37000) bursts happen less often than 37000, more often than 111000 (= 3 x 37000), etc.

SMART data before and after that event (CSV format):
Code:
2021-07-28T07:35:35.499413,0,0,11276,162,0,0,394,28,38,1,0,0,46 (Min/Max 0/63),0,1,0,146,26,0,0,59244451902,1031602643,3991339111
2021-07-28T07:37:35.411217,0,0,11276,162,0,0,394,28,38,1,0,0,45 (Min/Max 0/63),0,0,0,146,26,0,0,59244469270,1031603041,3991412757
Those two rows are about 2 minutes apart. But the bursts are much shorter duration than that. How about showing (csv) rows containing the values of C5 and F8, at 1 second intervals from before C5 becomes 1 to after C5 returns to 0? When I find time, hopefully in the next day or two, I'll do that too.
 

solidstatebrain

When I correlated C5 (Current Pending Sectors) with the FTL write bursts (over a year ago), I paid attention to two attributes: C5 and F8. In particular, F8 increased rapidly while and only while C5=1.

I think large deltaF8 is a simpler and better indicator of when an FTL write burst is occurring than a WAF spike is.
Delta F8 on its own is indeed sufficient as an indicator of the FTL write bursts, but when I first created the plotting code I was interested in assessing the impact this issue had on write amplification. At the moment these spikes occur only occasionally, but over time without power cycling they can become very frequent.

That 74000-ish burst is an example where the multiplier is 2x. My casual (non-rigorous) observations of my data show that the larger the multiplier, the less common are the corresponding bursts, so 74000 (= 2 x 37000) bursts happen less often than 37000, more often than 111000 (= 3 x 37000), etc.
Yep, that seemed to be my case as well. On the graph from the 2019 data that I posted in comment #162, where I converted program pages to MiB written, it can be seen that 1 GiB spikes were much more frequent than 2 GiB ones, and so on.

Those two rows are about 2 minutes apart. But the bursts are much shorter duration than that. How about showing (csv) rows containing the values of C5 and F8, at 1 second intervals from before C5 becomes 1 to after C5 returns to 0? When I find time, hopefully in the next day or two, I'll do that too.
I didn't want to flood the forum with CSV data, but here is a selection according to such criteria (tab-separated values):

Code:
index	datetime	current_pending_ecc_count	ftl_program_page_count
30354	2021-07-28 05:36:16.713010	0	3991199513
30355	2021-07-28 05:36:17.750698	0	3991199523
30356	2021-07-28 05:36:18.782164	0	3991199526
30357	2021-07-28 05:36:19.819926	0	3991199529
30358	2021-07-28 05:36:20.860853	0	3991199532
30359	2021-07-28 05:36:21.904748	0	3991199535
30360	2021-07-28 05:36:22.942370	0	3991199544
30361	2021-07-28 05:36:23.988706	0	3991199547
30362	2021-07-28 05:36:25.032582	0	3991199550
30363	2021-07-28 05:36:26.064069	1	3991199873
30364	2021-07-28 05:36:27.095279	1	3991199876
30365	2021-07-28 05:36:28.128490	1	3991201238
30366	2021-07-28 05:36:29.166417	1	3991201577
30367	2021-07-28 05:36:30.204974	1	3991202252
30368	2021-07-28 05:36:31.250364	1	3991202591
30369	2021-07-28 05:36:32.296960	1	3991202930
30370	2021-07-28 05:36:33.328545	1	3991205311
30371	2021-07-28 05:36:34.366672	1	3991205650
30372	2021-07-28 05:36:35.413855	1	3991205989
30373	2021-07-28 05:36:36.447333	1	3991205992
30374	2021-07-28 05:36:37.485689	1	3991206331
30375	2021-07-28 05:36:38.519172	1	3991207342
30376	2021-07-28 05:36:39.565802	1	3991207681
30377	2021-07-28 05:36:40.591831	1	3991207684
30378	2021-07-28 05:36:41.629410	1	3991208031
30379	2021-07-28 05:36:42.676042	1	3991209062
30380	2021-07-28 05:36:43.708317	1	3991209737
30381	2021-07-28 05:36:44.750570	1	3991209740
30382	2021-07-28 05:36:45.795200	1	3991210079
30383	2021-07-28 05:36:46.829482	1	3991210754
30384	2021-07-28 05:36:47.871596	1	3991212127
30385	2021-07-28 05:36:48.909218	1	3991212130
30386	2021-07-28 05:36:49.952992	1	3991212469
30387	2021-07-28 05:36:50.984644	1	3991212808
30388	2021-07-28 05:36:52.031274	1	3991213147
30389	2021-07-28 05:36:53.073114	1	3991214510
30390	2021-07-28 05:36:54.111177	1	3991214849
30391	2021-07-28 05:36:55.146532	1	3991215188
30392	2021-07-28 05:36:56.193162	1	3991215527
30393	2021-07-28 05:36:57.235781	1	3991215866
30394	2021-07-28 05:36:58.282143	1	3991216549
30395	2021-07-28 05:36:59.327060	1	3991216888
30396	2021-07-28 05:37:00.362528	1	3991217227
30397	2021-07-28 05:37:01.399356	1	3991217566
30398	2021-07-28 05:37:02.431727	1	3991217569
30399	2021-07-28 05:37:03.470035	1	3991218936
30400	2021-07-28 05:37:04.516009	1	3991219275
30401	2021-07-28 05:37:05.562638	1	3991219614
30402	2021-07-28 05:37:06.594323	1	3991219617
30403	2021-07-28 05:37:07.626293	1	3991219956
30404	2021-07-28 05:37:08.662254	1	3991220975
30405	2021-07-28 05:37:09.706251	1	3991221314
30406	2021-07-28 05:37:10.738636	1	3991221317
30407	2021-07-28 05:37:11.785266	1	3991221656
30408	2021-07-28 05:37:12.822806	1	3991221995
30409	2021-07-28 05:37:13.854164	1	3991223342
30410	2021-07-28 05:37:14.932203	1	3991223345
30411	2021-07-28 05:37:15.971137	1	3991223684
30412	2021-07-28 05:37:17.026225	1	3991224023
30413	2021-07-28 05:37:18.072855	1	3991224706
30414	2021-07-28 05:37:19.119485	1	3991225381
30415	2021-07-28 05:37:20.152139	1	3991225384
30416	2021-07-28 05:37:21.183726	1	3991225723
30417	2021-07-28 05:37:22.215146	1	3991226062
30418	2021-07-28 05:37:23.257323	1	3991228104
30419	2021-07-28 05:37:24.294118	1	3991228787
30420	2021-07-28 05:37:25.340380	1	3991229126
30421	2021-07-28 05:37:26.387010	1	3991229465
30422	2021-07-28 05:37:27.419939	1	3991229804
30423	2021-07-28 05:37:28.465167	1	3991229807
30424	2021-07-28 05:37:29.505070	1	3991230818
30425	2021-07-28 05:37:30.550111	1	3991231157
30426	2021-07-28 05:37:31.596740	1	3991231496
30427	2021-07-28 05:37:32.643370	1	3991231499
30428	2021-07-28 05:37:33.679291	1	3991231838
30429	2021-07-28 05:37:34.725363	1	3991232877
30430	2021-07-28 05:37:35.772008	1	3991233552
30431	2021-07-28 05:37:36.836741	1	3991233891
30432	2021-07-28 05:37:37.872310	1	3991233894
30433	2021-07-28 05:37:38.903964	1	3991234905
30434	2021-07-28 05:37:39.935689	1	3991235244
30435	2021-07-28 05:37:40.982519	1	3991235887
30436	2021-07-28 05:37:42.014910	1	3991235890
30437	2021-07-28 05:37:43.048391	1	3991236213
30438	2021-07-28 05:37:44.081557	0	3991236874
30439	2021-07-28 05:37:45.125627	0	3991236877
30440	2021-07-28 05:37:46.171450	0	3991236880
30441	2021-07-28 05:37:47.215562	0	3991236883
30442	2021-07-28 05:37:48.246773	0	3991236886
30443	2021-07-28 05:37:49.280267	0	3991236896
30444	2021-07-28 05:37:50.326896	0	3991236899
30445	2021-07-28 05:37:51.365150	0	3991236902
30446	2021-07-28 05:37:52.407658	0	3991236905
30447	2021-07-28 05:37:53.448174	0	3991236908
31541	2021-07-28 05:56:50.370501	0	3991241315
31542	2021-07-28 05:56:51.410265	0	3991241318
31543	2021-07-28 05:56:52.453145	0	3991241321
31544	2021-07-28 05:56:53.493034	0	3991241324
31545	2021-07-28 05:56:54.514709	0	3991241327
31546	2021-07-28 05:56:55.546138	0	3991241330
31547	2021-07-28 05:56:56.579619	0	3991241333
31548	2021-07-28 05:56:57.614337	0	3991241336
31549	2021-07-28 05:56:58.650101	0	3991241346
31550	2021-07-28 05:56:59.690154	1	3991241669
31551	2021-07-28 05:57:00.729696	1	3991241672
31552	2021-07-28 05:57:01.767009	1	3991242347
31553	2021-07-28 05:57:02.806559	1	3991243374
31554	2021-07-28 05:57:03.845636	1	3991245393
31555	2021-07-28 05:57:04.920762	1	3991246748
31556	2021-07-28 05:57:05.964489	1	3991247087
31557	2021-07-28 05:57:07.002150	1	3991247426
31558	2021-07-28 05:57:08.048808	1	3991247765
31559	2021-07-28 05:57:09.079997	1	3991248796
31560	2021-07-28 05:57:10.111420	1	3991249815
31561	2021-07-28 05:57:11.158049	1	3991250154
31562	2021-07-28 05:57:12.200141	1	3991250493
31563	2021-07-28 05:57:13.234608	1	3991251504
31564	2021-07-28 05:57:14.265811	1	3991252199
31565	2021-07-28 05:57:15.297412	1	3991252538
31566	2021-07-28 05:57:16.341459	1	3991252877
31567	2021-07-28 05:57:17.379121	1	3991253216
31568	2021-07-28 05:57:18.425750	1	3991253219
31569	2021-07-28 05:57:19.461262	1	3991254238
31570	2021-07-28 05:57:20.494748	1	3991254577
31571	2021-07-28 05:57:21.525961	1	3991254916
31572	2021-07-28 05:57:22.560452	1	3991254919
31573	2021-07-28 05:57:23.636039	1	3991255930
31574	2021-07-28 05:57:24.682506	1	3991257277
31575	2021-07-28 05:57:25.726599	1	3991257616
31576	2021-07-28 05:57:26.765019	1	3991257963
31577	2021-07-28 05:57:27.811678	1	3991257966
31578	2021-07-28 05:57:28.859674	1	3991258305
31579	2021-07-28 05:57:29.902202	1	3991259337
31580	2021-07-28 05:57:30.948508	1	3991259676
31581	2021-07-28 05:57:31.985029	1	3991259679
31582	2021-07-28 05:57:33.023901	1	3991260690
31583	2021-07-28 05:57:34.061686	1	3991262381
31584	2021-07-28 05:57:35.093114	1	3991263392
31585	2021-07-28 05:57:36.139748	1	3991263395
31586	2021-07-28 05:57:37.168710	1	3991263734
31587	2021-07-28 05:57:38.205285	1	3991264073
31588	2021-07-28 05:57:39.243705	1	3991264412
31589	2021-07-28 05:57:40.275122	1	3991265109
31590	2021-07-28 05:57:41.310748	1	3991265448
31591	2021-07-28 05:57:42.357377	1	3991265787
31592	2021-07-28 05:57:43.404006	1	3991266126
31593	2021-07-28 05:57:44.478850	1	3991267145
31594	2021-07-28 05:57:45.514643	1	3991267148
31595	2021-07-28 05:57:46.548131	1	3991268159
31596	2021-07-28 05:57:47.581316	1	3991268498
31597	2021-07-28 05:57:48.609692	1	3991268837
31598	2021-07-28 05:57:49.640925	1	3991269532
31599	2021-07-28 05:57:50.672707	1	3991269871
31600	2021-07-28 05:57:51.700364	1	3991270210
31601	2021-07-28 05:57:52.723575	1	3991270557
31602	2021-07-28 05:57:53.761762	1	3991270896
31603	2021-07-28 05:57:54.792981	1	3991271907
31604	2021-07-28 05:57:55.826211	1	3991272246
31605	2021-07-28 05:57:56.872109	1	3991272585
31606	2021-07-28 05:57:57.911027	1	3991272588
31607	2021-07-28 05:57:58.951974	1	3991272927
31608	2021-07-28 05:57:59.988256	1	3991273957
31609	2021-07-28 05:58:01.021468	1	3991274296
31610	2021-07-28 05:58:02.060646	1	3991274299
31611	2021-07-28 05:58:03.094476	1	3991276326
31612	2021-07-28 05:58:04.137203	1	3991276665
31613	2021-07-28 05:58:05.177376	1	3991277964
31614	2021-07-28 05:58:06.224006	1	3991277967
31615	2021-07-28 05:58:07.264482	1	3991278290
31616	2021-07-28 05:58:08.295694	1	3991278616
31617	2021-07-28 05:58:09.333625	0	3991278634
31618	2021-07-28 05:58:10.364791	0	3991278643
31619	2021-07-28 05:58:11.411420	0	3991278646
31620	2021-07-28 05:58:12.452916	0	3991278649
31621	2021-07-28 05:58:13.484134	0	3991278652
31622	2021-07-28 05:58:14.520392	0	3991278655
31623	2021-07-28 05:58:15.552705	0	3991278658
31624	2021-07-28 05:58:16.592483	0	3991278661
31625	2021-07-28 05:58:17.625091	0	3991278664
31626	2021-07-28 05:58:18.658313	0	3991278667
33991	2021-07-28 06:39:16.884787	0	3991288197
33992	2021-07-28 06:39:17.933812	0	3991288200
33993	2021-07-28 06:39:18.977504	0	3991288203
33994	2021-07-28 06:39:20.025098	0	3991288213
33995	2021-07-28 06:39:21.061368	0	3991288216
33996	2021-07-28 06:39:22.101784	0	3991288219
33997	2021-07-28 06:39:23.148638	0	3991288222
33998	2021-07-28 06:39:24.194886	0	3991288225
33999	2021-07-28 06:39:25.244909	0	3991288234
34000	2021-07-28 06:39:26.282524	1	3991288557
34001	2021-07-28 06:39:27.322737	1	3991288896
34002	2021-07-28 06:39:28.404788	1	3991289235
34003	2021-07-28 06:39:29.447788	1	3991289238
34004	2021-07-28 06:39:30.490734	1	3991290275
34005	2021-07-28 06:39:31.526642	1	3991290614
34006	2021-07-28 06:39:32.565999	1	3991291625
34007	2021-07-28 06:39:33.611635	1	3991291628
34008	2021-07-28 06:39:34.649848	1	3991291967
34009	2021-07-28 06:39:35.695728	1	3991292986
34010	2021-07-28 06:39:36.741377	1	3991293661
34011	2021-07-28 06:39:37.777685	1	3991293664
34012	2021-07-28 06:39:38.825008	1	3991294003
34013	2021-07-28 06:39:39.874980	1	3991295014
34014	2021-07-28 06:39:40.918987	1	3991295353
34015	2021-07-28 06:39:41.964097	1	3991295356
34016	2021-07-28 06:39:43.004277	1	3991296031
34017	2021-07-28 06:39:44.049841	1	3991296370
34018	2021-07-28 06:39:45.094998	1	3991297389
34019	2021-07-28 06:39:46.140309	1	3991297728
34020	2021-07-28 06:39:47.190321	1	3991297731
34021	2021-07-28 06:39:48.235440	1	3991298070
34022	2021-07-28 06:39:49.279900	1	3991298745
34023	2021-07-28 06:39:50.326855	1	3991299781
34024	2021-07-28 06:39:51.377885	1	3991299784
34025	2021-07-28 06:39:52.421447	1	3991300803
34026	2021-07-28 06:39:53.452562	1	3991301478
34027	2021-07-28 06:39:54.501405	1	3991302153
34028	2021-07-28 06:39:55.532634	1	3991303164
34029	2021-07-28 06:39:56.579264	1	3991303503
34030	2021-07-28 06:39:57.613097	1	3991303842
34031	2021-07-28 06:39:58.653848	1	3991304181
34032	2021-07-28 06:39:59.700388	1	3991304184
34033	2021-07-28 06:40:00.746835	1	3991305224
34034	2021-07-28 06:40:01.778090	1	3991306907
34035	2021-07-28 06:40:02.824720	1	3991307582
34036	2021-07-28 06:40:04.050717	1	3991309609
34037	2021-07-28 06:40:05.096720	1	3991311292
34038	2021-07-28 06:40:06.125798	1	3991311631
34039	2021-07-28 06:40:07.171653	1	3991312306
34040	2021-07-28 06:40:08.218282	1	3991312645
34041	2021-07-28 06:40:09.250901	1	3991312648
34042	2021-07-28 06:40:10.295277	1	3991313692
34043	2021-07-28 06:40:11.326633	1	3991314031
34044	2021-07-28 06:40:12.370855	1	3991314706
34045	2021-07-28 06:40:13.402538	1	3991314709
34046	2021-07-28 06:40:14.433996	1	3991315048
34047	2021-07-28 06:40:15.468515	1	3991315740
34048	2021-07-28 06:40:16.505853	1	3991316415
34049	2021-07-28 06:40:17.566108	1	3991316754
34050	2021-07-28 06:40:18.610406	1	3991316757
34051	2021-07-28 06:40:19.657035	1	3991317440
34052	2021-07-28 06:40:20.688150	1	3991319142
34053	2021-07-28 06:40:21.725775	1	3991320489
34054	2021-07-28 06:40:22.756992	1	3991321164
34055	2021-07-28 06:40:23.803622	1	3991321511
34056	2021-07-28 06:40:24.839597	1	3991322042
34057	2021-07-28 06:40:25.876043	1	3991323213
34058	2021-07-28 06:40:26.912224	1	3991323216
34059	2021-07-28 06:40:27.954360	1	3991323555
34060	2021-07-28 06:40:28.990317	1	3991323894
34061	2021-07-28 06:40:30.035124	1	3991324233
34062	2021-07-28 06:40:31.067486	1	3991324911
34063	2021-07-28 06:40:32.098871	1	3991325234
34064	2021-07-28 06:40:33.123105	0	3991325539
34065	2021-07-28 06:40:34.154604	0	3991325542
34066	2021-07-28 06:40:35.201233	0	3991325545
34067	2021-07-28 06:40:36.235066	0	3991325548
34068	2021-07-28 06:40:37.297482	0	3991325551
34069	2021-07-28 06:40:38.335373	0	3991325561
34070	2021-07-28 06:40:39.371354	0	3991325564
34071	2021-07-28 06:40:40.403040	0	3991325574
34072	2021-07-28 06:40:41.434694	0	3991325577
34073	2021-07-28 06:40:42.469395	0	3991325580
37233	2021-07-28 07:35:26.146739	0	3991338092
37234	2021-07-28 07:35:27.189258	0	3991338095
37235	2021-07-28 07:35:28.235888	0	3991338098
37236	2021-07-28 07:35:29.267303	0	3991338101
37237	2021-07-28 07:35:30.303020	0	3991338104
37238	2021-07-28 07:35:31.343124	0	3991338107
37239	2021-07-28 07:35:32.389791	0	3991338110
37240	2021-07-28 07:35:33.430906	0	3991338113
37241	2021-07-28 07:35:34.462121	0	3991338116
37242	2021-07-28 07:35:35.499413	1	3991339111
37243	2021-07-28 07:35:36.544054	1	3991339450
37244	2021-07-28 07:35:37.577903	1	3991339789
37245	2021-07-28 07:35:38.609318	1	3991340145
37246	2021-07-28 07:35:39.649913	1	3991340484
37247	2021-07-28 07:35:40.690109	1	3991341495
37248	2021-07-28 07:35:41.721469	1	3991341834
37249	2021-07-28 07:35:42.768070	1	3991341837
37250	2021-07-28 07:35:43.799284	1	3991342176
37251	2021-07-28 07:35:45.012729	1	3991346563
37252	2021-07-28 07:35:46.046332	1	3991348943
37253	2021-07-28 07:35:47.092962	1	3991349282
37254	2021-07-28 07:35:48.129750	1	3991349285
37255	2021-07-28 07:35:49.165713	1	3991350296
37256	2021-07-28 07:35:50.198356	1	3991350643
37257	2021-07-28 07:35:51.230232	1	3991351674
37258	2021-07-28 07:35:52.274219	1	3991351677
37259	2021-07-28 07:35:53.320848	1	3991352016
37260	2021-07-28 07:35:54.354378	1	3991352355
37261	2021-07-28 07:35:55.394048	1	3991352694
37262	2021-07-28 07:35:56.425279	1	3991354394
37263	2021-07-28 07:35:57.456662	1	3991354741
37264	2021-07-28 07:35:58.495404	1	3991355080
37265	2021-07-28 07:35:59.529553	1	3991355419
37266	2021-07-28 07:36:00.565603	1	3991356094
37267	2021-07-28 07:36:01.599814	1	3991356433
37268	2021-07-28 07:36:02.644057	1	3991357108
37269	2021-07-28 07:36:03.678316	1	3991357447
37270	2021-07-28 07:36:04.717730	1	3991358122
37271	2021-07-28 07:36:05.748944	1	3991359158
37272	2021-07-28 07:36:06.795575	1	3991359497
37273	2021-07-28 07:36:07.842174	1	3991360172
37274	2021-07-28 07:36:08.888340	1	3991361519
37275	2021-07-28 07:36:09.921423	1	3991361522
37276	2021-07-28 07:36:10.952843	1	3991362552
37277	2021-07-28 07:36:11.988005	1	3991362891
37278	2021-07-28 07:36:13.021489	1	3991363238
37279	2021-07-28 07:36:14.060609	1	3991363241
37280	2021-07-28 07:36:15.094170	1	3991363580
37281	2021-07-28 07:36:16.140799	1	3991364610
37282	2021-07-28 07:36:17.172010	1	3991365285
37283	2021-07-28 07:36:18.203290	1	3991365288
37284	2021-07-28 07:36:19.249920	1	3991366635
37285	2021-07-28 07:36:20.281657	1	3991366974
37286	2021-07-28 07:36:21.313037	1	3991367993
37287	2021-07-28 07:36:22.345215	1	3991369340
37288	2021-07-28 07:36:23.377410	1	3991370015
37289	2021-07-28 07:36:24.424040	1	3991370354
37290	2021-07-28 07:36:25.459248	1	3991370693
37291	2021-07-28 07:36:26.523199	1	3991371738
37292	2021-07-28 07:36:27.556900	1	3991371741
37293	2021-07-28 07:36:28.599885	1	3991372080
37294	2021-07-28 07:36:29.637234	1	3991372755
37295	2021-07-28 07:36:30.662106	1	3991373094
37296	2021-07-28 07:36:31.700131	1	3991374012
37297	2021-07-28 07:36:32.733831	1	3991374351
37298	2021-07-28 07:36:33.769713	1	3991376422
37299	2021-07-28 07:36:34.801542	1	3991377112
37300	2021-07-28 07:36:35.845827	1	3991377115
37301	2021-07-28 07:36:36.887141	1	3991378126
37302	2021-07-28 07:36:37.929022	1	3991378465
37303	2021-07-28 07:36:38.974129	1	3991379140
37304	2021-07-28 07:36:40.016014	1	3991379143
37305	2021-07-28 07:36:41.052306	1	3991380192
37306	2021-07-28 07:36:42.091795	1	3991380531
37307	2021-07-28 07:36:43.123277	1	3991380870
37308	2021-07-28 07:36:44.154779	1	3991380873
37309	2021-07-28 07:36:45.189292	1	3991381212
37310	2021-07-28 07:36:46.228950	1	3991382240
37311	2021-07-28 07:36:47.274595	1	3991383251
37312	2021-07-28 07:36:48.307137	1	3991383590
37313	2021-07-28 07:36:49.353737	1	3991384609
37314	2021-07-28 07:36:50.400535	1	3991384948
37315	2021-07-28 07:36:51.446433	1	3991385979
37316	2021-07-28 07:36:52.483388	1	3991386318
37317	2021-07-28 07:36:53.519263	1	3991386321
37318	2021-07-28 07:36:54.565893	1	3991386660
37319	2021-07-28 07:36:55.599405	1	3991386999
37320	2021-07-28 07:36:56.840982	1	3991389718
37321	2021-07-28 07:36:58.013753	1	3991394433
37322	2021-07-28 07:36:59.060120	1	3991394436
37323	2021-07-28 07:37:00.091538	1	3991394775
37324	2021-07-28 07:37:01.158281	1	3991395786
37325	2021-07-28 07:37:02.195731	1	3991396133
37326	2021-07-28 07:37:03.231844	1	3991396136
37327	2021-07-28 07:37:04.273168	1	3991396475
37328	2021-07-28 07:37:05.319790	1	3991396814
37329	2021-07-28 07:37:06.361506	1	3991397855
37330	2021-07-28 07:37:07.397475	1	3991397858
37331	2021-07-28 07:37:08.444139	1	3991398533
37332	2021-07-28 07:37:09.478353	1	3991399208
37333	2021-07-28 07:37:10.520689	1	3991399547
37334	2021-07-28 07:37:11.569339	1	3991400566
37335	2021-07-28 07:37:12.615970	1	3991400569
37336	2021-07-28 07:37:13.649868	1	3991400908
37337	2021-07-28 07:37:14.683612	1	3991401247
37338	2021-07-28 07:37:15.714830	1	3991401586
37339	2021-07-28 07:37:16.746041	1	3991402283
37340	2021-07-28 07:37:17.792670	1	3991403966
37341	2021-07-28 07:37:18.823888	1	3991404305
37342	2021-07-28 07:37:19.855297	1	3991405436
37343	2021-07-28 07:37:20.878570	1	3991405439
37344	2021-07-28 07:37:21.919224	1	3991406738
37345	2021-07-28 07:37:22.965853	1	3991407077
37346	2021-07-28 07:37:24.002236	1	3991407752
37347	2021-07-28 07:37:25.043896	1	3991407755
37348	2021-07-28 07:37:26.087072	1	3991408094
37349	2021-07-28 07:37:27.125102	1	3991409292
37350	2021-07-28 07:37:28.156512	1	3991409631
37351	2021-07-28 07:37:29.190727	1	3991409634
37352	2021-07-28 07:37:30.226674	1	3991409973
37353	2021-07-28 07:37:31.265525	1	3991410312
37354	2021-07-28 07:37:32.309732	1	3991411323
37355	2021-07-28 07:37:33.341403	0	3991412751
37356	2021-07-28 07:37:34.374976	0	3991412754
37357	2021-07-28 07:37:35.411217	0	3991412757
37358	2021-07-28 07:37:36.444055	0	3991412760
37359	2021-07-28 07:37:37.475575	0	3991412763
37360	2021-07-28 07:37:38.522205	0	3991412766
37361	2021-07-28 07:37:39.567552	0	3991412769
37362	2021-07-28 07:37:40.614191	0	3991412772
37363	2021-07-28 07:37:41.648524	0	3991412782
37364	2021-07-28 07:37:42.682016	0	3991412785
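The dump above can be summarized programmatically. Below is a minimal Python sketch that contrasts how fast the raw counter grows while the third column is 1 versus 0. Note the column semantics are my inference from the data (sample index, timestamp, an activity/burst flag, and what is likely the F8 attribute's raw value, i.e. NAND pages written) — they are not documented in the log itself.

```python
# Hedged sketch: parse tab-separated samples of
# (index, timestamp, flag, counter) taken roughly once per second,
# and compare the average per-sample counter delta for flag=1 vs flag=0.
# Column meanings are inferred from the log, not documented.

def burst_vs_idle_rates(lines):
    """Return (avg delta while flag == 1, avg delta while flag == 0)."""
    deltas = {0: [], 1: []}
    prev = None
    for line in lines:
        _idx, _ts, flag, counter = line.split("\t")
        value = int(counter)
        if prev is not None:
            # Attribute each delta to the flag of the current sample.
            deltas[int(flag)].append(value - prev)
        prev = value
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(deltas[1]), avg(deltas[0])

# Ten consecutive samples copied verbatim from the log (05:56:54-05:57:03).
SAMPLE = """\
31545\t2021-07-28 05:56:54.514709\t0\t3991241327
31546\t2021-07-28 05:56:55.546138\t0\t3991241330
31547\t2021-07-28 05:56:56.579619\t0\t3991241333
31548\t2021-07-28 05:56:57.614337\t0\t3991241336
31549\t2021-07-28 05:56:58.650101\t0\t3991241346
31550\t2021-07-28 05:56:59.690154\t1\t3991241669
31551\t2021-07-28 05:57:00.729696\t1\t3991241672
31552\t2021-07-28 05:57:01.767009\t1\t3991242347
31553\t2021-07-28 05:57:02.806559\t1\t3991243374
31554\t2021-07-28 05:57:03.845636\t1\t3991245393
""".splitlines()

burst, idle = burst_vs_idle_rates(SAMPLE)
print(burst, idle)
# -> 809.4 4.75
```

For this excerpt the counter grows by roughly 800 raw units per second while the flag is 1, versus about 3 to 5 per second while it is 0 — the same burst-vs-quiet pattern visible throughout the full log, which is what makes the background write amplification so easy to spot.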
 