Question: Crucial MX500 500GB SATA SSD - Remaining Life decreasing fast despite only a few bytes being written to it?


Lucretia19
Feb 5, 2020
The Remaining Life (RL) of my Crucial MX500 ssd has been decreasing rapidly, even though the pc doesn't write much to it. Below is the log I began keeping after I noticed RL reached 95% after about 6 months of use.

Assuming RL truly depends on bytes written, the decrease in RL is accelerating and something is very wrong. The latest decrease in RL, from 94% to 93%, occurred after writing only 138 GB in 20 days.

Note 1: After RL reached 95%, I took some steps to reduce "unnecessary" writes to the ssd by moving some frequently written files to a hard drive, for example the Firefox profile folder. That's why only 528 GB have been written to the ssd since Dec 23rd, even though the pc is set to Never Sleep and is always powered on.
Note 2: After the pc and ssd were about 2 months old, around September, I changed the pc's power profile so it would Never Sleep.
Note 3: The ssd still has a lot of free space; only 111 GB of its 500 GB capacity is occupied.
Note 4: Three different software utilities agree on the numbers: Crucial's Storage Executive, HWiNFO64, and CrystalDiskInfo.
Note 5: Storage Executive also shows that Total Bytes Written isn't much greater than Total Host Writes, implying write amplification hasn't been a significant factor.

My understanding is that Remaining Life is supposed to depend on bytes written, but it looks more like the drive reports a value that depends mainly on its powered-on hours. Can someone explain what's happening? Am I misinterpreting the meaning of Remaining Life? Isn't it essentially a synonym for endurance?


Crucial MX500 500GB SSD in desktop pc since summer 2019

Date          Remaining Life    Total Host Writes (GB)    Host Writes (GB) Since Previous Drop
12/23/2019    95%               5,782                     -
01/15/2020    94%               6,172                     390
02/04/2020    93%               6,310                     138
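
For reference, here's the arithmetic behind my expectation, as a rough sketch (assuming the 500GB MX500's rated endurance of 180 TBW, i.e. 180,000 GB):

```python
# Rough sanity check: what Remaining Life "should" be if it tracked host writes.
# Assumes the 500GB MX500's rated endurance of 180 TBW (180,000 GB).
RATED_ENDURANCE_GB = 180_000

host_writes_gb = 6_310  # Total Host Writes as of 02/04/2020
pct_used = 100 * host_writes_gb / RATED_ENDURANCE_GB
print(f"endurance used: {pct_used:.1f}%")        # ~3.5%
print(f"expected RL:    {100 - pct_used:.1f}%")  # ~96.5%, yet the drive reports 93%
```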
 
It must be frustrating that, despite all your investigative work, Crucial appears not to have done anything about the problem.

It's unclear whether Crucial did nothing. A message posted here (in 2022?) says MX500s newer than mine don't have the bug. Crucial revised the hardware and firmware.

The newer firmware can't be installed on the older MX500, which has an older controller chip. The last time I looked, Crucial hadn't provided ANY firmware updates for my MX500, during a period of time in which they provided several updates for newer models.

Some other recent messages say the quality of recent Crucial SSDs has gone down... a switch to QLC NAND, and more reports of failed drives.
 

Hi Lucretia... may I know what firmware is on your MX500 (500GB)? I just got a new MX500 (500GB) with M3CR045 firmware, though I read it's problematic and I should update to M3CR046...

As for my new MX500, after 6 hours (unused) the FTL Program page count is 2866 while the Host Program page count is just 2!! 2+ hours later, the FTL Program page count went to 3013 while the Host Program page count is only 6! Total Writes is just 20.48 KB... so does this mean the memory cells will wear out much faster than normal??
 
The firmware on my 3.5-year-old MX500 is version M3CR023. The last time I checked, Crucial hadn't offered any firmware updates for these older MX500 drives.

I don't know what "normal" FTL page writing is given such an abnormally low rate of writing by your host pc. I advise waiting until you've actually used the drive before judging it. If your usage involves frequent heavy writing by the host pc, then any excessive writing by the FTL controller might be small enough to neglect, or even too small to notice, compared to the FTL controller's non-excessive writing.

My MX500 didn't write excessively for its first couple of months of usage. After that, the excessive writing by the FTL controller appeared to grow worse over time. You'll be able to observe the trends if you keep a log of the relevant SMART data.
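
If you want to automate that log, here's a minimal sketch of the idea in Python (an assumption-laden sketch, not the exact script I use: it assumes smartmontools' smartctl is on the PATH, the ssd is /dev/sda, and that attributes 247 and 248 report plain numeric raw values):

```python
# Minimal SMART logger sketch. Appends a timestamped CSV line with the MX500's
# Host Program Page Count (attr 247) and FTL Program Page Count (attr 248).
import re, subprocess, time

def read_attrs(dev="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        # Matches lines like: "247 Host_Program_Page_Count ... 249022307"
        m = re.match(r"\s*(\d+)\s+\S+.*\s(\d+)\s*$", line)
        if m:
            attrs[int(m.group(1))] = int(m.group(2))
    return attrs

with open("mx500_log.csv", "a") as log:
    a = read_attrs()
    log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')},"
              f"{a.get(247)},{a.get(248)}\n")
```

Run it on a schedule (Task Scheduler or cron) and the trend in the 248/247 ratio becomes obvious over a few weeks.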
 
My old MX500 (500GB) from 2019 is also on M3CR023... no problems with that...

For my new MX500: I just used HD Tune to scan it... I wonder if that made the FTL count high... but later on, with the SSD just connected via USB, unused, the FTL number still goes up several times an hour...
 
Regarding your old MX500... For a couple of reasons, you might not have noticed the problem. (1) The host may have written to it at a high enough rate that excess FTL writing was relatively small compared to non-excess FTL writing, so not easily noticed even if you were looking for it. Or (2) the pc may have been off (or in deep sleep) most of the time, like my 5-year-old laptop computer, which has a 250GB MX500 that still has most of its Remaining Life (so I never bothered to log its SMART data or run the SSD selftests regime on it). Could you provide the relevant SMART data for your old MX500 -- power on hours, power cycle count, remaining life, average block erase count, total host writes, host pages written, and FTL pages written -- and estimate the percentage of time the pc was powered on?

Regarding your new, nearly unused MX500... You say the FTL number goes up several times per hour while the SSD is connected via USB "unused." (You didn't say host pages written actually remained constant, but I'll assume it did.) The FTL controller normally erases some blocks and moves some data during the hours AFTER host writing, to try to optimize future write performance, free up some space, and perhaps balance the total erases of each block of NAND. So the FTL writing you observed may be normal. Did the FTL pages written increase by a large amount several times per hour, or by a small amount several times per hour? We can't infer much unless you provide more precise data.
 

Many thanks for the detailed answer... so will the FTL writing wear out the memory cells? If it won't, then I won't be worried about it...

My old MX500 (500GB): I use it now to store my data, but it was used as a boot drive for almost a year... the Host program page count is higher than the FTL page count... 2 other MX500s (250GB) are likewise...

As for the new MX500 (500GB): I'll clone it in the next few days, then use it as a boot drive for a couple of days and see how things develop...
 

Unless you provide the data I described -- SMART values plus an estimate of the amount of time the pc was powered on -- I can't analyze the behaviors of your SSDs. For instance, it's insufficient to say only that Host pages written is higher than FTL pages written on your old SSD, because both could be large (relative to the amount of time the pc was powered on), in which case excessive FTL writing might not be noticeable. And as I wrote before, it's too soon to analyze your new SSD.
 
OK... I intend to run the new SSD as a boot drive later today, then update the firmware...

Then I'll provide the figures... Another guy on another forum says nothing is wrong... maybe I'm over-thinking yet again, because the M3CR045 firmware has a bad reputation...
 

Hi Lucretia... I ran my "problematic" SSD for a couple of days as the boot drive... couldn't update the firmware despite multiple attempts... but the SSD worked fine... below are some numbers...

The "problematic" drive (500GB)
==================
health status = 100%
average block erase count = 1
power on hours = 71
total host writes = 268 GB (62 GB written after cloning)
cumulative host sectors written = 562,399,934
host program page count = 5,102,516
FTL program page count = 2,115,114

The usual boot drive (250GB)
=====================
health status = 85%
average block erase count = 195
power on hours = 5003
total host writes = 10,461 GB
cumulative host sectors written = 21,938,654,169
host program page count = 249,022,307
FTL program page count = 179,844,313
 
I assume "Health Status" means Remaining Life.

You haven't provided the estimate of the amount of time that the pc was powered on. It can't be deduced from the MX500's "Power On Hours" SMART attribute because the MX500 often enters a power-saving mode where it's still online but neglects to count hours (at least with older MX500 firmware, and assuming it's not busy running a selftest, since selftesting prevents the ssd from entering the power-saving mode). The "Power On Hours" of your 250GB MX500 is 5003 (approximately 208 days), which I assume is much less than the amount of time the pc was powered on, unless you left the pc off most of the time. So I can't yet estimate the rate at which the host pc wrote to the ssd while the pc was on, and as I wrote in a previous post, a high rate of writing by the host pc to the ssd while the pc is on would make it hard to detect excessive writing by the ssd's FTL controller.

The 250GB MX500 has an endurance spec of 100 TBytes. The total host writes to yours is about 10 TB, which is 10% of the rated endurance, which ought to correspond to 90% Remaining Life. Its Remaining Life is 85%, which is less than 90% but not a lot less. Its write amplification (1 plus the ratio of FTL pages written to host pages written) is very low (as you mentioned in a previous post) which is excellent.

My pc has written about 13 TB to my 500GB MX500 over a period of about 1340 days (approximately 3.75 years since I began using the ssd on 7/28/2019) during which my pc was powered on nearly all the time. To compare my 13 TB to your 10 TB, we would need to consider the amount of time that your pc was powered on. For example, if your pc was off most of the time, that might explain the excellent write amplification even if your 250GB ssd has the FTL excessive writing bug, because the bug can't do any writing while the pc is off.

It's still too soon to evaluate your new-ish 500GB ssd. But its SMART data doesn't reveal any problems yet.
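
For reference, here's the write amplification formula from above applied to both drives' page counters (a quick sketch):

```python
# Write amplification from the SMART page counters posted above:
# WA = 1 + (FTL program pages / host program pages)
def waf(host_pages, ftl_pages):
    return 1 + ftl_pages / host_pages

print(round(waf(5_102_516, 2_115_114), 2))      # the "problematic" 500GB: ~1.41
print(round(waf(249_022_307, 179_844_313), 2))  # the 250GB boot drive: ~1.72
```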
 
It can't be deduced from the MX500's "Power On Hours" SMART attribute because the MX500 often enters a power-saving mode where it's still online but neglects to count hours (at least with older MX500 firmware, and assuming it's not busy running a selftest, since selftesting prevents the ssd from entering the power-saving mode).
Thank you.
I did not know the MX did this. I've been scratching my head over why my 1TB is only showing 725 hours, when a companion 860 EVO is showing 10,329 hours (430 days), and they were both installed at about the same time, late 2021.
 
@Lucretia19
Thanks, I had a similar issue.
I bought a Crucial MX500 1TB around June 2019 (4 years ago now) and recently found the remaining lifetime is only 53%; unfortunately it's too late. I use Crucial Storage Executive and it always says the drive is healthy. Even checking S.M.A.R.T., it's hard to notice Percentage Lifetime Remaining. When I switched to CrystalDiskInfo, the issue became noticeable.
Here is my data:
  1. Total Host Writes: 13,070 GB (~13 TB)
  2. 247 Host Program Page Count: 504,711,449
  3. 248 FTL Program Page Count: 24,071,679,793
So WAF is around 48.7.
Now I'm trying to implement the .bat file and monitor future RL.
Thanks
 
I assume "Health Status" means Remaining Life.

You haven't provided the estimate of the amount of time that the pc was powered on. It can't be deduced from the MX500's "Power On Hours" SMART attribute because the MX500 often enters a power-saving mode where it's still online but neglects to count hours (at least with older MX500 firmware, and assuming it's not busy running a selftest, since selftesting prevents the ssd from entering the power-saving mode). The "Power On Hours" of your 250GB MX500 is 5003 (approximately 208 days) which I assume is much less than the amount of time the pc was powered on unless you left the pc off most of the time. So I can't yet estimate the rate at which the host pc wrote to the ssd while the pc was on, and as I wrote in a previous post, a high rate of writing by the host pc to the ssd while the pc is on would make it hard to detect excessive writing by the ssd's FTL controller.

The 250GB MX500 has an endurance spec of 100 TBytes. The total host writes to yours is about 10 TB, which is 10% of the rated endurance, which ought to correspond to 90% Remaining Life. Its Remaining Life is 85%, which is less than 90% but not a lot less. Its write amplification (1 plus the ratio of FTL pages written to host pages written) is very low (as you mentioned in a previous post) which is excellent.

My pc has written about 13 TB to my 500GB MX500 over a period of about 1340 days (approximately 3.75 years since I began using the ssd on 7/28/2019) during which my pc was powered on nearly all the time. To compare my 13 TB to your 10 TB, we would need to consider the amount of time that your pc was powered on. For example, if your pc was off most of the time, that might explain the excellent write amplification even if your 250GB ssd has the FTL excessive writing bug, because the bug can't do any writing while the pc is off.

It;s still too soon to evaluate your new-ish 500GB ssd. But its SMART data doesn't reveal any problems yet.


Many thanks for all the info... perhaps there's nothing wrong after all...

On another forum, a guy ran his MX500 (250GB) into the ground to test its write endurance... CrystalDiskInfo indicated it wrote over 1,100 TB before it ran out of reserve memory blocks...
 
For several reasons, I think that's not a realistic test of ssd endurance.

In order to keep writing data, that test erases what it's already written, which means the FTL controller doesn't need to do any wear-leveling or moving of partially written blocks during the test. Also, host writing is a higher priority ssd task than FTL controller maintenance routines, so the test presumably prevented the maintenance routines from getting any runtime. (Just like continual selftesting would starve the maintenance routines. This is why I set the selftests to pause for 30 seconds once every 20 minutes... to provide some runtime for maintenance.) What uses up ssd life is not just the NAND pages written by the host pc; it's the sum of the pages written by the host pc plus the pages written by the FTL controller. In typical operations, only a fraction of the 1,100 TB would be written by the host pc and the rest would be "maintenance overhead" (plus buggy excess writing) written by the FTL controller. So the true endurance in typical operations would be a fraction of the 1,100 TB. Did the guy in the other forum provide enough info to deduce the write amplification during his test? I presume his write amplification was abnormally low compared to typical usage, because the FTL controller can't do much writing while the host pc is continually writing.

Another question about that test is whether it caused the ssd to do much of the writing using fast SLC mode. Use of SLC mode is a trade-off, temporarily boosting ssd write speed at a cost of lifespan -- the FTL controller eventually slowly rewrites the data using TLC mode, which stores more bits per cell. I don't know whether the MX500 has a limit of SLC-written blocks that causes it to switch back to slower TLC writing mode when the limit is reached. The fraction of data written using SLC mode might be very different during a test that involves continuous writing by the host pc, compared to the fraction written using SLC mode in typical operations, and this difference would affect the ssd's endurance. Did the guy provide any info about the writing speed during his test, such as how long it took to write 1,100 TB?

The test presumably used sequential writing mode, which is more efficient than randomly writing small amounts of data, because overwriting even one byte of an ssd block requires the entire block to be erased first. Writing small amounts is often done in typical operations, and requires more maintenance writing later by the FTL controller than massive sequential writing requires.

Another difference between that test and typical operations is that the test's continuous writing would prevent the ssd from entering its low-power mode (similar to how ssd selftests prevent low-power mode). Based on my observations before and after the start of the selftests regime, my MX500 ran about 5 degrees Celsius cooler without the selftesting, and I estimated power consumption was approximately one watt less than when the selftests regime is running. Although those are benefits of low-power mode, all else being equal, in posts here a few years ago I speculated that avoiding low-power mode might be a net benefit to ssd health by maintaining a nearly constant temperature, eliminating the physical contractions & expansions that temperature fluctuations cause. It didn't occur to me then, but it does now, that eliminating (or reducing) thermal contractions & expansions might help preserve the health & writability of NAND blocks, resulting in increased write endurance. (Just speculating, of course.)

I believe continuous writing generates more heat than continuous selftest reading does, so the temperature difference during that guy's test would be more than 5C compared to typical operations, but the test would still eliminate temperature fluctuations like selftesting does, which might inflate the test's write endurance result. Did the guy in the other forum provide info about the ssd temperature during his test?
 

The only other thing I remembered was that the guy's 250GB MX500 had an erase count of over 6000...
 
6000 erases is a lot. If taken at face value, this is a good sign for the MX500's actual endurance. During the next few years, as a lot of MX500 drives reach 0% Remaining Lifetime, I expect we'll begin to see reports from users about how far beyond 0% the drives get before dying.

I have only a little time right now to think about whether 6000 erases is consistent with writing 1100 TB to a 250 GB ssd. Here's a rough approximation: 1100 TB divided by 250 GB is about 4400, so if an ssd were perfectly efficient it would need to erase 250 GB about 4400 times to write 1100 TB. But ssds aren't perfectly efficient, which is why write amplification is greater than 1. If the write amplification during the test was approximately 1.36, that would account for the difference between 4400 and 6000. But I suppose we won't learn what the write amplification was.
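
Here's that rough approximation in code form (assuming decimal units, 1 TB = 1000 GB, and that the erase count represents full-capacity erase passes):

```python
# Rough consistency check for the endurance test: 1,100 TB written to a
# 250 GB drive vs. an average block erase count over 6000.
total_written_gb = 1_100 * 1000   # 1,100 TB in GB
capacity_gb = 250

ideal_passes = total_written_gb / capacity_gb  # 4400 erase passes if WA were 1.0
implied_wa = 6000 / ideal_passes               # ~1.36 to reach 6000 erases
print(ideal_passes, round(implied_wa, 2))      # 4400.0 1.36
```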
 

The guy's figures are from CrystalDiskInfo, and also not from "normal" usage...

For my case, with my current 250GB boot drive (18 months old): 10,929 GB written after 203 erases... write amplification is 1.72... 85% life remaining.
 
85% Remaining Life is inconsistent with 203 erases assuming that by "erases" you mean Average Block Erase Count (ABEC). ABEC = 203 should correspond to Remaining Life of 86.5%. The equation is (1500-203) / 1500 = 0.864666667.

From a simple "endurance" perspective -- the ratio of the ssd's Life% Used to Host TB Written -- my MX500 has been performing better than yours, especially during the 3+ years that the ssd selftests regime has been running (selftests began 2/22/2020):

Yours: RL dropped 13.5% (from 100% to 86.5%) while 10.9 TB was written. Endurance Ratio: 1.24
Mine, before selftests: RL dropped 7.8% (from 100% to 92.2%) while 6.4 TB was written. Endurance Ratio: 1.22
Mine, during selftests: RL dropped 5.2% (from 92.2% to 87.0%) while 7.1 TB was written. Endurance Ratio: 0.73
Mine, total: RL dropped 13% (from 100% to 87.0%) while 13.5 TB was written. Endurance Ratio: 0.96
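
Expressed in code, the computation behind those four ratios is simply:

```python
# Endurance Ratio = (percentage points of Remaining Life lost) / (host TB written)
def endurance_ratio(rl_drop_pct, tb_written):
    return rl_drop_pct / tb_written

print(round(endurance_ratio(13.5, 10.9), 2))  # yours: 1.24
print(round(endurance_ratio(7.8, 6.4), 2))    # mine, before selftests: 1.22
print(round(endurance_ratio(5.2, 7.1), 2))    # mine, during selftests: 0.73
print(round(endurance_ratio(13.0, 13.5), 2))  # mine, total: 0.96
```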

Are you sure that your Write Amplification is only 1.72? Before I began the selftests regime, my ssd's endurance ratio (1.22) was similar to yours (1.24), but my ssd's Write Amplification was 7.24. The equation is (1,399,713,786 FTL pages written / 224,290,793 host pages written) + 1 = 7.24.

Your "18 months" figure isn't as useful as it could be, because you haven't stated the fraction of that time that your pc has been off. Without that info, we can't estimate the rate at which the pc wrote to the ssd while the ssd was on. The greater the rate that the pc writes while it's on, the less noticeable is the FTL bug's excessive writing compared to the FTL controller's non-excessive writing (normal write amplification).

My pc has been on nearly 24 hours per day since I installed the ssd 45 months ago (late July 2019). (Turned off only for occasional maintenance or power outages.) About 43% (5.8 TB) of the pc's total writing to the ssd was during its first 5 months of operation. In late December 2019 I discovered that most of the writing had been by Windows logging and a few apps (Firefox, etc) that frequently wrote temporary data to the system drive by default. So to reduce the waste of ssd life, in late December 2019 I moved many of the Windows logs and the app caches to a hard drive.
 

Below are the latest numbers for my 250GB boot drive... from CrystalDiskInfo:

Total host writes = 11,007 GB
Average block erase count = 205
Lifetime remaining = 85%
Host program page count = 261,848,066
FTL program page count = 190,772,518

So the Write Amplification should be... 1.728
 
Your math for the Write Amplification calculation looks fine.

But your CrystalDiskInfo numbers for ABEC (205) and Lifetime Remaining (85%) seem inconsistent with each other. I've had a 250GB MX500 (like yours) in my laptop computer (as its only drive) since the spring of 2018, and I've occasionally taken screenshots of CrystalDiskInfo, which show that each percent decrease of Remaining Life corresponds to 15 increments of ABEC, the same as for the 500GB MX500. (For example, Remaining Life reached 96% when ABEC reached 60.) So 0% Remaining Life will correspond to ABEC=1500, and ABEC=205 should correspond to 86.333% Remaining Life, not 85%. Also, the ssd rounds Remaining Life up, so 86.333% ought to display as 87% (and not reach 86% until ABEC reaches 15x14 = 210).
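
Here's that mapping as a sketch (assuming the 15-ABEC-per-percent scale and round-up behavior I described):

```python
import math

# Remaining Life model I've observed on my MX500s: 1500 average block erases
# corresponds to 0% RL, and the displayed percentage rounds up.
def displayed_rl(abec, total_erases=1500):
    return math.ceil(100 * (total_erases - abec) / total_erases)

print(displayed_rl(60))   # 96, matching my screenshots
print(displayed_rl(205))  # 87 -- yet your drive displays 85; that's the mystery
print(displayed_rl(210))  # 86
```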

That inconsistency is a mystery.

Perhaps the smaller capacity of the 250GB drive affects the Write Amplification. But I don't understand why the effect would be to significantly decrease Write Amplification rather than increase it. The amount of unused space might make a difference to Write Amplification, but I haven't tried to quantify the effect of unused space.

According to CrystalDiskInfo, my 250GB MX500 has a similar Write Amplification but a better Endurance Ratio than yours:
Total host writes = 4,645 GB
Average block erase count = 67
Lifetime remaining = 96% (95.53% based on ABEC)
Host program page count = 169,048,493
FTL program page count = 127,365,000
Write Amplification = 1.753
Endurance Ratio = (100 - 95.53) / 4.645 = 0.96
132GB free space and 231GB total space (according to Windows)

Another comparison is the ratio of Host Program Page Count to Total Host Writes. One might expect them to be similar if the page sizes are the same, and page sizes are presumably identical when comparing two 250GB MX500 drives.
Your 250GB MX500: 261,848,066 / 11,007 = 23,789 Pages per GB
My 250GB MX500: 169,048,493 / 4,645 = 36,394 Pages per GB
That's a large difference. I think it implies my ssd has been less efficient than yours, since mine has required more ssd pages to be written than yours for the same amount of host bytes written. Since sequential writing is more efficient than random writing, the different ratios might mean my laptop has been writing more random small amounts and less sequential data than yours has.

So there's another mystery: Why does my 250GB ssd have a better Endurance Ratio than yours but a worse Pages Per GB ratio?

Perhaps one of us has a version of CrystalDiskInfo that isn't reliable?
 
Maybe the firmware is different, so the results will be different?

My 250GB MX500 always has over 70% free space... anyway, as long as it can last me 10 years (in theory), it's more than enough... LOL

In the last 1-2 years the MX500 has dropped a lot in price... hopefully not in quality...
 
M3CR020 is the version of the firmware in my 250GB MX500.

The next time that your 250GB MX500's Remaining Life decrements, I hope you'll post its ABEC count here, so we can see how many ABEC increments correspond to each percent decrease of Remaining Life with your firmware version. Given the counts you've already posted, it might be 13 ABEC increments per percent decrease of RL if your ssd firmware rounds RL up, or 14 if it rounds RL down:
(1300 - 205) / 1300 = 84.2%
(1400 - 205) / 1400 = 85.4%

Your 70% of free space is ambiguous, because the ssd's "total capacity" depends on what's included when software reports total capacity. By default, Crucial makes some of the capacity unavailable to the host pc... this is called "ssd overprovisioning." The user can increase or decrease the amount of overprovisioning by using a Crucial software utility. Although that space is unavailable to the pc's OS and apps, the ssd's FTL controller uses it in a way that can supposedly increase the ssd's endurance.

(However, I don't understand why overprovisioning is more effective at increasing endurance than making that space available but leaving it unused. The only difference I see is that blocks in unavailable space can be remapped to replace bad blocks in available space. But this remapping capability doesn't imply overprovisioning needs to be done when the ssd is new or young, because it could be postponed until the ssd is old and has developed many bad blocks... the Crucial utility could be used then to convert free available space to overprovisioning space to make more blocks usable for remapping. Since it's simpler to overprovision early and then forget about it, maybe it's just simplicity rather than necessity that favors early overprovisioning with Crucial's default amount.)

Ten years sounds like a reasonable minimum lifespan. 250GB will probably seem small after ten years, so you might find a reason to replace it even if it still has plenty of RL.
 
My 250GB MX500 always has over 70% free space... anyway, as long as it can last me 10 years (in theory), it's more than enough... LOL

I should add that I don't run the selftests regime on my 250GB MX500. The laptop is usually off, the laptop doesn't write much to the ssd, and the Write Amplification has been low, so there hasn't been a reason to run the selftests or automate logging of its SMART data.
 

OK, I'll monitor CrystalDiskInfo like a hawk from now on... I think the remaining life should drop to 84% this coming week... LOL

By the way, I manually empty my Recycle Bin every time I delete something... will that account for the higher number of erases?
 
You could set CrystalDiskInfo to alert you when Remaining Life drops below 85%.

Emptying the Recycle Bin allows the ssd to write to those blocks after it erases them. But the need to erase blocks is caused by writing to the ssd, and the amount of writing doesn't depend on whether or how often you empty the RB (unless you run out of free space because you haven't emptied the RB). If the blocks occupied by files in the RB aren't available for erasing & writing because the RB hasn't been emptied, the ssd will erase other free blocks in order to write somewhere. (Unless it has run out of free space, in which case you need to free some space by emptying the RB, at least partially.)

The mystery I mentioned is why your ABEC count is LOWER than what I expect for 85% Remaining Life. So I'm unsure what you're referring to when you ask about "the higher number of erases."