It's unclear what you meant by "this" where you wrote "it does indeed appear this is resolved [...]". Perhaps you meant the "excessive F8 writing" WAF bug, but you might have meant the "low Power On Hours" bug that we've also discussed here, or maybe you meant both. You highlighted POH in your screenshot, but you wrote about F8. (People often use pronouns ambiguously without realizing it.)
Quoting your post: "it does indeed appear this is resolved in the MX500's with the new controller/firmware."
[Deleted Diceman's CrystalDiskInfo screenshot. You can see it in his post above.]
"There hasn't been any significant increases to F8 that weren't linked to F7s."
I don't think your CrystalDiskInfo data is solid evidence that the new controller/firmware has fixed the excessive F8 writing bug. Your ssd's F6, F7 and ABEC are still very low, indicating the host pc hasn't yet written much to the ssd:
F6 = 2,042,516,406 sectors. At 512 bytes per sector, that's about 974 GB written by the host.
F7 = 27,672,197 NAND pages. At approximately 37,000 bytes per NAND page, that's approximately 954 GB written by the host.
ABEC = 10 (Average Block Erase Count).
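For anyone who wants to check the arithmetic, here's a minimal sketch of the unit conversions, assuming 512 bytes per sector for F6, roughly 37,000 bytes per NAND page for F7, and "GB" meaning GiB (2^30 bytes), which is how the figures above work out:

```python
# Minimal sketch of the unit conversions above. Assumptions: 512 bytes per
# sector for F6, ~37,000 bytes per NAND page for F7, and "GB" meaning GiB
# (2**30 bytes), which matches the 974/954 figures quoted.

GIB = 2 ** 30

def f6_to_gib(sectors, bytes_per_sector=512):
    """F6 (Total LBAs Written) counts 512-byte sectors."""
    return sectors * bytes_per_sector / GIB

def f7_to_gib(pages, bytes_per_page=37_000):
    """F7 (Host Program Page Count) counts NAND pages; the page size is an estimate."""
    return pages * bytes_per_page / GIB

print(f"F6 -> {f6_to_gib(2_042_516_406):.0f} GiB written by the host")  # ~974
print(f"F7 -> {f7_to_gib(27_672_197):.0f} GiB written by the host")     # ~954
```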
When I first noticed the problem on my 500GB MX500 about 2 years ago, my pc had already written more than 6 times as much to the ssd as yours has. The first time I logged F7 was on 1/15/2020 (about 3 weeks after I first noticed the WAF problem): F7 was 214,422,794, F6 was 6172 GB, and ABEC was 90. Your ssd's ratio of F7 to ABEC is similar to what mine was before I began running the selftests mitigation.
On 8/31/2019, when my ssd was about a month old, its ABEC reached 15 (which corresponds to 1% of lifetime used) and F6 indicated the host pc had written 1,772 GB. That ABEC=15 snapshot is the closest data I have to your ABEC=10 data, which is why it's the most relevant comparison. My ssd's ratio of F6 to ABEC at that point is similar to your ssd's ratio: 1772GB/15 (about 118 GB per erase) versus 974GB/10 (about 97 GB per erase).
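Here are both ratio comparisons as quick arithmetic, using only the figures quoted in this post:

```python
# Quick arithmetic for the two ratio comparisons above (all figures are the
# ones quoted in this post; nothing else is assumed).

# F7 NAND pages written per average block erase:
mine_f7_per_erase  = 214_422_794 / 90   # ~2.4 million pages/erase (pre-mitigation)
yours_f7_per_erase = 27_672_197 / 10    # ~2.8 million pages/erase

# Host GB written per average block erase at comparable wear levels:
mine_gb_per_erase  = 1772 / 15          # ~118 GB/erase (at ABEC = 15)
yours_gb_per_erase = 974 / 10           # ~97 GB/erase (at ABEC = 10)

print(round(mine_f7_per_erase), round(yours_f7_per_erase))
print(round(mine_gb_per_erase, 1), round(yours_gb_per_erase, 1))
```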
Anecdotal evidence suggests it takes a while for the "excessive F8 writing" WAF bug to become noticeable, and that the magnitude of the problem then accelerates. So please keep logging your ssd's SMART data at least occasionally, and post it here from time to time so we can see whether your ssd eventually develops the problem.
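If you want to automate that logging, here's a rough sketch of one way to do it with smartmontools on Linux (CrystalDiskInfo on Windows shows the same attributes). The device path, the hourly interval, and the decimal attribute IDs I've used (173 = ABEC, 246 = F6, 247 = F7, 248 = F8) are my assumptions; verify them against your own tool's output:

```python
# Rough sketch: log the MX500 SMART attributes discussed above to a CSV once
# per hour. Assumes Linux + smartmontools and root privileges; the decimal
# attribute IDs (173 = ABEC, 246 = F6, 247 = F7, 248 = F8) and the device
# path are assumptions -- check them against your own drive first.

import csv, re, subprocess, time
from datetime import datetime

DEVICE = "/dev/sda"                      # adjust to your drive
IDS = {173: "ABEC", 246: "F6", 247: "F7", 248: "F8"}

def read_attrs():
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True).stdout
    vals = {}
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+\S+", line)
        if m and int(m.group(1)) in IDS:
            # RAW_VALUE is the last column for these plain-integer attributes.
            vals[IDS[int(m.group(1))]] = int(line.split()[-1])
    return vals

with open("mx500_smart_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        v = read_attrs()
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         v.get("ABEC"), v.get("F6"), v.get("F7"), v.get("F8")])
        f.flush()
        time.sleep(3600)                 # once per hour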
At least one person in this forum thread suggested the problem might not manifest on every unit, and might be triggered on affected units by a bad event that corrupts a database maintained by the firmware: perhaps a power-off before a clean pc shutdown, or perhaps a power surge. If the chance of such a triggering event isn't zero, the cumulative chance of it having occurred increases with time.
One thing for potential buyers to keep in mind, while it's still in doubt whether the new version fixes the bug, is that the new firmware might also prevent selftests from mitigating the bug. Presumably the selftests work because the buggy routine runs at a lower priority than the selftest routine; the new firmware might reverse those priorities or make them equal.
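For context, the selftests mitigation amounts to keeping a SMART selftest running most of the time. A minimal sketch of that idea, assuming smartmontools on Linux (my actual setup differs, and the sleep interval is only a placeholder):

```python
# Minimal sketch of the selftests mitigation idea: keep an extended SMART
# selftest running most of the time so the (presumably lower-priority) buggy
# routine rarely gets to run. Assumes Linux + smartmontools and root; the
# 10-minute interval is only a placeholder -- tune it to roughly how long an
# extended selftest takes on your drive.

import subprocess, time

DEVICE = "/dev/sda"        # adjust to your drive

while True:
    # Start an extended (long) selftest; issuing a new one while a test is
    # still in progress may abort and restart it on some drives.
    subprocess.run(["smartctl", "-t", "long", DEVICE])
    time.sleep(10 * 60)
```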
It does look like Crucial's new controller/firmware may have fixed the "low Power On Hours" bug. Your POH is 1506. My POH was only 883 on 1/15/2020, which was about 4,000 hours (5.5 months) after the ssd was installed, even though the pc had been powered on nearly 24 hours per day.
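A back-of-the-envelope check of those numbers, using only the figures stated above and assuming roughly 24/7 uptime:

```python
# Rough check of how far my POH lagged behind real powered-on time.
months_installed = 5.5
expected_hours = months_installed * 30.4 * 24   # ~4,013 hours if powered ~24/7
reported_poh = 883
print(f"expected ~{expected_hours:.0f} h, reported {reported_poh} h "
      f"({reported_poh / expected_hours:.0%} of expected)")   # ~22%
```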