chaz_music :
The typical failure modes that I would think of for CMOS would be ESD and ionizing radiation (gamma rays, X-rays).
Something like 15 years ago, I saw a review of PSUs where they modified memtest86 to run a 12-hour bit-fade test. They found a statistically-significant difference in the error rate, across the range of different power supplies tested. I can no longer seem to locate the article, however.
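The bit-fade idea itself is simple: fill memory with a known pattern, wait, and count any bits that changed. Here's a toy sketch of that in Python (buffer size and delay are arbitrary choices of mine); note that memtest86 does this on bare metal with caching out of the way, which an ordinary process can't guarantee, so this is illustration only:

    import time

    # Toy version of a bit-fade test: write a known pattern, wait,
    # re-read, and count flipped bits. memtest86 runs this on bare
    # metal with caching disabled; from a normal process the OS may
    # cache or page this buffer, so treat it purely as illustration.

    SIZE = 16 * 1024 * 1024        # 16 MiB test buffer (arbitrary)
    PATTERN = 0xAA                 # alternating 10101010 bit pattern
    DELAY_S = 60                   # the real test waits for hours

    buf = bytearray([PATTERN]) * SIZE
    time.sleep(DELAY_S)            # let any charge leakage accumulate

    # XOR each byte against the expected pattern; the popcount of
    # the result is the number of flipped bits in that byte.
    flipped = sum(bin(b ^ PATTERN).count("1") for b in buf)
    print(f"{flipped} flipped bits in {SIZE} bytes after {DELAY_S}s")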
chaz_music :
All of these failure modes can take quite a long time to surface. In the meantime, you have corrupted your OS and data (BSOD and "bad files"). The irreplaceable photos, MP3 files, term paper, etc. are in trouble.
Only if you're keeping them on a local filesystem that's heavily used, or if you're copying them around or modifying them often enough.
With cloud-based storage, the filesystem is hosted on a server with ECC and (aside from the initial upload) most of the accesses by the client PC are reads. So, very little risk, with a web-type access pattern. Using a network-based file server is similar.
With either type of remote storage, the risk is for users with large CAD or other datasets that they're frequently loading, modifying, and saving. This is somewhat mitigated by using some sort of server-side source control system (or backups). I say "somewhat" because, on more than one occasion, I've seen a collaboratively edited MS Word doc become increasingly flaky and "broken". Going back to the last known good version (and you can't even be sure that version predates the original error) means losing a lot of edits. So, even backups are not full protection against data corruption.
I do emphasize the point about dataset size, since small datasets are unlikely to be corrupted by infrequent memory errors. If the memory errors are very frequent, you will experience basic system stability issues. So, the area for silent risk on a client PC with reliable network storage is mostly limited to large and/or frequently-modified, complex datasets.
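One partial defense for those large datasets is to keep content checksums out-of-band and re-verify them periodically, so corruption is at least detected before it quietly propagates into every backup generation. A rough sketch, with the manifest name and command-line usage made up for illustration:

    import hashlib, json, os, sys

    # Sketch: record SHA-256 hashes of a file tree, then re-verify
    # later. A file whose content hash changed while its mtime did
    # not is a strong hint of silent corruption.

    def scan(root):
        manifest = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                manifest[path] = {"sha256": h.hexdigest(),
                                  "mtime": os.path.getmtime(path)}
        return manifest

    if __name__ == "__main__":
        root, manifest_path = sys.argv[1], "manifest.json"
        if not os.path.exists(manifest_path):
            with open(manifest_path, "w") as f:
                json.dump(scan(root), f)       # first run: baseline
        else:
            with open(manifest_path) as f:
                old = json.load(f)
            new = scan(root)
            for path, entry in old.items():
                cur = new.get(path)
                if (cur and cur["sha256"] != entry["sha256"]
                        and cur["mtime"] == entry["mtime"]):
                    print("possible silent corruption:", path)

This is roughly what ZFS scrubbing does for you automatically, but it can be bolted onto any filesystem.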
chaz_music :
with the ZFS file system and even Microsoft's ReFS going after drive-based bit errors, and SSD technology doing the same, there is no longer much of an argument for not having ECC.
SSDs and HDDs have had some level of error correction for a long time.
chaz_music :
there is a cost delta, but that would go to zero after just a short while as ECC became the de facto memory architecture.
It's more than just architecture. It's extra PCB traces and extra chips on the DIMMs. It also takes a small amount of energy to compute & check the ECC codes. It's truly not free, but most CPUs have ECC-protected caches, showing that the additional cost can indeed get rather small.
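For a sense of what "compute & check the ECC codes" involves, here's a toy Hamming(7,4) code in Python: 3 parity bits protect 4 data bits, and the syndrome of the received word points directly at any single flipped bit. Real DIMMs use a wider SECDED code (typically 72 bits protecting 64, hence the extra chip and traces), but the principle is the same:

    # Toy Hamming(7,4): parity bits sit at codeword positions 1, 2,
    # and 4; the syndrome of the received 7 bits equals the 1-based
    # position of any single-bit error, so it can be flipped back.

    def encode(d):                       # d: list of 4 data bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4                # covers positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                # covers positions 2,3,6,7
        p4 = d2 ^ d3 ^ d4                # covers positions 4,5,6,7
        return [p1, p2, d1, p4, d2, d3, d4]   # positions 1..7

    def correct(c):                      # c: list of 7 received bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check each parity group
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s4  # 1-based error position, or 0
        if syndrome:
            c[syndrome - 1] ^= 1         # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

    word = encode([1, 0, 1, 1])
    word[5] ^= 1                         # inject a single-bit error
    assert correct(word) == [1, 0, 1, 1] # corrected transparently

All of that XOR logic runs on every memory access, which is where the small energy cost comes from.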
chaz_music :
And for the enthusiasts, allow ECC to be turned off for overclocking.
Or leave it on, depending on how much instability you're willing to accept.