Re: Optane: I had no idea! Looked around and found:
...
Ouch! Guess I'd better power up that P5000X drive I have lying around and see how bad it is. Does 3 months even count as "Static" memory? Sounds more like DRAM with a 3-month refresh cycle. No wonder Intel isn't making them anymore (well, among other things).
Somehow, I think it just can't be that bad. If it were, it'd be common knowledge, because Intel sold enough Optane drives to consumers that lots of people would've been caught out by it. I suspect the 90-day figure is either a minimum they felt they had to state or an extreme corner case. Do please let us know what you find.
BTW, is it a P5800X? I have one of those. Only 400 GB, which is plenty for a boot drive. I bought it when they announced the product line discontinuation, but then it sat on the shelf while I tried to decide what to do with it. I've finally reached the point where I'm ready to install it in a machine I use daily.
Powering it up regularly (maybe once a year?), running TRIM, leaving it up for, say, a couple of days (enough time for the "StaticDataRefresh" technology to do its thing), then running chkdsk /r, leaving it up for another day, and shutting it back down until the next year should help.
Yeah, that uncertainty is what I hate, because I have a couple machines I rarely use. In the past, I figured just reading all the blocks might be enough to trigger the drive to copy or refresh all the weak ones.
Maybe I will switch to using the Linux badblocks tool, which has a mode where it reads the old block, writes a test pattern, reads back the test pattern, and then puts your original data back. That's a couple more writes than I'd like, but it does guarantee the block is refreshed!
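In case it helps anyone else: the non-destructive mode is badblocks' -n flag. A minimal sketch, assuming /dev/sdX stands in for your drive (which has to be unmounted first):

    # -n = non-destructive read-write: read each block, write test patterns,
    #      verify them, then restore the original data
    # -s = show progress, -v = verbose
    # -b 4096 = use 4 KiB blocks so very large drives don't overflow the counter
    badblocks -nsv -b 4096 /dev/sdX

That rewrites every sector in place, so every cell gets a fresh charge without you having to shuffle data anywhere.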
Also, I just have to point out that certain people in this thread have been implying that the only thing to worry about is a corrupt file. Of course, that's not true. Some of what the drive holds is filesystem data structures, and if those get corrupted, you could lose vast numbers of your files. Statistically, isolated errors are much more likely to land in regular file data, but you're only a roll of the dice away from one hitting the filesystem itself.
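If you're worried the metadata has already taken a hit, a read-only check is a cheap way to find out. A sketch, assuming an ext4 volume on /dev/sdX1 (unmount it first):

    # -f forces a full check even if the filesystem is flagged clean
    # -n opens it read-only and answers "no" to every repair prompt,
    #    so nothing on the drive gets modified
    e2fsck -fn /dev/sdX1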
Every 5 years, do a low-level format of the drive (preferably using an SSD tool) to get all of the usable cells back to the 0 state, write the image back to it, and shut it back down?
Like I've been discussing with @jp7189, reformatting is probably a waste of time. If you want to force the drive to be refreshed, either use a tool that does a block-level refresh or just copy the filesystem image off and then back onto the drive.
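For the image-off-and-back approach, plain dd is enough. A sketch, assuming the whole drive is /dev/sdX, nothing on it is mounted, and /mnt/backup has room for the full image:

    # copy the entire device off to a file on another drive
    dd if=/dev/sdX of=/mnt/backup/drive.img bs=1M status=progress
    # then write it straight back; every block gets rewritten, i.e. refreshed
    dd if=/mnt/backup/drive.img of=/dev/sdX bs=1M status=progress

As a side effect, you also walk away with a backup of the drive.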
If you have any way to force it to run TRIM after that refresh, that would be a really good idea, because after a full rewrite the underlying FTL (Flash Translation Layer) thinks your volume is 100% full, even if it's not. TRIM tells the drive which blocks you're not using, so the FTL can add them to its pool of reserve blocks to use for wear-leveling.
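On Linux, that part is a one-liner once the filesystem is mounted again (the device and mount point here are just examples):

    mount /dev/sdX1 /mnt/data
    # tell the drive which blocks the filesystem isn't actually using;
    # -v reports how many bytes were trimmed
    fstrim -v /mnt/data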
I'm not sure whether anyone ever wrote a guide on how to get the maximum powered-down lifespan out of an SSD; probably not. I do know that keeping a powered-down SSD cool helps it retain its charge states.
Yeah, it's the kind of thing where most of us are flying blind. You don't really know what's going on until errors start popping up. I'm sure people who work on these drives have more insight; there ought to be ways for them to see low-level details, like rates of charge decay. I think modern NAND must do some sort of calibration, using cells that hold a known value to adjust the read voltages for the rest of the cells in the block. The SSD's controller probably has a way to see that information, and it would give much deeper insight and earlier feedback on the state of the drive.
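Short of vendor tools, the closest the rest of us can get is the drive's own health and error counters. A sketch using nvme-cli and smartmontools (the device name is an assumption):

    # NVMe health log: media errors, error-log entries, spare capacity, etc.
    nvme smart-log /dev/nvme0
    # smartctl shows the same attributes plus the error log; works for SATA too
    smartctl -x /dev/nvme0

It won't show charge-decay rates, but a rising media-error count is about the only early warning we can actually see.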
Interesting discussion, for sure. Thanks for sharing your details!
: )