[SOLVED] Preventing stored SSDs from losing data


tmpc1066

Distinguished
When storing an SSD with data on it, my understanding is that it must be powered up periodically to prevent data loss. The questions I can't seem to find a definitive answer to are whether it needs to be connected to a computer when it's powered up, and whether it actually needs to be read by the computer. If so, must the entire drive be read?

I also read one source indicating that the housekeeping firmware in some SSDs will automatically read all of the data and rewrite anything it finds to be weak, and that this behavior varies from manufacturer to manufacturer. Does anybody know anything about this, or where I can find definitive answers?

You may wonder why I'm using SSDs instead of HDDs. The reason is that these aren't normal backups. They are drop-in replacement boot drives containing authorized, licensed installs of music software I own. It's an insurance policy against the software no longer being supported someday, or the installers no longer running on the old operating systems I need for this software. I have backup computers as well.

By the way, these are Mac drives if it matters.

Am I nuts? You bet! o_O
 

USAFRet

Titan
Moderator
I started to, but found other info that indicated that simply powering it may not do anything. The problem with this whole thing is that the companies making the SSDs don't make the inner workings of their drives public. But if you look at the structure of a flash cell, there appears to be no way that simply applying power to the circuitry could specifically re-charge the cells that need it. That's the whole point of these things: the charge is isolated until you read it. I have no way to know if reading the whole drive will do anything, but I do know that the management scheme can read the voltage level of a cell and will re-write the data elsewhere to avoid data loss. Reading the whole drive will force it to see what needs attention.
And using Linux would presumably be little different than Windows.

A Linux LiveUSB is no different.
 

tmpc1066

Distinguished
This link is to an article on Quora written by an engineer named Dave Haynie. He actually works with a friend of mine and I'm trying to get in touch with him so I can ask him some questions. Anyway, check out the second paragraph in the section titled Data Retention.

The other interesting thing I ran across is this from 2011. As you can see in the reply from Kingston, simply powering up the SSD or reading it doesn't rejuvenate the data.

I wish I could find an actual technical document on this, but so far, no luck. Beyond my attempt to get in touch with Dave Haynie, I also put in a request with this guy I've been talking to at Kingston to talk to someone in engineering. We'll see if that happens.
 

USAFRet

Titan
Moderator
This link is to an article on Quora written by an engineer named Dave Haynie. He actually works with a friend of mine and I'm trying to get in touch with him so I can ask him some questions. Anyway, check out the second paragraph in the section titled Data Retention.

The other interesting thing I ran across is this from 2011. As you can see in the reply from Kingston, simply powering up the SSD or reading it doesn't rejuvenate the data.

I wish I could find an actual technical document on this, but so far, no luck. Beyond my attempt to get in touch with Dave Haynie, I also put in a request with this guy I've been talking to at Kingston to talk to someone in engineering. We'll see if that happens.
That is mostly about flash drives.

But referring to "SSD"....
"So the larger memories coming out today, 256GB per chip and so, are actually 3D chips using larger geometries. Some of these stack 48 bits worth of data on separate layers, and the reliablity is up. "

How old is this?


And a decade-old statement from Kingston (again, referring mostly to flash drives) is...old.
The SSD landscape has changed significantly.


Agreed that just "power on" is probably not enough.
But being read from a relevant OS (Windows or Linux)...how is that not viable?
The drive firmware would kick in, and do all that stuff behind the scenes. TRIM, wear leveling, etc.
 

tmpc1066

Distinguished
How old is this?
It says it was updated about a year ago.
Agreed that just "power on" is probably not enough.
But being read from a relevant OS (Windows or Linux)...how is that not viable?
The drive firmware would kick in, and do all that stuff behind the scenes. TRIM, wear leveling, etc.
That's the point. Reading the data in and of itself does nothing. Forcing a read of the entire SSD shows the internal management system the condition of all of its data. It will then fix whatever needs fixing.
 

USAFRet

Titan
Moderator
Updated a year ago...what parts of it?
Anyway....

How is hooking it up to a running system not enough?

If I plug a drive into a USB dock, I can see the entire contents, in either Windows or Linux.
To see, it must first be read. Correct?

I imagine some parts of the various drives in my system have not been accessed by me or Windows in years.
But the SSD firmware IS accessing it, behind the scenes.

Bottom line, though...I doubt you'll find any definitive 'time limit' on an unpowered drive.
Especially a single instance of said drive.
 

USAFRet

Titan
Moderator
Further thoughts....

Since this software is tied to a specific drive S/N...

A backup Image, stored elsewhere (possibly multiple copies of it), reapplied to this same drive at some point in the future.
Same drive, same software. Just not relying on storing it forever on this same SSD.

Would that work?
 
You can use "dd" in Linux to read every sector from the source device to the null device (dev\nul). Just make absolutely certain that you specify the source and destination correctly.

I haven't tried it in Windows 10, but this command used to work in earlier Windows versions. You need to execute it within a CMD window.

Code:
xcopy X: nul /r /i /c /h /k /e /y

Some of the options aren't necessary for the NUL device.

X: is the drive letter of the source.
 

deesider

Distinguished
How is hooking it up to a running system not enough?

If I plug a drive into a USB dock, I can see the entire contents, in either Windows or Linux.
To see, it must first be read. Correct?
Plugging in a drive and seeing what the contents are doesn't require accessing all of the contents. The system only reads the directory metadata, not the data itself.

I think you're right though, having the drive turned on and active should let it do its maintenance and housekeeping tasks to keep the data valid. The individual cells won't be refreshed automatically; it isn't as if the charges can be topped up like a battery. But given that the often-quoted retention time is 10 years for an unused drive, an active drive must surely be refreshing the data at some stage.
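As a rough illustration (the path is just a placeholder for wherever the drive mounts), a directory listing only walks the file-system metadata, while forcing every file through a read actually pulls the data through the controller:

Code:
ls -R /Volumes/StoredSSD > /dev/null
find /Volumes/StoredSSD -type f -exec cat {} + > /dev/null

Even the second command only touches live files; a sector-level read like the dd approach mentioned earlier is the only way to cover every block.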

I think for the OP's situation the bigger long-term concern would be the PC breaking down rather than the drive.
 
http://www.puransoftware.com/DiskFresh.html

DiskFresh is a simple yet powerful tool that can refresh your hard disk signal without changing its data by reading and writing each sector and hence making your disk more reliable for storage. It also informs you if there are any damaged/bad sectors so you know the right time to replace your disk.

Can work in read-only mode too so as to just inform about bad sectors.
 

tmpc1066

Distinguished
Since this software is tied to a specific drive S/N...

A backup Image, stored elsewhere (possibly multiple copies of it), reapplied to this same drive at some point in the future.
Same drive, same software. Just not relying on storing it forever on this same SSD.

Would that work?
Funny you should mention this, as I was looking into this yesterday. I have used Carbon Copy Cloner for years and used it to clone the drives I was experimenting with. But, it turns out that although it makes a bootable copy of a drive, it isn't an exact copy of it. So, after some further investigation into various cloning software, I learned that Apple's own Disk Utility will make an exact clone. But for some reason, cloning in Disk Utility only cloned the OS to the new drive. However, it can also create disk images. I tried that and found that its disk images are complete. So, I tried it on a new SSD and it worked . . . except for a few things that apparently look at the drive ID. So, today I'm going to try it with a second drive that all of the software has been authorized on to see if that works. So, I'll let you know.
 

USAFRet

Titan
Moderator
Funny you should mention this, as I was looking into this yesterday. I have used Carbon Copy Cloner for years and used it to clone the drives I was experimenting with. But, it turns out that although it makes a bootable copy of a drive, it isn't an exact copy of it. So, after some further investigation into various cloning software, I learned that Apple's own Disk Utility will make an exact clone. But for some reason, cloning in Disk Utility only cloned the OS to the new drive. However, it can also create disk images. I tried that and found that its disk images are complete. So, I tried it on a new SSD and it worked . . . except for a few things that apparently look at the drive ID. So, today I'm going to try it with a second drive that all of the software has been authorized on to see if that works. So, I'll let you know.
And if this were in the Windows world, I'd recommend Macrium Reflect for that function.
 

tmpc1066

Distinguished
I think you're right though, having the drive turned on and active should let it do its maintenance and housekeeping tasks to keep the data valid.
If this were a drive mounted on the computer all of the time, I'd agree with you. But it won't be . . . and how long does it take to get to everything? I actually thought about buying a machine to just run with all of the drives mounted on it, but an iron lung for SSDs was just too weird a concept even for me.
I think for the OP's situation the bigger long-term concern would be the PC breaking down rather than the drive.
Because I use older machines, I've picked up used duplicates for peanuts relative to new ones. So, I've got that covered.
 

tmpc1066

Distinguished
You can use "dd" in Linux to read every sector from the source device to the null device (dev\nul). Just make absolutely certain that you specify the source and destination correctly.

I haven't tried it in Windows 10, but this command used to work in earlier Windows versions. You need to execute it within a CMD window.

Code:
xcopy X: nul /r /i /c /h /k /e /y

Some of the options aren't necessary for the NUL device.

X: is the drive letter of the source.
Thanks for this info. I'm on a Mac, but I'm sure there's some way to do it. I would probably set it up as a script.
In my poking around, I ran across this app, but didn't know it had a read-only mode. It's PC-only, but I just need it to read everything, so I guess it might work. Thanks for the info, fzabkar.
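Something along these lines might work as that script (just a sketch - it assumes the stored SSD shows up as disk2, which I'd confirm with "diskutil list" every single time, since dd will happily read whatever device it's pointed at):

Code:
#!/bin/sh
# Full read of the stored SSD: every sector goes to /dev/null,
# which forces the controller to look at every block.
# Check the disk number with "diskutil list" before running this!
sudo dd if=/dev/rdisk2 of=/dev/null bs=1m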
 

tmpc1066

Distinguished
And if this were in the Windows world, I'd recommend Macrium Reflect for that function.
Besides my two Macs, I also have a PC in my music setup. I haven't gotten to it yet, but will have to deal with this same problem on that machine. But, due to a previous conversation and recommendation from you, I use Macrium to back up that machine. So, I'll be looking into that soon, but I'm still working on buying a backup PC.
 

deesider

Distinguished
Besides my two Macs, I also have a PC in my music setup. I haven't gotten to it yet, but will have to deal with this same problem on that machine. But, due to a previous conversation and recommendation from you, I use Macrium to back up that machine. So, I'll be looking into that soon, but I'm still working on buying a backup PC.
The simplest and yet still over-engineered approach would be to clone the drive onto an HDD (make several copies onto the HDD just in case), upload a copy to IDrive (and OneDrive, Google Drive, etc.), then put the SSD and the HDD in a cupboard. In 5 years' time, if you want to refresh your SSD backup, you can just recopy the clone back onto the SSD.
 

tmpc1066

Distinguished
The simplest and yet still over-engineered approach would be to clone the drive onto an HDD (make several copies onto the HDD just in case), upload a copy to IDrive (and OneDrive, Google Drive, etc.), then put the SSD and the HDD in a cupboard. In 5 years' time, if you want to refresh your SSD backup, you can just recopy the clone back onto the SSD.
Yeah, that's what I'm thinking. I still need specific SSDs for the few apps and sound libraries that look at the drive ID, but for the bulk of what I have, an image could be loaded onto any SSD, though it would still be tied to a particular computer.

As for cloud backups, I don't trust the companies that control them. I currently have multiple backups with a copy in a safe deposit box, so I don't think I need any more than that.

Thanks, deesider.
 

tmpc1066

Distinguished
FYI, I sent the following questions to Kingston over the last week and received a reply today.

The questions . . .

1) When powering up an SSD to rejuvenate its bit charges, is it necessary to have it connected to a computer? In other words, will the bit charges be rejuvenated if the SSD is simply connected to a 5-volt power supply, or does it require some software interaction with the computer to actually rejuvenate the bits? I saw one reference that stated that "most newer SSD controllers implement patrol read and patrol scrub algorithms that periodically read all LBAs to repair or refresh them as necessary." Is this true?

2) If power is all that is required, is there any concern when powering down? In other words, if the drive is in the middle of re-writing data to a block and the power is suddenly removed, can that cause data corruption? One advantage I can see in actually connecting the SSD to a computer is that the act of unmounting it gives the SSD a "heads-up" that a power-down is coming.

3) In either scenario listed above, how long does the drive need to be powered up, or powered up and connected to a computer, to rejuvenate all of its data?


4) Is it then true that, when an external SSD (USB connected, for example) is unmounted by the operating system, the system within the SSD will finish whatever it was doing to avoid data loss?

5) When an external SSD is connected to a computer, how long does it need to stay connected to refresh any data that is in danger of being lost?

Kingston's engineering department replied . . .

In general, client SSD data retention is rated for 1 year without powering on the SSD; enterprise SSD data retention is rated at 3 months without power. This doesn't mean that after those thresholds are passed the data is instantly corrupt, but instead that this is just what we expect it to meet based on a JEDEC standard. While all SSDs differ in various ways, on most SSDs a patrol read will be triggered if data is found to be weak. Very weak data will be re-written.

With regards to items 1 and 3, to ensure that all data was still valid past that data retention limit, it would need to be accessed (read). If it can be read, it's still valid. One could also read the drive from end to end to be sure the data was read and allow weak-data processing to do its job re-writing marginal data. Best practice, if concerned about data reliability, would be to back up the data, secure erase the SSD and then re-image it with the backup. We don't think there's an answer to 'how long the drive needs to be powered up' as it would vary from drive to drive, based on a variety of things like density, data present on the drive, last time data was accessed, how long the drive has been sitting, etc. Really hard to pinpoint an answer for this.


Additionally, for item 2, we don't think 'just power is required', based on the engineering team's response in item 1. However, anytime you unplug or shut a system down, you risk corruption. PLP caps (capacitors) help to prevent this, as do firmwares that create 'checkpoints' on drives without PLP. In general, whether internally or externally installed, we advise you NOT to cut off power to the drive. Other than these typical concerns, there shouldn't be an additional worry when a patrol read or other background operation is ongoing.

The following reply was to questions 4 and 5 . . .

If an SSD is connected externally via enclosure (or USB for that matter), we'd always rely on the 'blinking light' present on the device to indicate when the drive is done doing what it needs to. The OS can say 'device disconnected', but this doesn't always sync up with the drive's activity. Again, no answer on 'how long a drive needs to stay connected to refresh its data'.
 

Karadjgne

Titan
Ambassador
SSDs have a general shelf-life minimum of about 6 months, but can go as high as a year before losing data integrity. They don't store data as such; they store differential voltages in the transistors, those voltages being the 1s and 0s of machine language.

The reason you need to power them on as an SSD is because of the TRIM function which verifies and 'fixes' any voltages stored by rotating them in/out of the transistors. Without TRIM, that doesn't happen. It's all part of OS garbage collection which works in conjunction with the read/write processes.

As to how long the drive needs to be actively plugged in, there's no simple answer as that all depends on the size of the drive, the amount of data stored, if it's indexed etc.

Personally, I'd just plug it into the pc, let windows take a good look at the data, and when you can pull up the entire contents it should be good. At worst do a quick AV test on the drive, guaranteed all the data will be touched/searched which then will certainly be voltage refreshed.

If concerned about external influences writing to the SSD, I'm pretty sure you can change the attributes of the SSD to 'read only', changing it back to read/write if/when adding data.
 

tmpc1066

Distinguished
The reason you need to power them on as an SSD is because of the TRIM function which verifies and 'fixes' any voltages stored by rotating them in/out of the transistors. Without TRIM, that doesn't happen. It's all part of OS garbage collection which works in conjunction with the read/write processes.
That's not what TRIM does. It's a mechanism that allows the OS to tell the SSD that particular stored data is no longer needed (i.e., deleted files). Otherwise, that data will eat up that space forever. It doesn't matter with an HDD because the OS just tells it to overwrite the unneeded data. You can't do that with an SSD.

As for garbage collection, I don't believe the OS knows of its existence. It's the smarts in the SSD, working behind the scenes, to clean up the wasted space it creates when writing partially empty blocks.
Personally, I'd just plug it into the pc, let windows take a good look at the data, and when you can pull up the entire contents it should be good. At worst do a quick AV test on the drive, guaranteed all the data will be touched/searched which then will certainly be voltage refreshed.
Yep, that's the reason I was asking for something that would simply read the entire drive.
 

Karadjgne

Titan
Ambassador
SSD TRIM is complementary to garbage collection. The TRIM command enables the operating system (OS) to preemptively notify the SSD which data pages in a particular block can be erased, allowing the SSD's controller to more efficiently manage the storage space available for data. TRIM eliminates any unnecessary copying of discarded or invalid data pages during the garbage collection process to save time and improve SSD performance.

The SSD TRIM command simply marks the invalid data and tells the SSD to ignore it during the garbage collection process. The SSD then has fewer pages to move during garbage collection, which reduces the total number of program/erase cycles (P/E cycles) to the NAND flash media and prolongs the life of the SSD. NAND flash wears out due to the long-term effects of the P/E cycle, so reducing the number of erases can lengthen the endurance of the SSD.

In order for TRIM to function, the host's OS and the SSD must support it. For example, in a Windows environment, when an SSD reports that it has TRIM support, the OS will disable disk defragmentation and enable TRIM. When a user deletes a file, the OS sends a TRIM command to the SSD controller to tell it which data pages can be erased when the garbage collection process takes place. The TRIM command and the write command operate independently of each other. The user also has the option to initiate the TRIM command manually or schedule it on a daily basis.
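On Windows that manual trigger is exposed through the built-in optimizer; for example, from an elevated prompt (a sketch only, with X: standing in for the SSD's drive letter):

Code:
defrag X: /L

The /L switch asks for a retrim of the volume rather than a defrag; the PowerShell equivalent is Optimize-Volume -DriveLetter X -ReTrim.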

Nothing gets done without OS say-so. No garbage collection, no TRIM, no searches, indexing, erasing, shuffling, nothing. The controller in the SSD is a good little soldier who only follows orders, never initiates thinking for itself. But when you do plug an SSD into a PC, it's guaranteed that busy-body Windows wants to know exactly what it's got, whether it's different from the prior hardware makeup, whether it's library-accessible, and why not if not.
Which invariably means a TRIM command to figure out what is data and what is not, reshuffling the data and setting aside anything marked for garbage collection. And that'll happen with the boot process, since the BIOS will detect a hardware change and inform Windows of such.

Using an external is using USB, which is hot-swapping, and uses a handshake to initiate the drive in Windows through the external's controller.
 

tmpc1066

Distinguished
Nothing gets done without OS say-so. No garbage collection, no TRIM, no searches, indexing, erasing, shuffling, nothing.
This Ars Technica article disagrees with you:
There is a barrier between the world of files and the world of blocks. The operating system doesn’t have any say in how an SSD’s controller does its job or which blocks the SSD controller uses to write data—the OS knows all about its file system, but nothing about the blocks underneath. Similarly, the SSD’s controller knows everything about what blocks are in use and what blocks are free, but it has no way of knowing which blocks correspond to which files.
Also, remember the info I just received from Kingston?
The OS can say 'device disconnected', but this doesn't always sync up with the drive's activity.
If the OS was in complete control, I don't think this situation could exist. But, I never rule out the possibility that I'm completely wrong. So, can you provide a link to a reference concerning what you are saying?
 
When storing an SSD with data on it, my understanding is that it must be powered up periodically to prevent data loss. The questions I can't seem to find a definitive answer to are whether it needs to be connected to a computer when it's powered up, and whether it actually needs to be read by the computer. If so, must the entire drive be read?

I guess I missed this thread and I see now you have had lots of answers and research, but I'll still give my quick (and educated) take:

When you power on the drive it pushes the firmware and boot code to SRAM and does a variety of tasks. For example, if you had a power loss event with data-in-flight, the drive will determine this and restore existing lower page values before proceeding. Another example is a sanitize operation - once started, the drive will not stop until it is complete, restarting on the next power-on. A third example would be what you're asking about, in that it will check a status table to see if pages or blocks need to be refreshed (rewritten), which generally requires an RMW (read-modify-write) cycle and an erase. This requires a table because not all pages or blocks are the same; they have different characteristics such as voltage offsets, etc. Essentially, over time you have something like current leakage that will impact the voltage thresholds, which eventually will degrade the data beyond redemption. This means error correction, which starts with hard decoding, then soft decoding with read retries, and finally RAID parity in the worst case. In any case, this leakage does not necessarily deplete voltage/charge so much as shift the thresholds, which is why simply repumping the charge (rather than rewriting) isn't always viable. I say not always because there are other types of wear that, when combined with this, actually improve retention - for example temperature - but that's a more complicated subject (which I may write an article on soon, actually).

The SSD has a controller and flash translation layer for a reason: it abstracts the underlying flash from the host/OS. That's one reason you need a driver for TRIM, for example. The drive can do garbage collection and maintenance without any system intervention; TRIM exists as an ATA command, but drives also work with SCSI's UNMAP - my point being, the command exists for the benefit of the host/OS to command the drive (this gets more complicated with NVMe in enterprise, for example, asynchronous I/O, etc.). Functions like rewriting weak data can be done at that level, as the controller will read these tables and sample the blocks, tracking things like last-read time in the block table. Most SSDs have hybrid addressing, with logical page (or subpage) addressing at 4K while also having a block-level table for tracking wear. The reason for this is that erases happen at the block level and, for that matter, SLC caching (for example) is also at the block level. Wear-leveling is done at the block level as well. It's not true to say that all wear-tracking is done at the block level, as you need to cycle based on page wear/errors for dynamic SLC, for example, but this is a different discussion.

This is not a super-thorough answer (even though it may look like it) because I'm simplifying the mechanics, but one related point is of course "timing" - in fact, timing is quite important with storage, which is why you'd need at least a BIOS check. However, modern SSDs are quite complicated and in fact have many internal timers. If you need specific detail on any area here, or a related area, let me know...
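If you want to nudge the controller into walking its media and then see what it found, about the closest you can get from the outside on a consumer drive is SMART - roughly something like this (a sketch assuming smartmontools is installed and the drive appears as /dev/sdX; a long self-test makes most controllers read through the media, and the attributes will show any reallocation or ECC activity afterwards):

Code:
sudo smartctl -t long /dev/sdX
sudo smartctl -a /dev/sdX

But as above, there's no universal guarantee of what a given firmware does with that, so treat it as a health check rather than a refresh.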
 
Solution

tmpc1066

Distinguished
Well, I'm blown away by this info, Maxxify. I assume the link to NewMaxx on Reddit is you as well. I have been looking for detailed info on SSDs for a very long time. How did you come by all of this?

I am sure I will have questions, but it will take me a while to get through all of this to even know what to ask. So, thanks Maxxify.
Again, I'm blown away!!! o_O