Question: An HDD on my PC shows up as RAW at almost every startup after using data recovery software, and the problem persists even after a fresh Windows install

May 9, 2025
So I ran "TestDisk" 5 days ago, wanting to recover data on my HDD, D drive. Did not get it but I noticed afterwards on reboots the D drive was missing.

I had to go into disk management to assign a letter to it.

Well I did a fresh Windows install and formatted my PC and the problem still persists!

How is this possible? What could be my issue?
 
Reading further, I gather SMR drives don't actually go and "erase" the blocks that have been marked by TRIM the way SSDs do. The controller just records that the block is cleared for re-use (where it otherwise wouldn't re-use it because of the SMR format), since magnetic recording doesn't need to physically empty the bits to maintain performance the way an SSD does. However, that should mean that software that scans the drive bit by bit can recover that data without needing professional tools or specialty software, since the SMR drive isn't re-mapping the physical sectors the way an SSD does.
When the OS requests the data in an LBA that has been TRIM-ed, an SMR drive instantly responds with zeros. It doesn't even bother to seek to the relevant sector. That's the way that WD's SMR drives appear to behave. If you want to see this for yourself, format an SMR drive and then run a HD Tune read benchmark against it.

The reason for this behaviour is that an SMR drive would normally need to rewrite shingles whenever a sector is modified. However, if the shingles have been TRIM-ed, then they don't need to be rewritten.

Here is an example:

https://i.imgur.com/uB4IwcK.png
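If it helps to picture it, here is a purely illustrative Python model of that bookkeeping (the class and constants are invented for the example; this is not how any real firmware is implemented): the controller keeps a set of TRIMed LBAs, answers reads of those LBAs with zeros without touching the platters, and drops an LBA from the set as soon as it is rewritten.

```python
# Purely illustrative model of the behaviour described above (not real firmware).

SECTOR_SIZE = 512

class TrimAwareDrive:
    def __init__(self):
        self.media = {}        # LBA -> sector contents actually on the platters
        self.trimmed = set()   # LBAs the host has declared unused via TRIM

    def trim(self, lba):
        self.trimmed.add(lba)  # nothing is physically erased here

    def write(self, lba, data):
        self.trimmed.discard(lba)  # sector is in use again
        self.media[lba] = data

    def read(self, lba):
        if lba in self.trimmed:
            return b"\x00" * SECTOR_SIZE  # instant zeros, no seek to the media
        return self.media.get(lba, b"\x00" * SECTOR_SIZE)

drive = TrimAwareDrive()
drive.write(100, b"\xAA" * SECTOR_SIZE)
drive.trim(100)
# Old data is still on the platters, but the LBA now reads back as zeros.
assert drive.read(100) == b"\x00" * SECTOR_SIZE
```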
 
When the OS requests the data in an LBA that has been TRIM-ed, an SMR drive instantly responds with zeros. It doesn't even bother to seek to the relevant sector. That's the way that WD's SMR drives appear to behave. If you want to see this for yourself, format an SMR drive and then run a HD Tune read benchmark against it.

The reason for this behaviour is that an SMR drive would normally need to rewrite shingles whenever a sector is modified. However, if the shingles have been TRIM-ed, then they don't need to be rewritten.
Ah, so, the controller won't even access the physical data. But then, how does the drive know which blocks HAVE been TRIMed? From what I understand, the OS is the one that tells the drive which blocks need it done. Do SMR drive controllers themselves keep track of the ones that have been "erased", removing a sector from that list every time data gets written to one? Would be a bit of overhead that a CMR drive controller doesn't have but would certainly be worth it to eliminate needing to actually zero the sectors. Seems like adding it to CMR drives could slightly help performance, too, though.

I assume SSDs must do that too, since an "erased" block is considered neither 0 nor 1 but is physically a 1 in the ground state.

However, that still depends on the TRIM command having been issued, so there is still a window during which the data could be accessed after deletion, and that window is likely wider than it would be with an SSD, due to the other background maintenance/management activities that an SSD controller performs.

Seems like that benchmark ought to basically max out the SATA interface if it's just reading data directly from the controller. It's faster than a physical read would be even streaming sequentially on CMR drives, but still seems like the controller interface is a bottleneck not capable of taking full advantage of the SATA 6Gbps specs.
 
For an HDD, keeping track of such things isn't required for performance, since the write speed to randomly filled or zeroed sectors would be exactly the same. SSDs need TRIM to keep write performance high.

This is all getting rather off-topic: it doesn't matter how the drive should behave, it is behaving oddly, so escalating to a more thorough next step only makes sense. I mean, assigning a letter only to have that letter disappear on reboot, and even go RAW, is not how it's supposed to work. OP is also no longer trying to recover data, only the use of their drive.

The manufacturer's zero-fill utility may also work a bit differently or at least be more aware of quirks in their own disks than 3rd party utilities using standard SATA commands.
 
For an HDD, keeping track of such things isn't required for performance, since the write speed to randomly filled or zeroed sectors would be exactly the same. SSDs need TRIM to keep write performance high.

The manufacturer's zero-fill utility may also work a bit differently or at least be more aware of quirks in their own disks than 3rd party utilities using standard SATA commands.
The HDD keeps track of TRIM-ed sectors for write performance reasons, as I have already explained.

Zero-fill involves filling every sector with zeros. A secure erase would do the same thing by way of a single standard ATA command. I can't see how there could be any vendor-specific way to improve upon this.

Edit:

It would be possible to TRIM the entire drive. The drive would then serve up zeros for every sector. That would probably take less than 1 second.
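For reference, on Linux a whole-device TRIM is what the blkdiscard utility issues via the BLKDISCARD ioctl. A rough Python sketch of the same idea, with /dev/sdX as a placeholder device name (and the obvious warning that this throws away everything on it):

```python
# Rough sketch: discard (TRIM) an entire block device on Linux. Run as root.
# WARNING: this destroys all data on the device.
import fcntl
import os
import struct

BLKDISCARD = 0x1277   # _IO(0x12, 119) from <linux/fs.h>
DEVICE = "/dev/sdX"   # placeholder; put the real device name here

fd = os.open(DEVICE, os.O_WRONLY)
try:
    length = os.lseek(fd, 0, os.SEEK_END)                       # device size in bytes
    fcntl.ioctl(fd, BLKDISCARD, struct.pack("QQ", 0, length))   # discard the whole range
finally:
    os.close(fd)
```

Whether every sector then reads back as zeros depends on the drive reporting deterministic read-zeros-after-TRIM, which is exactly the behaviour being discussed above.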
 
The HDD keeps track of TRIM-ed sectors for write performance reasons, as I have already explained.
Yes of course it must keep track of what has been TRIM-ed, but evermorex was specifically asking if it also needs to keep track of what has been erased/zeroed.

Let me rephrase that to be clear: SMR drives need the TRIM but not the zeroing, since unlike an SSD the write performance would be exactly the same whether those sectors are filled with zeroes or not. The write is only faster because the TRIM allows the SMR drive to ignore, and thus not have to rewrite, any data adjacent to the target sector in all of the overlapping shingle rows above it--that is, the TRIM simply marks those areas as available and the drive will pretend/report they are filled with zeroes. Therefore there should be no need to keep track of sectors that have been TRIM-ed but not erased, only whether they have been TRIM-ed at all.
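A toy model of that write path, just to make the saving concrete (the zone size and per-sector granularity are invented for the example; real SMR bands and firmware are far more complicated):

```python
# Toy model of an SMR zone: updating sector k forces a read-back and rewrite of
# every following sector in the zone, except the ones already TRIMed.

ZONE_SECTORS = 8

def sectors_to_rewrite(target, trimmed):
    """Sectors after `target` that must be read back and rewritten."""
    return [s for s in range(target + 1, ZONE_SECTORS) if s not in trimmed]

# Nothing TRIMed: touching sector 2 drags the rest of the zone along with it.
print(sectors_to_rewrite(2, trimmed=set()))            # [3, 4, 5, 6, 7]

# Everything after it TRIMed: nothing to rewrite, the update is cheap.
print(sectors_to_rewrite(2, trimmed={3, 4, 5, 6, 7}))  # []
```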

Each drive manufacturer does different things with the service area, which probably shouldn't be touched by a zero-fill, but we know sometimes it is. Some will put the firmware in there, and some will write the SMART data there--which is handy for those drive "refurbishers" if power-on hours can be cleared without having to flash new firmware. This is of course a cat-and-mouse game involving some rather poorly documented features; just recently, after people started to cross-verify the authenticity of Seagate drive hours against their FARM logs, tools to write to the FARM logs have become popular.
 
Yes of course it must keep track of what has been TRIM-ed, but evermorex was specifically asking if it also needs to keep track of what has been erased/zeroed.
how does the drive know which blocks HAVE been TRIMed?
Do SMR drive controllers themselves keep track of the ones that have been "erased"
Quotation marks matter. I didn't mean physically erased, just recorded as such, since at that point I'd learned that it doesn't physically zero/erase the sectors. I guess the term "erased" shouldn't be used when talking about SMR drives (or even physically zeroed CMR drives, though it would apply at the higher level of the data itself), but it's easier than typing TRIMed or TRIM-ed or TRIM'd. But keeping track of those TRIMed sectors does add some small amount of overhead to the controller compared to a CMR drive, which doesn't keep track of anything at all, since knowing whether an address/sector has data in it is a filesystem/OS-level thing.

We also shouldn't call it zeroing when referring to an SSD, since an erased cell is physically a 1 with NAND flash, and it's not being done in order to "format" the sector for the filesystem. It's being erased in the sense of being set to "neither 0 nor 1" logically, even though it would be detected as a 1 if read physically, and the SSD controller also needs to keep track of TRIMed blocks, which are now available for use in re-mapping logical blocks. Different reason than with an SMR drive, but the same sort of performance effect.
 
Would be a bit of overhead that a CMR drive controller doesn't have but would certainly be worth it to eliminate needing to actually zero the sectors. Seems like adding it to CMR drives could slightly help performance, too, though.
I merely agreed with your bit of speculation here for SMR drives. CMR drives, though, can write directly to a sector without affecting nearby ones, so they don't have to read and then rewrite those too, which is what slows down SMR drives--so TRIM shouldn't be needed for CMR (sectors are already marked "available" by the filesystem). Similarly, flash can only erase whole blocks, so writes slow way down if there aren't enough free blocks, because each write would then require a read first and a rewrite of the entire block with the modified data. With free blocks available it only needs to write.

I have a lot of old SSDs/SATA controllers/OSes that do not support TRIM, and writes eventually slow to a crawl--much worse than any HDD. When that happens, I use a command-line utility to fill only the free space, and the performance instantly returns afterwards. Both sdelete and Windows' own cipher command can write either zeroes (0x00) or 255s (0xFF); I have always selected all zeroes, which works on every SSD I have. While some SSD firmware might instead consider all 1s to be empty, all that I have run across have been smart enough to at least see all 0s as empty too.
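For what it's worth, a bare-bones sketch of what such a free-space fill amounts to (the file name and chunk size here are arbitrary placeholders; sdelete and cipher do considerably more than this):

```python
# Minimal sketch of a free-space zero-fill: grow a file of zeros until the
# volume is full, then delete it so the space goes back to the filesystem.
import os

def zero_free_space(path="D:\\zerofill.tmp", chunk=64 * 1024 * 1024):
    buf = b"\x00" * chunk
    f = open(path, "wb", buffering=0)   # unbuffered, so writes go straight to the OS
    try:
        while True:
            f.write(buf)                # keep writing zeros...
    except OSError:
        pass                            # ...until the volume is full (ENOSPC)
    finally:
        os.fsync(f.fileno())            # push everything out to the media
        f.close()
        os.remove(path)                 # deleting the file frees the space again

# zero_free_space()   # uncomment to run; temporarily fills the D: volume
```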

And yes, in SLC a 0 is usually the fully charged state, so ultimately leakage should turn everything into 1s. Things are way more complicated in TLC and QLC, where multiple voltage threshold levels exist, so values can leak down to data states other than just 1, and it's up to error detection to determine whether things have become corrupted.
 
Both sdelete and Windows' own cipher command can write either zeroes (0x00) or 255s (0xFF) and I have always selected all zeroes which works on every SSD I have.
That seems odd to me, because without TRIM the SSD controller itself should consider those cells as "used", since they were written to by the OS rather than being "erased" by the controller. They aren't recorded by the controller as "free", because the OS never tells the SSD to mark them that way, and if data later needs to be written to them the controller has to assume that there is still old data in them and go through the erase-then-write process--even if a cell was technically already uncharged (1), and definitely if it was already charged (0). Even if the controller re-maps that logical address to a new block, this would happen the next time it needed to use the original one, unless its own garbage collection routine ran and erased blocks not currently mapped to logical addresses (spare/overprovisioned space being available).

In fact, I'd expect that running sdelete or cipher on free space would actually force the controller to remap as many of those logical blocks as possible to unmapped blocks in order to write the new data (with every pass using new blocks), so that not just the original blocks but the new ones end up holding data that hasn't been erased the way TRIM would do it. Then it would be up to the built-in garbage collection to go back and erase the blocks that aren't currently mapped (thereby making them available for fast writes after that point). Performance would jump at that point, but it ought to take potentially hours depending on capacity (and it causes more wear, but on those older drives without TRIM there's no choice). If garbage collection is working properly and given enough idle time, all of that ought to be happening in the background anyway, and if it wasn't working right before, I don't know why it would suddenly work just because you wrote all that data.
 
It is completely weird, isn't it? You'd expect writing new data to the SSD would mark those sectors as the freshest possible and therefore not something to be discarded, but these utilities actually both fill all the available space and then delete the giant file they made afterwards.

Perhaps if the drive knows an entire block has been deleted, then its contents can all be placed into a pool to be bulk erased by garbage collection later, when the drive is idle, no TRIM required--although a TRIM command to explicitly instruct the drive to discard the data may hasten this process. The OS filesystem has no clue about how the drive is arranged internally, but if a file is altered, it gets written to a new page and the old version is marked as stale. It's more complicated if a file is deleted, as the OS merely marks the filesystem clusters occupied by the deleted file as overwritable, only updating the Master File Table and metadata while leaving the file contents themselves on the media--but of course the OS can then send a TRIM command to make sure the drive knows they are marked for deletion, or else schedule a periodic whole-drive TRIM to prune expired data.

The problem of course is that an SSD can write individual pages but cannot erase them individually--only entire blocks of hundreds of pages.
When a page needs to be written, it is first written to a free block; if there are none, an entire block's worth of stale pages is (slowly!) consolidated and erased to make room, and any good pages that were in those blocks are combined with the new page and written into the one block. Garbage collection's job is to find the oldest stale pages, as the SSD can know when pages were last written, but if all available pages now contain stale data of the same age, then perhaps the drive is smart enough to just treat all the stale data as equally deletable/available.

Garbage collection works behind the scenes to compact all of the good pages into blocks and stale pages into their own blocks, so that blocks made entirely of stale pages can be quickly erased when needed without having to slowly move a bunch of stuff around later. Without TRIM it can't be entirely sure whether pages are actually valid or stale, so it's reasonable to treat the stale pages modified the longest time ago as the least valuable. TRIM makes things way more efficient because then garbage collection can erase stale pages right away instead of moving them around in the background just in case.

Technically, the discarded data may actually remain on the SSD, as garbage collection needn't zero it out if the entire block is marked for erasure, since that can be done quickly. However, the TRIM command unlinks the pages in the block from the LBA-to-PBA map/table, so the SSD no longer knows where the things that were in there are, and any query for unmapped sectors immediately returns all zeroes without even attempting to read the flash, much like the TRIMmed SMR disk. Unlike the HDD, though, even the factory probably couldn't reassemble files from the tiny pieces scattered all over the flash.
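To tie the last few paragraphs together, here is a deliberately tiny flash-translation-layer model (page/block sizes and all the names are invented; real controllers are far more elaborate): pages are written out of place into free pages, TRIM merely unmaps LBAs, reads of unmapped LBAs return zeros, and garbage collection erases blocks whose written pages are all stale.

```python
# Toy FTL model: out-of-place page writes, TRIM as unmapping, zero reads for
# unmapped LBAs, and whole-block erase of blocks containing only stale pages.

PAGES_PER_BLOCK = 4
PAGE_SIZE = 16

class ToySSD:
    def __init__(self, blocks=4):
        self.flash = [[None] * PAGES_PER_BLOCK for _ in range(blocks)]  # physical pages
        self.stale = set()      # physical pages no LBA points at any more
        self.map = {}           # LBA -> (block, page)
        self.cursor = (0, 0)    # next free physical page

    def _next_free(self):
        blk, pg = self.cursor
        self.cursor = (blk, pg + 1) if pg + 1 < PAGES_PER_BLOCK else (blk + 1, 0)
        return blk, pg

    def write(self, lba, data):
        if lba in self.map:
            self.stale.add(self.map[lba])       # old copy becomes stale, not erased
        loc = self._next_free()
        self.flash[loc[0]][loc[1]] = data       # out-of-place write to a free page
        self.map[lba] = loc

    def trim(self, lba):
        if lba in self.map:
            self.stale.add(self.map.pop(lba))   # unmap only; the data stays in flash

    def read(self, lba):
        if lba not in self.map:
            return b"\x00" * PAGE_SIZE          # unmapped LBA: instant zeros
        blk, pg = self.map[lba]
        return self.flash[blk][pg]

    def garbage_collect(self):
        for blk, pages in enumerate(self.flash):
            written = {(blk, pg) for pg in range(PAGES_PER_BLOCK) if pages[pg] is not None}
            if written and written <= self.stale:          # every written page is stale
                self.flash[blk] = [None] * PAGES_PER_BLOCK # whole-block erase
                self.stale -= written

ssd = ToySSD()
for lba in range(4):
    ssd.write(lba, bytes([lba]) * PAGE_SIZE)    # fill block 0
for lba in range(4):
    ssd.trim(lba)                               # unmap everything in block 0
assert ssd.read(0) == b"\x00" * PAGE_SIZE       # zeros, though the data is still in flash
ssd.garbage_collect()                           # block 0 is now erased and reusable
```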

If you think about it, modern firmware may even be smart enough to see you are writing all zeroes and refrain from actually storing them, only marking an LBA range as filled with zeroes, essentially compressing the data, since all zeroes compresses rather a lot. As you say, the conundrum is that TRIM unmaps all the LBA-to-PBA associations while zero-filling "should" map them all, the opposite of TRIM. So the true answer is we don't know, but some drive engineer who worked there probably does. That it works, though, is probably more useful than how it should or shouldn't work.

Of course now we've entirely derailed the thread and poor OP is wondering what this has to do with their HDD. I wonder why they haven't been by recently.