SparkyTech934

Hi All,

I am running Windows Server 2019 Standard on my HP Z640 Workstation, and this morning I noticed that my machine had crashed/rebooted. Looking at the logs, it seems it was a critical Kernel-Power error, so it was most likely a temporary power outage or similar issue. Per usual in these cases, I did a once-over to make sure everything was good: SFC scan, HWiNFO64 check, etc.
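For anyone wanting to check the same thing, a filter like this against the System log should pull up the relevant events (event ID 41 is the Kernel-Power "rebooted without cleanly shutting down" one); treat it as a rough sketch rather than anything polished:

    # List recent unexpected-shutdown events (Kernel-Power, ID 41) from the System log
    Get-WinEvent -FilterHashtable @{
        LogName      = 'System'
        ProviderName = 'Microsoft-Windows-Kernel-Power'
        Id           = 41
    } -MaxEvents 10 | Select-Object TimeCreated, Id, Message | Format-List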

I noticed that one of my 4 WD Gold Re HDDs (in a RAID 5 array with the other 3, formatted as NTFS) is now reporting some warnings within the SMART values. These are as follows:
  • [C5] Current Pending Sector Count: 200/Always OK, Worst: 200 (Data = 15,0)

  • [C6] Off-Line Uncorrectable Sector Count: 200/Always OK, Worst: 200 (Data = 14,0)
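For what it's worth, those C5/C6 readings came from the HWiNFO64 check. Windows doesn't expose the raw SMART attributes the same way, but if the individual member disks are visible to the OS at all (behind a hardware RAID controller they often are not), something along these lines can at least surface the built-in reliability counters; consider it a sketch, not something I've verified against this controller:

    # Built-in per-disk reliability counters (not the raw C5/C6 attributes, just
    # related error counts; member disks hidden behind a RAID controller may not appear)
    Get-PhysicalDisk |
        Get-StorageReliabilityCounter |
        Select-Object DeviceId, ReadErrorsUncorrected, ReadErrorsTotal, Temperature, PowerOnHours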
My immediate response (besides being bummed that the drive will need to be replaced) was to shut down my VM, which keeps its data on a VHDX within the RAID volume, and begin a chkdsk scan. For reference, the actual command I issued was chkdsk D: /f /r.
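Side note on the flags, if I have them right: /f /r needs exclusive access to the volume (hence shutting the VM down first), and /r is the part that does the full surface scan for bad sectors. Roughly:

    chkdsk D:            # read-only check; reports problems but fixes nothing
    chkdsk D: /scan      # online scan on newer NTFS; runs without taking the volume offline
    chkdsk D: /f /r      # fixes filesystem errors and re-reads the surface; needs the volume locked/dismounted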

Now to the questions. Does the chkdsk operation cover my bases for recovering the data from these sectors? If I understand correctly, NTFS should be able to detect the corruption and repair it, yes? I of course don't know which sectors are affected, what data could have been on them, etc., but I would prefer to know that I recovered it. I also run a Windows Server Backup job daily, so I am curious if I can 'compare' to the backup in order to recover the affected data if need be. I would love to hear some suggestions on how to verify what data was lost and how I can properly recover it. I suppose that when I replace the drive, the RAID rebuild should also restore the lost sectors from parity data, correct?
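To make the 'compare to the backup' idea concrete, what I had in mind was roughly this (the version timestamp and file paths below are just placeholders, and I haven't actually run it yet): list the available backup versions with wbadmin, restore the suspect file to a separate location, and hash the two copies against each other.

    # List available Windows Server Backup versions (note the version identifier)
    wbadmin get versions

    # Restore one file from a specific version to a separate location
    # (version timestamp and paths are placeholders)
    wbadmin start recovery -version:01/22/2020-09:00 -itemType:File -items:D:\VMs\MyVM.vhdx -recoveryTarget:E:\Restore

    # Compare the live file against the restored copy; differing hashes mean the live copy changed
    Get-FileHash D:\VMs\MyVM.vhdx, E:\Restore\MyVM.vhdx -Algorithm SHA256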

I figure that the bad sectors won't really affect me too badly, but I'd like to use this as a learning experience and make sure I get the data back if I can. Thanks in advance for your suggestions!
 
USAFRet

Titan
Moderator
With a RAID 5 and a single degraded drive, you simply take the system offline, replace the bad drive (with something of equal size), and let the RAID rebuild itself.

The rebuild may take approx 1.5 hours per TB.
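To put a rough number on that: assuming these are 4 TB drives (the capacity wasn't stated above), 1.5 hours per TB works out to roughly 6 hours for the rebuild, give or take depending on the controller and how busy the array is.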
 
Solution