On my main setup I have two 1TB WD HDDs, a Blue and a Green. (The OS is installed on a separate SSD.)
Sometimes while moving anything to/from the Green WD, Windows unmounts my ReFS-formatted volume on that drive (it used to do the same when it was NTFS). When this happens:
-the volume appears unmounted in File Explorer.
-Disk Management won't load unless I reboot first or detach the disk's cable.
-the disk still appears connected in Device Manager.
-it also shows up in Task Manager, at 100% activity.
After rebooting, the disk still refuses to mount. Disk Management does load at that point, but it shows the disk as uninitialized and asks me to initialize it, which fails with an "internal I/O error".
So far, the only way to get the disk to mount again is to leave it disconnected from power for a few minutes.
I'm pretty sure this is not a system-wide power problem, as some sources suggest for "internal I/O errors": if it were, it would affect my Blue HDD as well, and that drive shows no such behavior.
I checked the S.M.A.R.T. status with CrystalDiskInfo, but all the attributes show the disk as healthy, with no bad blocks reported.
Windows Event Viewer shows yellow-triangle warnings like this actual sample:
(The IO operation at logical block address 0x4eec4288 for Disk 2 (PDO name: \Device\00000035) was retried.)
Event ID: 153
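As an aside, the LBA in that Event 153 warning tells you roughly where on the disk the retries are happening, which can hint at whether they cluster around one spot on the media. A minimal sketch of the conversion, assuming 512-byte logical sectors (typical for 1TB drives of this era, but worth confirming with something like `fsutil fsinfo sectorinfo E:`):

```python
# Convert the LBA from the Event 153 message into a byte offset on the disk.
# The 512-byte sector size is an assumption, not something read from the log.
lba = 0x4EEC4288          # logical block address quoted in the event
sector_size = 512         # bytes per logical sector (assumed)

offset_bytes = lba * sector_size
offset_gib = offset_bytes / 2**30

print(f"LBA {lba:#x} = {lba} -> {offset_gib:.1f} GiB into the disk")
# prints: LBA 0x4eec4288 = 1324106376 -> 631.4 GiB into the disk
```

If repeated Event 153 warnings land in the same narrow LBA range, that suggests a weak spot on the platter; if they are scattered across the disk, a cable, controller, or power delivery problem becomes more likely.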
along with one ReFS-related error:
(The file system was unable to write metadata to the media backing volume E:. A write failed with status "The I/O device reported an I/O error." ReFS will take the volume offline. It may be mounted again automatically.)
Event ID: 134
I know that ReFS will take a volume offline to repair it automatically; that's why I switched this drive from NTFS to ReFS. The HDD has been acting like this for a while now, yet nothing suspicious shows up in its S.M.A.R.T. status. (Is S.M.A.R.T. even reliable/honest/relevant anymore? I've dealt with two HDDs that were acting goofy, yet neither reported anything negative in S.M.A.R.T. ...)
OS: Windows 10 1809