Question Windows 10 Spanned Volume Suddenly Missing (Dynamic Disks: "Invalid")


oblivioncth
Hi,

Normally I'd try quite a bit myself before posting here, but I'm afraid to try much since data is on the line. As far as I can tell this seems to be a software issue rather than a hardware problem, though I can't be 100% sure. I say this because both drives involved are fairly new, still show up fine in the BIOS and Device Manager, and report no SMART errors, so the issue is more likely something along the lines of partition corruption.

Also, I'd like to get this out of the way: I know I should have this data backed up, but unfortunately it's a large amount that isn't cheap to back up, so I was waiting a few months until I could afford another set of drives to occasionally mirror these to. Annoyingly, barely any time passed before this happened, so it goes without saying that I didn't get to the point where I could back these drives up.

I wish I could say exactly what happened, but unfortunately there is a small blank spot. Anyway, here is what I know happened:

  1. Originally had an 8TB drive (ST8000DM004), bought a few months back (within this year), that I was storing quite a bit on
  2. I needed more space for the same kind of data, so I bought another ST8000DM004, popped it into the same SATA controller as the other 8TB drive, converted both it and the existing disk to dynamic disks, and then extended the original volume into an extended/spanned volume across both drives for a total of 14.6TB of usable space
  3. Moved some data around and set up a large task of extraction and compression on this expanded volume
  4. Left home for a few days while this was processing. Couldn't remote into my machine because of an annoying issue with RDC that I've had happen before when my machine has gone a very long time without a reboot. So, I asked someone else at home to go to the PC, ensure it had finished its operations, and then let me know so I could give the OK to restart the machine
  5. This is where the "blank" occurs. They misunderstood what I said and restarted the machine, assuming the process was done, WITHOUT confirming with me. According to them, everything up on the machine was just completion windows that they clicked through, hitting "OK" on each, before restarting. So in theory the drives were not in use and the machine was doing nothing when it was restarted, but I have no way of knowing for sure and really wish I knew exactly what was on screen from the last time I had looked :/
  6. I remote into my machine no problem now that it has been restarted and immediately notice that the 14+ TB spanned volume isn't showing up
  7. Check Device Manager and both drives are there; check CrystalDiskInfo and see no S.M.A.R.T. oddities
  8. Opening Windows Disk Management shows both disks with no volumes/partitions; on the left it shows them with the little red and white down arrow, and they each say "Dynamic" and "Invalid"
  9. Opening AOMEI Partition Assistant shows the disks marked as dynamic and both drives as "Unallocated"
  10. Opening Acronis Disk Director shows the following errors, the last of which I believe refers to the drives themselves:
[Screenshot: Acronis Disk Director error messages]


I am hoping there is a way to repair the partition info that details how the volume is spanned across the drives, or at the very least recover the data a bit at a time and offload it elsewhere (though to be honest I'm not sure where I can fit this much). I believe this was the first time the PC was restarted since the second drive was installed and the volume was spanned, so it could be that there was an issue with how the drive info was initially written to the LDM database, and now that the system has been rebooted Windows cannot enumerate the disks correctly due to a corrupted LDM.

Thanks for any suggestions. Desperate here.

EDIT (More Info):
I'm mostly certain at this point that this is some dumb (but challenging) issue with the volume/partition information alone being corrupted, as I am able to open both disks in HxD and all of the data seems to be there and is not garbage. Obviously I can't confirm 100% integrity of ~12TB of data, but searching for various string values I know are in certain files on those drives returns results, and there are no errors when trying to access the drives.
 
ISTM that one critical structure is still missing. AFAICT, the initialisation process zeroed out the PRIVHEAD sector (the first sector) in the LDM metadata partition. Everything else in that partition seems to have been left untouched. Fortunately there are two backup copies of PRIVHEAD near the end of the metadata. ISTM that this PRIVHEAD copy should be reinstated by copying it to sector 260130.

However, I still think there is more to do. There appear to be 3 GUIDs in PRIVHEAD, and I expect that these should match the GUIDs in sector 2. I'm still studying this, though. The earlier LDM reference I linked to appears to be for MBR based dynamic volumes, not GPT, so we need to keep this in mind.
 

oblivioncth
Unfortunately that didn't cut it, though it did have some kind of effect. The drives now show up like this:
[Screenshots: the drives' current state in the partitioning tools]


I uploaded the current state of the LDM data to the same folder I mentioned previously if you want to double-check it.

Tomorrow or Wednesday I'll see if I can get that other span going so we can see if anything is made clear by a working example. If not, I'll just have to figure out where I'm going to dump all the data one way or another.
 

oblivioncth
Moving your data to a safe space offline should be the first thing, not 'figure out later'.
Yes of course, I stated this in the OP. Unfortunately I can't afford the amount of storage that would be required to back up what I'm storing on these drives at the moment. In time I will be able to, and I've known the whole time that I've been taking a risk until then. That's just the way it is.
 

USAFRet (Moderator)
Yes of course, I stated this in the OP. Unfortunately I can't afford the amount of storage that would be required to back up what I'm storing on these drives at the moment. In time I will be able to, and I've known the whole time that I've been taking a risk until then. That's just the way it is.
I get that.
The comment was also for other people who will come across this thread in years to come.
 
Your results are as I expected. The data partitions are in the right place, but the LDM Metadata partition needs to be edited. There is a sector with a PRIVHEAD signature near the end of each 260130_2048 dump. This copy needs to be restored to sector 260130 (offset 0 in the dump).

You can see an example of the PRIVHEAD sector here:

https://stackoverflow.com/questions/8427372/windows-spanned-disks-ldm-restoration-with-linux

Note that DMDE has a Copy Sectors command, but be careful when specifying the destination.
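If the GUI step worries you, the same single-sector copy can also be scripted against the raw disk. A minimal sketch using the Windows API (untested; run it elevated, and the drive path and LBAs below are placeholders you must confirm against your own dumps first):

Code:
// copy_sector.cpp - single-sector copy on a raw physical drive (sketch).
// Run elevated; the drive path and LBAs are placeholders.
#include <windows.h>
#include <cstdio>

int main()
{
    const wchar_t* drive = L"\\\\.\\PhysicalDrive3";   // adjust!
    const LONGLONG srcLba = 262177;   // e.g. the backup PRIVHEAD near the end of the dump
    const LONGLONG dstLba = 260130;   // start of the LDM metadata partition
    const DWORD sector = 512;

    HANDLE h = CreateFileW(drive, GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                           OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) { std::printf("open failed: %lu\n", GetLastError()); return 1; }

    alignas(512) BYTE buf[512];
    LARGE_INTEGER pos;
    DWORD n = 0;

    pos.QuadPart = srcLba * sector;   // read the source sector
    if (!SetFilePointerEx(h, pos, nullptr, FILE_BEGIN) ||
        !ReadFile(h, buf, sector, &n, nullptr) || n != sector)
    { std::printf("read failed: %lu\n", GetLastError()); return 1; }

    pos.QuadPart = dstLba * sector;   // write it to the destination sector
    if (!SetFilePointerEx(h, pos, nullptr, FILE_BEGIN) ||
        !WriteFile(h, buf, sector, &n, nullptr) || n != sector)
    { std::printf("write failed: %lu\n", GetLastError()); return 1; }

    CloseHandle(h);
    std::printf("copied LBA %lld -> %lld\n", srcLba, dstLba);
    return 0;
}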
 

oblivioncth
Your results are as I expected. The data partitions are in the right place, but the LDM Metadata partition needs to be edited. [...]

Gah, that didn't seem to do anything. To confirm: I copied the backup of the PRIVHEAD sector at sector 262177 to sector 260130 on each respective drive. I did it directly in HxD to ensure I was using the correct destination. I wonder if it has to do with the GUIDs no longer matching. Hoping that a working example proves useful.

EDIT:
I'm not sure how much of this applies to GPT-based disks, but here is some nice info on the PRIVHEAD sector: https://www.apriorit.com/dev-blog/345-dynamic-disk-structure-parser

The FILETIME for the sector on Drive 4 converts to "Sunday, July 28, 2019 4:05:40am", which sounds about right for when I originally created the volume.
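Assuming this field really is a standard Windows FILETIME (100ns ticks since 1601-01-01 UTC), which the sensible date suggests, the conversion is easy to sanity-check in code. A quick sketch:

Code:
// filetime_check.cpp - decode the PRIVHEAD timestamp as a FILETIME (sketch).
#include <windows.h>
#include <cstdio>

int main()
{
    ULONGLONG ticks = 0x01D544F9B828243FULL;   // value dumped from Drive 4

    FILETIME ft;
    ft.dwLowDateTime  = (DWORD)(ticks & 0xFFFFFFFF);
    ft.dwHighDateTime = (DWORD)(ticks >> 32);

    SYSTEMTIME st;
    FileTimeToSystemTime(&ft, &st);            // result is UTC
    std::printf("%04u-%02u-%02u %02u:%02u:%02u UTC\n",
                st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    // Should print 2019-07-28 04:05:40 for this value.
    return 0;
}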

I was able to reorder some things I'm doing, so I went right for the drives. I found two old 100GB SATA drives that I'm going to make a spanned volume across, and then I'll examine them and dump all of the same primary sectors that we've lifted off of my borked drives.

EDIT 2:
You may have checked this information already, but...
Code:
ALL LDM DATABASE DATA IS BIG ENDIAN

DRIVE 4 @ 260 130

struct PrivHead
{
    uint8_t magic[8]; = "PRIVHEAD"
    uint32_t checksum; = 0x0000310F
    uint16_t major; = 0x0002
    uint16_t minor; = 0x000C
    uint64_t timestamp; = 0x01D544F9B828243F ->  Sunday, July 28, 2019 4:05:40am (https://www.silisoftware.com/tools/date.php)
    uint64_t sequenceNumber; = 0x0000000000000003
    uint64_t primaryPrivateHeaderLBA; = 0x00000000000007FF
    uint64_t secondaryPrivateHeaderLBA; = 0x0000000000000740
    uint8_t diskId[64]; = "e8aa8e70-e6b3-b94c-97e0-46945532e999............"
    uint8_t hostId[64]; = "1b77da20-c717-11d0-a5be-00a0c91db73c............"
    uint8_t diskGroupId[64]; = "cf3cc96b-1b67-bc4e-ac4c-7fbece90ad40............"
    uint8_t diskGroupName[31]; = "TrueImageDg0............" (Strange size, 31 bytes)
    uint32_t bytesPerBlock; = 0x00000200
    uint32_t privateHeaderFlags; = 0x00000000
    uint16_t publicRegionSliceNumber; = 0x0000
    uint16_t privateRegionSliceNumber; = 0x0000
    uint64_t publicRegionStart; = 0x0000000000040022/262 178 <- THIS IS START SECTOR ON DISK 3!
    uint64_t publicRegionSize; = 0x00000003A37D2A6D/15 627 790 957 <- LARGER THAN THE REPORTED/CALCULATED SIZE FOR BOTH DISKS
    uint64_t privateRegionStart; = 0x000000000003F822/260 130 <- Start sector we used for this drive (Disk 4)
    uint64_t privateRegionSize; = 0x0000000000000800/2048
    uint64_t primaryTocLba; = 0x0000000000000002
    uint64_t secondaryTocLba; = 0x00000000000007FD/2045
    uint32_t numberOfConfigs; = 0x00000001
    uint32_t numberOfLogs; = 0x00000001
    uint64_t configSize; = 0x00000000000005C9/1481
    uint64_t logSize; = 0x00000000000000E0/224 (Thought this would be 64MB, must be another log)
    uint8_t diskSignature[4]; = 0x0FC7D4CF/264754383/".ÇÔÏ"
    Guid diskSetGuid; <- Unknown, data after diskSignature is all 00
    Guid diskSetGuidDuplicate; <- Unknown, data after diskSignature is all 00
};


DRIVE 3 @ 260 130

struct PrivHead
{
    uint8_t magic[8]; = "PRIVHEAD"
    uint32_t checksum; = 0x000031AE
    uint16_t major; =  0x0002
    uint16_t minor; = 0x000C
    uint64_t timestamp; = 0x01D544F9B828243F ->  Sunday, July 28, 2019 4:05:40am (https://www.silisoftware.com/tools/date.php)
    uint64_t sequenceNumber; = 0x0000000000000003
    uint64_t primaryPrivateHeaderLBA; = 0x00000000000007FF
    uint64_t secondaryPrivateHeaderLBA; = 0x0000000000000740
    uint8_t diskId[64]; = "e6a400ac-b8bd-4b48-a53d-a3832ff994b7............"
    uint8_t hostId[64]; = "1b77da20-c717-11d0-a5be-00a0c91db73c............" <- Matches the ID on Disk 4
    uint8_t diskGroupId[64]; = "cf3cc96b-1b67-bc4e-ac4c-7fbece90ad40............" <- Matches the ID on Disk 4
    uint8_t diskGroupName[31]; = "TrueImageDg0............" (Strange size, 31 bytes)
    uint32_t bytesPerBlock; = 0x00000200/512 
    uint32_t privateHeaderFlags; = 0x00000000
    uint16_t publicRegionSliceNumber; = 0x0000
    uint16_t privateRegionSliceNumber; = 0x0000
    uint64_t publicRegionStart; = 0x0000000000040022/262 178 <- Start sector we used for this disk
    uint64_t publicRegionSize; = 0x00000003A37D2A6D/15 627 790 957 <- LARGER THAN THE REPORTED/CALCULATED VOLUME SIZE FOR BOTH DISKS. Matches Disk 4
    uint64_t privateRegionStart; = 0x000000000003F822/260 130 <- MISMATCH BETWEEN SECTOR START WE USED FOR THIS DISK (publicRegionStart), AND SECTOR START WE USED FOR DISK 4 (privateRegionStart)
    uint64_t privateRegionSize; = 0x0000000000000800/2048
    uint64_t primaryTocLba; = 0x0000000000000002
    uint64_t secondaryTocLba; = 0x00000000000007FD/2045
    uint32_t numberOfConfigs; = 0x00000001
    uint32_t numberOfLogs; = 0x00000001
    uint64_t configSize; = 0x00000000000005C9/1481
    uint64_t logSize; = 0x00000000000000E0/224 (Thought this would be 64MB, must be another log)
    uint8_t diskSignature[4]; = 0x0FC7D4CF/264754383/".ÇÔÏ"
    Guid diskSetGuid; <- Unknown, data after diskSignature is all 00
    Guid diskSetGuidDuplicate; <- Unknown, data after diskSignature is all 00
};
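Since the database is big endian while x86 is little endian, any tool reading these sectors has to byte-swap every field. A minimal sketch of the kind of helper needed; the offsets are derived by summing the field sizes in the packed struct above, so treat them as assumptions to verify:

Code:
// privhead_fields.cpp - pull big-endian fields out of a dumped PRIVHEAD sector (sketch).
#include <cstdint>
#include <cstdio>
#include <cstring>

static uint16_t be16(const uint8_t* p) { return (uint16_t)((p[0] << 8) | p[1]); }
static uint32_t be32(const uint8_t* p) { return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) | ((uint32_t)p[2] << 8) | p[3]; }
static uint64_t be64(const uint8_t* p) { return ((uint64_t)be32(p) << 32) | be32(p + 4); }

int main()
{
    uint8_t sec[512] = {};                    // fill this from a dump of sector 260130
    std::memcpy(sec, "PRIVHEAD", 8);          // placeholder so the sketch runs standalone

    // Offsets computed by summing field sizes in the struct above (no padding assumed).
    std::printf("magic            : %.8s\n", sec);
    std::printf("checksum         : 0x%08X\n", be32(sec + 0x008));
    std::printf("version          : %u.%u\n", be16(sec + 0x00C), be16(sec + 0x00E));
    std::printf("timestamp        : 0x%016llX\n", (unsigned long long)be64(sec + 0x010));
    std::printf("diskId           : %.64s\n", sec + 0x030);
    std::printf("publicRegionStart: %llu\n", (unsigned long long)be64(sec + 0x11B));
    std::printf("privRegionStart  : %llu\n", (unsigned long long)be64(sec + 0x12B));
    return 0;
}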
 
Last edited:

oblivioncth
Could it be that the sector ranges I entered at the beginning and end of the drive should have been in big-endian format? Those had more to do with the GPT-standard part of the structure rather than the LDM, though they were under the LDM entry section, but it's a thought.

More info:
* Note * - All offset regions declared after "privateRegionStart" are based upon it (privateRegionStart = 0x00, relatively speaking), i.e. the "log" start is actually (260 130 + 1498) * 512 = offset 0x7FBF800 -> sector 261,628: signature "KLOG"

This is described in more detail in that apriorit.com article (if we really need them I may try to build their tools; this documentation seems excellent).

Code:
TOCBLOCK (SAME ON BOTH DISKS)

struct TocBlock
{
    uint8_t magic[8]; = "TOCBLOCK"
    uint32_t checksum; = 0x000008B8
    uint64_t updateSequenceNumber; = 0x0000000000000003
    uint8_t zeroes[16]; = 0x0000000000000000 (duh)
    TocRegion config; = SEE BELOW
    TocRegion log; = SEE BELOW
};


struct TocRegion (CONFIG)
{
    uint8_t name[8]; = "config.."
    uint16_t flags; = 0x0000
    uint64_t start; = 0x0000000000000011
    uint64_t size; = 0x00000000000005C9/1481
    uint16_t unk1; = 0x0006 (Unknown?)
    uint16_t copyNumber; = 0x0001
    uint8_t zeroes[4]; = 0x00000000 (duh)
};

struct TocRegion (LOG)
{
    uint8_t name[8]; = "log....."
    uint16_t flags; = 0x0000
    uint64_t start; = 0x00000000000005DA/1498
    uint64_t size; = 0x00000000000000E0/224
    uint16_t unk1; = 0x0006 (Unknown?)
    uint16_t copyNumber; = 0x0001
    uint8_t zeroes[4]; = 0x00000000 (duh)
};
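Putting the note above together with these TOC entries, resolving the config and log regions to absolute sectors is just addition. A quick sketch using the dumped values:

Code:
// ldm_regions.cpp - resolve TOCBLOCK regions to absolute LBAs (sketch).
#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t privateRegionStart = 260130; // from PRIVHEAD on both disks
    const uint64_t configStart = 0x11;          // from TocRegion "config" (17)
    const uint64_t configSize  = 0x5C9;         // 1481 sectors
    const uint64_t logStart    = 0x5DA;         // from TocRegion "log" (1498)
    const uint64_t logSize     = 0xE0;          // 224 sectors

    // All TOC offsets are relative to the start of the private region.
    std::printf("config: LBA %llu..%llu\n",
                (unsigned long long)(privateRegionStart + configStart),
                (unsigned long long)(privateRegionStart + configStart + configSize - 1));
    std::printf("log   : LBA %llu..%llu\n",    // 260130 + 1498 = 261628, the KLOG sector
                (unsigned long long)(privateRegionStart + logStart),
                (unsigned long long)(privateRegionStart + logStart + logSize - 1));
    return 0;
}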

I needed to back up the disks I was going to use for the working-example spanned volume, and one of them has a ton of bad sectors and can't be read reliably, so I need to see if I have another. I think I may.

I also noticed that when I edit my posts too many times (damn typos) they get hidden for moderator approval, so sorry if our conversation has been a bit "hole-filled" because of that. I think I may have missed part of one of your posts for the same reason.

EDIT:
It is worth noting that, at least in the Partition Table MBR/GPT and FAT/FAT32/NTFS Bootrecord modes, DMDE doesn't seem to read the PRIVHEAD sector correctly. Also, I found another drive to use that seems good. The first drive is already backed up (they have old stuff on them that a family member wants kept just in case) and zeroed; the second one is zeroing now. I will set up the volume tomorrow.

EDIT 2:
I built the command-line tool that was provided in that link. It isn't detecting either drive (or both together) as LDM volumes, which I find interesting (is it simply a bug/regression? Or is it failing to detect them because of what we are missing?), and I am stepping through the source to see if I can figure out why.

As for the example volume, both drives are ready to go; I'm just updating the machine to Windows 10 (been needing to for a while anyway) so that both machines are on the same OS, on the off chance that they generate spanned volumes slightly differently.

EDIT 3:
Ah, as I feared, the tool is only able to interpret MBR disks. It seems the first thing the program tries to do after it determines a drive is an LDM-based volume is read the PRIVHEAD, and every offset after that is based on what the PRIVHEAD contains. So, I'll see if I can just hard-code the correct sector for where the PRIVHEAD is on these drives and see if the rest works out.

I also found this: https://docs.microsoft.com/en-us/sysinternals/downloads/ldmdump , but it returns "Disk does not have LDM database", and since it says it runs on Vista and higher, I'm guessing it too is expecting an MBR disk. This one has no source, so unfortunately it's a dead end.
 

oblivioncth
Phew, man, still lots to investigate, but I have the working-example span set up and have dumped the bulk of the respective sectors that we have looked at on the corrupted span. This was done using the same software I used to make 'Archive'. Since the drives were zeroed beforehand, I want to make sure I get everything that's on the drive, since it's 99% metadata (though there seem to be a few records of files on there that I haven't dumped yet, but they seem to relate to system volume information files like the $Logfile). What is super annoying is that no hex editor I can find, nor DMDE, has a function to advance to the next sector that doesn't contain just null bytes ('00') (seriously!?), so I had to manually scroll really fast near the beginning and end of each drive to get the data I have so far. I'm going to examine how the LdmParser accesses the raw disk and see if I can't use that API to write a quick command-line tool to do just that: you input a sector and it searches for the next sector of non-null data after that offset, and lets you repeat this until you hit the end of the drive.

TBH I can't believe this isn't a common feature, as it is the only way I can ensure I've found every bit of data on a mostly empty disk. I suppose developers figure that most of the time you're working with disks that aren't primarily empty, and that there is so much data in that situation that this function wouldn't be that useful; in this case, however, it would be incredibly useful.

For now, I have uploaded the dumps I have of the working example, along with a bit of info about the drives, to the Google Drive folder. This time I set the file names so that the dumps are in the same order that they appeared on the disk, which should make thinking through offsets a lot easier. Unfortunately, because that one drive had bad sectors and I had to use another, the drives for this span aren't the same size (100GB vs 300GB), but I don't think that should matter.

The start and end sectors in the GPT entries sector are definitely not supposed to be big endian. Also, interestingly, there is a discrepancy between the start sectors: in the working example, the 3rd GPT entry on both drives (L D M d a t a p a r t i t i o n) has sector 262 178 marked as the start. For the 'Archive' drives, Drive 3 also has sector 262 178 marked as the start, while Drive 4, as we know, is different and has sector 264 192 marked as the start. 262 178 is also the value of "publicRegionStart" in the PRIVHEAD sector for both Drive 3 AND Drive 4. This may just be a leftover property of the fact that this disk was once Basic and was then converted to Dynamic and not actually mean anything, but it may very well be a mismatched value between the GPT LDM entry and the PRIVHEAD value. Finally, both Drive 4 and Drive 3 of 'Archive' are missing their protective MBR sectors. While those are important for programs accessing the drive, I'm not so sure this would have any effect on modern Windows' ability to read the drives correctly.

EDIT:
Got a test program working that can read 512-byte sectors directly from the selected disk, so I'm going to make that utility I mentioned.
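For the curious, the core of the planned utility is nothing fancy: read the raw disk a chunk at a time and report the first sector that isn't all zeroes. A rough sketch of what I have in mind (the drive path is a placeholder):

Code:
// next_data.cpp - find the next non-zero 512-byte sector on a raw disk (sketch).
#include <windows.h>
#include <cstdio>
#include <vector>

int main()
{
    const wchar_t* drive = L"\\\\.\\PhysicalDrive3";   // adjust!
    const DWORD SECTOR = 512;
    const DWORD CHUNK  = 2048;                // sectors per read (1 MiB)
    ULONGLONG startLba = 0;                   // search from here

    HANDLE h = CreateFileW(drive, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                           nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    std::vector<BYTE> buf(CHUNK * SECTOR);
    LARGE_INTEGER pos; pos.QuadPart = (LONGLONG)(startLba * SECTOR);
    SetFilePointerEx(h, pos, nullptr, FILE_BEGIN);

    ULONGLONG lba = startLba;
    for (;;)
    {
        DWORD n = 0;
        if (!ReadFile(h, buf.data(), CHUNK * SECTOR, &n, nullptr) || n == 0)
            break;                            // end of disk (or read error)
        for (DWORD s = 0; s < n / SECTOR; ++s, ++lba)
        {
            const BYTE* p = buf.data() + s * SECTOR;
            bool zero = true;
            for (DWORD i = 0; i < SECTOR; ++i) if (p[i]) { zero = false; break; }
            if (!zero)
            {
                std::printf("first non-zero sector: %llu\n", lba);
                CloseHandle(h);
                return 0;
            }
        }
    }
    std::printf("no non-zero sectors found past %llu\n", startLba);
    CloseHandle(h);
    return 0;
}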
 

oblivioncth
It took forever because of an annoying issue with the OS reporting the disk sizes incorrectly, but I finished the utility. I will now finish dumping all of the useful data from the working-example volume and hopefully find any significant differences other than the few I listed already.

The tool is in the drive folder if it ends up being useful to anyone.
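In case anyone else hits similar size weirdness: one common culprit (not necessarily the bug I hit) is that the CHS-based geometry ioctl under-reports, while IOCTL_DISK_GET_LENGTH_INFO returns the exact byte length. A minimal sketch (the drive path is a placeholder):

Code:
// disk_length.cpp - query the exact byte length of a physical drive (sketch).
#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main()
{
    HANDLE h = CreateFileW(L"\\\\.\\PhysicalDrive3", GENERIC_READ,   // adjust!
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    GET_LENGTH_INFORMATION li;
    DWORD n = 0;
    if (DeviceIoControl(h, IOCTL_DISK_GET_LENGTH_INFO, nullptr, 0,
                        &li, sizeof(li), &n, nullptr))
        std::printf("%lld bytes (%lld sectors of 512)\n",
                    li.Length.QuadPart, li.Length.QuadPart / 512);
    CloseHandle(h);
    return 0;
}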
 
I'm thinking that the structure of the LDM Data partition may include a header from sectors 262178 to 264192, in which case we probably should not have edited sector 2.

I would have thought that LBA 34 would be the primary PRIVHEAD sector, but you don't appear to have found one?

BTW, I agree that searching for the next non-zero sector would be a useful feature. I often need to do this.

Elsewhere you mentioned the protective MBR. AISI, it should be present on both drives.
 

oblivioncth
Any more progress? If not, I'll try to analyse each of the data structures, if only to understand them better.
Been a tad busy, but I've become enthralled with trying to fix this volume manually now that I'm starting to understand the involved formats.

1) I started a doc in the root of the folder with a list of known potential issues, which I will update (and feel free to do so yourself; more on that in (2)) as we come upon more discrepancies or dig through details.

2) If you want to add/edit anything to/in the Drive folder I can PM you a share link with editing permissions, or you can PM me a Google account to share it with directly. If you'd rather just keep notes to yourself and post the significant ones here, that's fine as well.

3) The utility I wrote had a couple of bugs stemming from a single unsigned-to-signed integer conversion somewhere, so I squashed that and uploaded the new version. It could use some polish and optimization but works really well and allowed me to make sure I combed over the entirety of each drive.

4) With the help of the tool I grabbed a couple more dumps from the working example. Everything in the '0' and '1' 'Raw Data' folders should be everything on those drives that matters for their operation. For drive '1' I believe those dumps are actually every non-null section of data on the entire drive, aside from a couple of patches that were just all high bytes (0xFF) and didn't seem to be metadata or structure related. For drive '0' it is pretty much the same deal, except that I did omit some sections that seemed to be files inserted by the OS, well into the data section of the volume, that didn't appear to contain any info relevant to the construction of the volume. I loosely marked where that section of data was in case it is ever needed. There are also a few dumps of what also seems to be straight data, but it's stuff that occurs early in the volume and some of it looks like it could be hidden $SystemVolumeInformation files, so I included them just in case.

5) Before this happened, I'd been working in my spare time on a file editor for a proprietary format in an old game, and I've been bouncing back to that here and there to keep my mind fresh on where I left off. Otherwise, I will now start analyzing the dumps as well, and will probably focus on making a bare-bones map with markers for where each section of the 'Working Example' span starts/ends so that we have something to compare the damaged volume to. I may write a short utility that works in tandem with the other one and lightly parses (mainly just breaks up) the key sectors we've discussed (GPT entries, PRIVHEAD sector, etc.) using the references that have accumulated in this thread, so that, again, comparing the example and corrupted spans is easier.

6) Yes, something is definitely up with sectors 262178 to 264192 on Drive 4 so I'll be sure to examine that as well in relation to the example and Drive 3.

7) The location of the PRIVHEAD sectors on the example span confused me a little as well, since in some ways their location was fairly dissimilar to the ones found on Drives 4 and 3, but hopefully the reason for this will become clear once the offsets in each record are checked and compared; these differences could very well be part of the problem with the damaged span. The only thing near LBA 34 is the table of contents sector at LBA 36 on both Drive 0 and Drive 1 of the example (I double-checked). Otherwise, the only PRIVHEAD blocks present were the ones at the locations you can see in the dumps I provided. I can search for them specifically in case this proves to be a significant issue in terms of making sense of the dumps.

8) I will look them over and do a little bit of research, but I have a feeling that the protective MBR sector employed on GPT-based disks is not partition specific and I can just transplant the one from the example into Drives 3 and 4 (see the sketch below).
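For reference, the protective MBR is generic apart from the size field: a single type-0xEE partition entry starting at LBA 1 and covering the rest of the disk (capped at 0xFFFFFFFF for disks this big), plus the 55 AA signature. A sketch that builds the sector in memory, based on the layout in the UEFI spec (the CHS bytes are the values most tools write; the sector count is an example, so verify against the working example before writing anything):

Code:
// protective_mbr.cpp - build a GPT protective MBR sector in memory (sketch).
#include <cstdint>
#include <cstring>
#include <cstdio>

int main()
{
    uint64_t diskSectors = 15628053168ULL;    // example value for an 8TB drive
    uint8_t mbr[512] = {};

    uint8_t* e = mbr + 0x1BE;                 // first (and only) partition entry
    e[4] = 0xEE;                              // type: GPT protective
    // start CHS 0/0/2, end CHS maxed out (the values most tools write)
    e[1] = 0x00; e[2] = 0x02; e[3] = 0x00;
    e[5] = 0xFF; e[6] = 0xFF; e[7] = 0xFF;

    uint32_t startLba = 1;                    // protects everything after the MBR itself
    uint32_t sizeLba  = (diskSectors - 1 > 0xFFFFFFFFULL)
                        ? 0xFFFFFFFFu : (uint32_t)(diskSectors - 1);
    std::memcpy(e + 8,  &startLba, 4);        // MBR fields are little endian (as is x86)
    std::memcpy(e + 12, &sizeLba, 4);

    mbr[510] = 0x55; mbr[511] = 0xAA;         // boot signature

    std::printf("entry: type 0x%02X, start %u, size %u\n", e[4], startLba, sizeLba);
    return 0;
}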
 

oblivioncth
I'm assuming your expectation that the PRIVHEAD sector would be much earlier than it is in the working example is based on this: http://www.ntfs.com/ldm.htm

Unfortunately, it seems that this is only for MBR (which, as you mentioned multiple times, could be an issue), as confirmed by the fact that their diagram shows the LDM metadata at the end of the drive. This is commented on in the wiki article for LDM:
On a disk partitioned with the MBR Partition Table scheme, the Logical Disk Manager metadata are not stored in a partition, but are stored in a 1 MiB area at the end of the disk that is not assigned to any partition.

In the GPT version of LDM there is an LDM Metadata partition that is the first partition on the disk and is hidden. This contains a table of contents (TOCBLOCK) and holds the same data that would be at the end of the disk in the MBR case. Somehow this points to the PRIVHEAD sector that is located deeper in the disk. I'm not sure exactly where yet and am trying to find info about this metadata partition, but it seems that the GPT variant of LDM has nowhere near as much documentation online as the MBR version does.
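One concrete thing to check while combing through the GPT entries: the two LDM partitions have well-known Microsoft partition type GUIDs, so the type field of each entry on both disks should match these (from Microsoft's published list; worth verifying against the dumps):

Code:
// Well-known GPT partition type GUIDs relevant here (per Microsoft's docs).
// Note: GPT stores the first three GUID fields little endian on disk, so the
// raw bytes of an entry will not read left-to-right against these strings.
static const char* GPT_LDM_METADATA = "5808C8AA-7E8F-42E0-85D2-E1E90434CFB3";
static const char* GPT_LDM_DATA     = "AF9B60A0-1431-4F62-BC68-3311714A69AD";
static const char* GPT_BASIC_DATA   = "EBD0A0A2-B9E5-4433-87C0-68B6B72699C7"; // for comparison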

I've mapped out a good amount of the working example under the document "Span Map" that is within the "Working Example" folder on Drive, but I still have a fair bit to go. I'm really hoping to find documentation on the GPT LDM Metadata partition so I can be more certain of things. For example, the GPT entry for the first partition on drive 0 says the first sector is LBA 34, but the TOCBLOCK for that partition resides at LBA 36. It would be nice to know whether the first two sectors are known to be blank or reserved and/or have a name, instead of just noting those two sectors down as "NULL Data".

Lastly, this tool might come in handy as a last resort, or potentially for documentation reasons depending on how much information it parses. It seems fairly capable at a glance: http://www.rodsbooks.com/gdisk/
 

oblivioncth
Ok,

It took quite a while, but I have finally finished completely mapping and parsing the metadata of the 'Working Example' span by hand. The process was very enlightening, and I now have a much better understanding of how Windows handles dynamic disks in the GPT format. It is now very clear how the LDM metadata partition is laid out, and I found a lot of clues as to what might be up with the 262 178 vs 264 192 issue on Drive 4.

I have reverted the changes we made to the GPT entries and deleted the "restored" PRIVHEAD copy, and will now begin the same mapping process for the broken 'Archive' span, which hopefully won't take as long since I can copy and paste some things and am familiar with how the sections relate to each other.

The file "Span Map" in the "Working Example" folder has all of the details, complete with a full overview map that features links to each worksheet for every metadata subsection. There are also plenty of helpful notes for many of the entries.

Given this now clear and labeled data, I am confident that the 'Archive' span can be repaired successfully, as long as the VMDB on both Drive 4 and Drive 3 is intact and original, i.e. not overwritten/corrupted by the initial damage OR by anything Windows might have done after the drives partially came back online following the changes we made (Windows did make some incorrect changes to the GPT partition tables on Drive 4 after I restarted, so it is a good thing we have backups in the form of the dumps I've made). Recreating those hundreds of blocks by hand would be nearly impossible and take way, way too long. I say this because, now that I've been through it all, everything but the VMDB VBLK entries is easily manipulated and straightforward, aside from some potential small issues with some IDs (there are some 4-byte ones, beyond the 16-byte GUIDs, that I'm a little uncertain of).
 

oblivioncth
Sorry for ghosting for a while, I've just been busy.

Good news and bad news.

The bad news is that while digging I began to realize that a small but significant portion of the VBLK volume information had been overwritten. I attempted to make some changes to the entries to see if it was even feasible to try to recreate the missing ones, but started getting strange errors in both HxD and DMDE that I had never seen before (it has been a while, but it was something to the effect of "System IO Error: Invalid Operation", only stranger; I can't remember it exactly) when trying to perform basic edits that had worked in the past on Drive 3. Strangely, it was only occurring on certain sectors of the disk, which perhaps may have been indicative of bad sectors.

In the interest of making sure I didn't lose my data, I decided to cut my losses and try to recover my data onto whatever segments of spare drives/space I had.

The good news is that I was able to recover everything using the spanned volume/RAID function of DMDE, as you had instructed previously, and my machine is back up and running as it was before the incident.

While it may be anticlimactic that I never figured out exactly what happened to the drives, I learned a lot along the way and am glad I at least tried. I never would have recovered my data, or had the momentum to dive into these structures as much as I did, without your experience and direction, so for that I thank you.

Best of luck in your endeavors.