Question Cloning a Windows GPT disk using Linux and DD, questions about end-of-disk secondary partition table

Cyber_Akuma

Distinguished
I have a RAID array housing a Windows install that I am trying to clone in order to recover it. It's two 1TB SSDs that were set up in a 2TB RAID0 configuration and formatted as a GPT disk.

I managed to recover the array virtually, using images of the raw disks and several tools in Linux. However, since the array only exists virtually this way, I need to clone it to a physical disk to actually use it (I need to be able to boot it on the original motherboard it came from, to recover some data that is encrypted by my Windows install). So I tried to use dd to clone it to a 2TB HDD. I assumed this would not be an issue, since I had cloned the array to a single 2TB HDD once before, when I wanted a copy of my install that I would not have to worry about damaging while testing/experimenting with software and tweaks. (Back then I knew I would have trouble ever copying it back to the SSDs, because the SSDs were apparently a tiny bit smaller, which is why I assumed a 2TB HDD would be slightly larger than the array and the clone would fit fine.)

However, dd errored out saying it ran out of space, apparently with one "record"... whatever a record means... to go: "1907730+0 records in, 1907729+0 records out".

I am not sure if this means it simply stopped once it ran out of space, or if it ran out of space just barely, being one "record" short. I looked at the partitioning of the virtualized raid and saw that the last 400MB or so were unpartitioned space, so I assumed I would likely be fine, since it seemed to have run out of space while copying an unpartitioned area.
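(From what I've read since, a "record" is just one block of whatever bs= size I gave dd, so with bs=1M that output would mean it read 1,907,730 MiB from the array but only managed to write 1,907,729 MiB before the HDD filled up, i.e. the HDD came up at least 1MiB short. I'm assuming I could confirm the exact sizes in bytes with something like:
"sudo blockdev --getsize64 /dev/md0"
"sudo blockdev --getsize64 /dev/sdX"
with /dev/md0 being whatever the assembled array ended up as and /dev/sdX being the 2TB HDD I cloned to.)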

However, I was then told that apparently GPT stores a secondary partition table at the end of the drive, and it's possible this did not get copied when it ran out of space.

So first of all... this secondary GPT partition table: is it something like a backup in case the main partition table gets corrupted, similar to MBR having a secondary backup partition table, where the drive is still fine if the backup is missing as long as the main one is intact? Or is it something necessary for the drive to be read properly and/or for the OS to boot? And is it something Linux- or Windows-specific? Or is having a second partition table at the end of the drive part of the GPT standard that every OS supporting GPT adheres to?
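(If it helps, I still have the Linux setup available, so I'm assuming I could at least check whether the clone's backup table survived with something like:
"sudo sgdisk -v /dev/sdX"
with /dev/sdX being the cloned HDD; the sgdisk man page says -v/--verify checks for problems with the main and backup GPT data structures.)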

And second, what if I were to clone it to a 3TB HDD instead? I would definitely have enough space in that case, but since dd just does a blind sector-by-sector copy, it would write this end-of-drive secondary partition table roughly two-thirds of the way through the drive, not at the end of it. I don't know if GPT would treat that the same as not seeing a partition table at the end of the drive at all, since it would not be at the end of the disk but slightly past the middle of it.
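(The closest thing I've found to an answer for this part is that sgdisk has an option to move the backup structures to the real end of the disk after cloning to a larger drive, so I'm guessing the fix after a dd to a 3TB drive would be something like:
"sudo sgdisk -e /dev/sdX"
with /dev/sdX being the 3TB HDD; -e/--move-second-header is documented as relocating the backup GPT data structures to the end of the disk. I'd appreciate confirmation that this is the right approach though.)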
 

Cyber_Akuma

Distinguished
RAID 0 with NO backup of the data?

I had a backup; it was just a little older, and the data wasn't anything important. It's more of a "it would be nice if I could just recover this as if nothing happened" situation.

What tool did you try for this clone operation?

I used dd to image each of the drives in the raid to an image file

Specifically, I used the command:
"sudo dd if=/dev/sdc of=/media/caztest/TempBackup/drive1 bs=1M oflag=direct"

with /dev/sdc being the USB dock I was reading the SSDs that made up the RAID0 from.
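(If I do this again on a newer coreutils, I'm assuming I could add status=progress to watch how far along it is, e.g.:
"sudo dd if=/dev/sdc of=/media/caztest/TempBackup/drive1 bs=1M oflag=direct status=progress")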

I then plugged the second SSD of the raid into the same slot in the dock and ran:
"sudo dd if=/dev/sdc of=/media/caztest/TempBackup/drive2 bs=1M oflag=direct"

Then I used losetup to attach each of the images as a read-only block device:
"sudo losetup -f -r /media/caztest/TempBackup/drive1"
"sudo losetup -f -r /media/caztest/TempBackup/drive2"

This mounted them as loop0 and loop1 respectively.
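(Since -f just grabs the first free loop device, "sudo losetup -a" is what shows which image actually got attached to which device.)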

Then I used those block devices with mdadm in --build mode, manually telling it that it's a RAID0 with a 128K stripe (chunk) size, in order to assemble the raid itself:
"sudo mdadm --build --chunk=128 --level=0 --raid-devices=2 /dev/loop0 /dev/loop1"

I had to remember what stripe size I originally used when I set it up. At first I thought I had used 64, but that was wrong; then I tried 128 and it worked:

View: https://i.imgur.com/mbNTrMM.png
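(In case it's useful to anyone checking my work: I believe "cat /proc/mdstat" confirms the array is assembled, and "lsblk /dev/md0" should list the GPT partitions inside it, assuming the array came up as /dev/md0. The real test of the chunk size guess seems to be whether the Windows volume actually mounts and reads cleanly, e.g. something like "sudo mount -o ro /dev/md0p3 /mnt", where md0p3 is just a placeholder for whichever partition node turns out to be the Windows volume.)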


Please give us the whole backstory on this...

What drives, how was this RAID instantiated (hardware, software?), what is your end goal, etc, etc...

Basically, it was an Intel RAID set up by the motherboard, running off two Samsung 840 EVO drives. The system it was set up on died (mostly... long story), so I am trying to see if I can recover the raid in a safe way. I was told that simply putting the SSDs in an Intel board with the same or a newer chipset would work, but I didn't want to risk that being wrong and somehow wiping my raid. So instead I imaged the individual disks and assembled those images into a read-only raid, without even having the drives connected after imaging them. Now I just need to clone that virtualized raid to a physical disk.

You're kind of focusing completely on the raid. The raid itself isn't the issue; I got it running and mounted just fine, as described above. My issue is with how this whole GPT end-of-disk partition table works and how to properly clone it.
 

USAFRet

Titan
Moderator
You're kind of focusing completely on the raid. The raid itself isn't the issue; I got it running and mounted just fine, as described above. My issue is with how this whole GPT end-of-disk partition table works and how to properly clone it.
Well, a regular procedure would be to instantiate a new RAID array and recover the data back into it.

Rather than trying to rebuild, in its totality, whatever happened with the original, including the data.
 

Cyber_Akuma

Distinguished
Problem is the raid was a bootable drive, and it's important that I be able to boot it again, not just recover its data, which I have already done.

It's not important that it remain a raid; I just want to be able to boot into it. I was actually hoping to clone it to a single SSD once I could afford one big enough, because I was tired of dealing with it being a raid.