Question: I have a RAID1 array with one failed drive that I need to back up to a new drive on Linux. What do you recommend?

thedonquixotic

Distinguished
Jun 13, 2008
I had the OS SSD fail, and then the motherboard fail, so now I've got an all-new setup, but I need to get my archival RAID1 set back up. One of the 2 drives appears to have failed (it shows only a 4 GB partition in the GNOME Disks utility), and I want to rescue what I can from the remaining drive without potentially destroying or corrupting it. From what I can tell it is intact, but I haven't gotten the RAID1 set back up in mdadm because I don't want to accidentally corrupt it, so I'm not sure, and I don't know how to inspect the drive's contents if it doesn't show up in Nautilus.


So I figure the safest thing to do is to clone the disk in its entirety to a new disk, then try to inspect the new disk, get it set up with mdadm, and so on. What would be your recommended tool to do this clone on Ubuntu 22.04? I have tried googling this, but all I find are articles about how to clone a RAID1 setup (usually assuming some kind of OS image being cloned) or how to clone a RAID1 disk on Windows.
 
If you want to clone the drive you can use rsync. Mount both drives in Ubuntu and copy the data. IIRC your command in terminal will be:
rsync -R <old drive path> <new drive path>

You should double-check by doing a man rsync to make sure. However, this will make a block level copy to the new drive.
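For example, something along these lines (just a sketch; the mount points and device names are placeholders, adjust to your system, and note that rsync copies files rather than raw blocks, so the source and destination drives don't need to be the same size):

Code:
# Mount the source filesystem read-only and the new drive, then archive-copy.
# /mnt/old, /mnt/new and the device names are placeholders -- adjust to yours.
sudo mkdir -p /mnt/old /mnt/new
sudo mount -o ro /dev/md0 /mnt/old                 # or the surviving member, once it's readable
sudo mount /dev/sdc1 /mnt/new                      # filesystem on the new drive
sudo rsync -aHAX --info=progress2 /mnt/old/ /mnt/new/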
 

USAFRet

Titan
Moderator
With a RAID 1 and a failed member, the remaining drive(s) should still be fully readable.
That is the whole point of a RAID 1.
It reports as degraded, but it should still be fully operational.

Copy or clone to some other drive as needed.
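Roughly something like this, assuming the surviving member shows up as /dev/sdb (that device name is a guess; check with lsblk first):

Code:
sudo mdadm --assemble --run /dev/md0 /dev/sdb   # --run starts the array even though it is degraded
cat /proc/mdstat                                # should list md0 as active (degraded)
sudo mount -o ro /dev/md0 /mnt                  # mount read-only so nothing on the old disk changes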
 

thedonquixotic

Distinguished
Jun 13, 2008
this will make a block level copy to the new drive
Sorry, what does this mean? I think I'm missing the significance.

EDIT: okay yeah, that's what I thought it meant. If so, this could pose a problem since the new drive is 4 TB and the old drive is 3 TB.

EDIT2: Okay, I think what I have to do here is block-level copy the 3 TB to the 4 TB, but I can't mount the 3 TB right now because it still thinks it's part of the RAID1, and there's no mdadm config on this new mobo/OS installation, and therefore I can't mount the 3 TB drive (I think that's how this works...). So what I think I need to do is use mdadm to reassemble the RAID1, and once I do that I can grow the RAID1 to include the 4 TB disk?

EDIT3: I apologize for so many edits. I'm not even sure where to get started with reassembling the RAID1. I think it has something to do with mdadm, and I'm googling, but the resources I'm finding are mostly for Windows stuff.
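If I'm reading the mdadm man page right, the rough shape of what EDIT2 and EDIT3 are asking about would be something like this (just my guess; /dev/sdb being the old 3 TB member and /dev/sdc standing in for the new 4 TB drive):

Code:
# Assemble the degraded RAID1 from the surviving member, then add the new
# drive so mdadm rebuilds the mirror onto it.
sudo mdadm --assemble --run /dev/md0 /dev/sdb
sudo mdadm --manage /dev/md0 --add /dev/sdc
cat /proc/mdstat                # shows the rebuild/resync progress
# The array stays at the size of the smaller member, so the extra 1 TB on the
# new drive only becomes usable once both members are at least 4 TB.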

Here's what my mdadm.conf says:


Code:
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=1a544a98:559ba1df:81b67841:7640bf5d name=lovelace:0

# This configuration was auto-generated on Sat, 21 Jan 2023 18:40:25 -0600 by mkconf
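I guess one thing I could check against that ARRAY line is whether the old drive's superblock actually carries the same UUID (something like this, if I understand mdadm --examine correctly; /dev/sdb is my guess at the old member):

Code:
sudo mdadm --examine /dev/sdb                   # dump the member's md superblock, if it has one
sudo mdadm --examine /dev/sdb | grep -i uuid    # should match the UUID in mdadm.conf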

And this is what my disk partitions look like in Disks:

[screenshot: GNOME Disks partition layout]


EDIT4:

Following this advice here, I decided to run the following commands, with the following outputs:


Code:
┌─      ~                                               
└─➤ mdadm --assemble --scan --verbose /dev/md0 /dev/sdb
mdadm: must be super-user to perform this action

┌─      ~                                                
└─➤ sudo !!
sudo mdadm --assemble --scan --verbose /dev/md0 /dev/sdb
[sudo] password for aslan:
mdadm: looking for devices for /dev/md0
mdadm: No super block found on /dev/loop16 (Expected magic a92b4efc, got b94e0a62)
mdadm: no RAID superblock on /dev/loop16
mdadm: No super block found on /dev/loop15 (Expected magic a92b4efc, got 89924e23)
mdadm: no RAID superblock on /dev/loop15
mdadm: No super block found on /dev/loop14 (Expected magic a92b4efc, got 89ab2deb)
mdadm: no RAID superblock on /dev/loop14
mdadm: No super block found on /dev/loop13 (Expected magic a92b4efc, got 3e22646e)
mdadm: no RAID superblock on /dev/loop13
mdadm: No super block found on /dev/loop12 (Expected magic a92b4efc, got 3e22646e)
mdadm: no RAID superblock on /dev/loop12
mdadm: No super block found on /dev/loop10 (Expected magic a92b4efc, got 08e76eab)
mdadm: no RAID superblock on /dev/loop10
mdadm: No super block found on /dev/loop11 (Expected magic a92b4efc, got 448ff4ed)
mdadm: no RAID superblock on /dev/loop11
mdadm: No super block found on /dev/loop9 (Expected magic a92b4efc, got 6042ae44)
mdadm: no RAID superblock on /dev/loop9
mdadm: No super block found on /dev/loop8 (Expected magic a92b4efc, got 6405001a)
mdadm: no RAID superblock on /dev/loop8
mdadm: /dev/sdb is busy - skipping
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 000005d3)
mdadm: no RAID superblock on /dev/sda
mdadm: No super block found on /dev/nvme0n1p2 (Expected magic a92b4efc, got 00000476)
mdadm: no RAID superblock on /dev/nvme0n1p2
mdadm: No super block found on /dev/nvme0n1p1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/nvme0n1p1
mdadm: No super block found on /dev/nvme0n1 (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/nvme0n1
mdadm: No super block found on /dev/loop7 (Expected magic a92b4efc, got 6405001a)
mdadm: no RAID superblock on /dev/loop7
mdadm: No super block found on /dev/loop6 (Expected magic a92b4efc, got c4b43b1a)
mdadm: no RAID superblock on /dev/loop6
mdadm: No super block found on /dev/loop5 (Expected magic a92b4efc, got c4b43b1a)
mdadm: no RAID superblock on /dev/loop5
mdadm: No super block found on /dev/loop4 (Expected magic a92b4efc, got 3a23b8f9)
mdadm: no RAID superblock on /dev/loop4
mdadm: No super block found on /dev/loop3 (Expected magic a92b4efc, got 3a23b8f9)
mdadm: no RAID superblock on /dev/loop3
mdadm: No super block found on /dev/loop2 (Expected magic a92b4efc, got 764c0e15)
mdadm: no RAID superblock on /dev/loop2
mdadm: No super block found on /dev/loop1 (Expected magic a92b4efc, got 764c0e15)
mdadm: no RAID superblock on /dev/loop1
mdadm: /dev/loop0 is too small for md: size is 8 sectors.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/sdb not identified in config file.
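The two lines that stand out to me in that output are "/dev/sdb is busy - skipping" and "not identified in config file", so I probably need to work out what is currently holding sdb before anything else. Something like this, I think:

Code:
cat /proc/mdstat        # has some auto-assembled array (e.g. an inactive md127) already grabbed sdb?
lsblk /dev/sdb          # is anything on sdb mounted or claimed as an md member?
# If an inactive array such as /dev/md127 is holding the disk, it can be
# released with:  sudo mdadm --stop /dev/md127   (md127 is just a guess here)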
 
Last edited:
Have you tried to just mount the drive, without using mdadm?
 
I have tried using the Disks utility app, but it doesn't give me the option. I don't think it can mount until it's been assembled by mdadm, and it's currently being blocked from being assembled due to "superblock" issues.
Try running this command from the CLI to see if it will mount.

sudo mount /dev/sdb /mnt
*This should mount the drive at /mnt. Note that you might need to change /dev/sdb to /dev/sdb1, as there is probably a partition on the drive.
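Before that, it might be worth checking what signature is actually on the device, since a plain mount will fail if it only carries RAID member metadata (a quick sketch, /dev/sdb assumed):

Code:
sudo lsblk -f /dev/sdb     # lists partitions and any detected filesystems
sudo blkid -p /dev/sdb     # probes the raw device; an md member with 1.2 metadata
                           # typically shows TYPE="linux_raid_member"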
 
Code:
# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=1a544a98:559ba1df:81b67841:7640bf5d name=lovelace:0

RAID superblock formats:
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats

Code:
Sub-versions of the version-1 superblock

Sub-Version       Superblock Position on Device

1.2               4K from the beginning of the device

To me, this says that the RAID metadata sits 4 KiB from the beginning of the drive, assuming it has been correctly detected. Therefore, DMDE should find a Linux volume at an offset past that metadata.
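A way to confirm where the data actually starts on that member would be to read the superblock directly (sketch, assuming the member is /dev/sdb):

Code:
sudo mdadm --examine /dev/sdb
# In the output, "Super Offset" and "Data Offset" (in 512-byte sectors) show where
# the metadata sits and where the filesystem data begins; with 1.2 metadata the
# data offset is often well past 4K, so a scanner would need to start there.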
 

thedonquixotic

Distinguished
Jun 13, 2008
I made a copy of the drive and tried to run fsck to fix the new drive, and got the following:

Code:
sudo fsck.ext4 -v /dev/md0
[sudo] password for aslan:
e2fsck 1.46.5 (30-Dec-2021)
fsck.ext4: Invalid argument while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
or
    e2fsck -b 32768 <device>
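
From what I can tell, that "Invalid argument" could just mean /dev/md0 isn't an assembled, running array at that moment (rather than a corrupt filesystem), so it's probably worth checking that before pointing fsck at it:

Code:
cat /proc/mdstat                # is md0 listed and active?
sudo mdadm --detail /dev/md0    # if this errors out, the array isn't assembled,
                                # so fsck has nothing valid to open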


Can you show us the Partitions tab in DMDE?
I went to the website and the only thing they have for Linux is a console program. It's in a zip, and I'm not sure how I should install it. Does it need to go in a specific directory, or can I just run it by calling one of the .sh files with bash?


Try running this command from the CLI to see if it will mount.

It's a RAID1, so I would need to mount md0 though, right?
 
Last edited: