Greetings, I have a Debian server at home running a file server, a Jellyfin server, and a few other things. I also had 4 external drives hooked up to it in a RAID 10 (or is it 1+0?) configuration. The SSD I had the actual server installed on failed overnight and looks to be beyond recovery. So my question is: when I install Debian on a new drive to replace the failed one, is there any method I could use to get the RAID array working with the new server without it being rebuilt? From what I have found it looks like that is not possible, but I figured I would ask. The actual RAID disks are fine; ironically, they are about 8 years old while the SSD that failed was only about 2. No important data was lost; it will just be a bit of a pain to replace everything that was on the server if I have to rebuild the array and lose all of the data. Thanks in advance.
EDIT: Forgot to include that this was set up using mdadm.
EDIT2: So it turns out this was not as massive a problem as I thought. I had assumed that since the server that set up the RAID array with mdadm was lost, I would not be able to get back into the array even though the data was still there. That was not the case. As soon as I connected the drives to the new server, mdadm recognized the array. It turns out one of the RAID disks had also failed (no idea why, but they are old), but luckily I have a spare, so I swapped it in and now have to wait patiently for 12-ish hours for the array to rebuild. In the meantime I got my file share back up and running and confirmed everything is accounted for. So provided there are no other random failures in the next 12 hours, everything should turn out fine. Thanks all for your help. Now to get Jellyfin installed and running again so I can get back to streaming the same shows over and over…
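(For anyone who finds this later: the disk swap in EDIT2 boiled down to something like the commands below; /dev/md0, /dev/sdd1 and /dev/sde1 are just placeholders for my array, the failed member and the replacement disk's partition.)

```
# Mark the dead member as failed and remove it from the array (if it is still listed)
sudo mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1

# Add the replacement disk's partition; the rebuild starts automatically
sudo mdadm /dev/md0 --add /dev/sde1

# Watch the rebuild progress
watch cat /proc/mdstat
```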
Assuming you were using a Linux software RAID, you should be able to recover it.
The first step would be to determine what kind of RAID you were using… btrfs, zfs, mdraid/dmraid/lvm… do you know what kind you set up?
To start the process, try reconnecting your RAID disks to a working Linux machine, then try checking:
1. `sudo lsblk` will give you a list of all connected disks, their sizes and partitions.
2. Check the partition tables on the disks, e.g. `sudo fdisk -l /dev/sda` (that's a lowercase L, and /dev/sda is your disk).
3. Assuming you used standard Linux software RAID, try `sudo mdadm --examine /dev/sda1`. If all goes well, that command should give you an idea of what state the disk is in, what RAID level you had, etc.
4. Next, I would see if mdadm can figure out how to reassemble the array, so try `sudo mdadm --examine --scan`. That should hopefully produce output with the name of the RAID array block device (e.g. /dev/md0), the RAID level, and the members of the array (number of disks); see the example session below. Let me know what you discover…

Note: if you used zfs or btrfs, do not do steps 3 and 4; they are MD RAID specific.
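Putting those steps together, a rough sketch of what such a session looks like (the device names /dev/sda, /dev/sda1 and /dev/md0 are just placeholders; substitute whatever lsblk shows for your disks):

```
# List all connected disks, their sizes and partitions
sudo lsblk

# Look at the partition table of one of the RAID member disks
sudo fdisk -l /dev/sda

# Inspect the md superblock on a member partition:
# shows the array UUID, RAID level and the state of this member
sudo mdadm --examine /dev/sda1

# Scan all devices and report any arrays mdadm can identify
sudo mdadm --examine --scan

# If an array is reported, this will usually just reassemble and start it
sudo mdadm --assemble --scan
```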
To check for MD arrays you can also just `cat /proc/mdstat`. Modern kernels will auto-sense MD arrays. If the array is listed there with a name like `md1`, it will tell you what partitions on what disks it's using (like `sda1` and `sdb1`) and if they're both OK (`[UU]` means OK). If the array is currently rebuilding or recovering it will say that too and show a progress meter. You can also find a corresponding filesystem device, `/dev/md1` (or whatever the name is), which you can mount to access the files.
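For reference, a healthy two-disk array in /proc/mdstat looks roughly like this (device names, sizes and the md1 name are made up for illustration); a rebuilding array shows an extra line with a progress bar, percentage and time estimate:

```
$ cat /proc/mdstat
Personalities : [raid1] [raid10]
md1 : active raid1 sdb1[1] sda1[0]
      976630464 blocks super 1.2 [2/2] [UU]

unused devices: <none>

$ sudo mount /dev/md1 /mnt    # the array's filesystem is then available under /mnt
```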
Good point! I assumed the worst; but it's possible the array is rebuilding or even already rebuilt and just needs to be mounted.
I forgot to include it, but yes, this is a software array done with mdadm. Thanks for the steps! I will give these a go.
Is it a hardware raid or a software raid? If it's software (not sure about hardware), the discs themselves should have the array's metadata on them, and you can just use mdraid and restart the array.
Since nobody mentioned how to tell if it’s hardware or software RAID:
1. If you created the arrays from Linux, it's software. You can install any Linux (it doesn't have to be the same distro) and you will be able to recover and access it.
2. If you created the arrays from a special config tool started at boot, it's hardware. That usually only happens if you have a dedicated RAID card in your PC. To recover such an array you typically need the exact same card model in your PC.
3. If you created the arrays from the BIOS, it's a bastard form of proprietary software RAID implemented by the motherboard. To recover such an array you need the same model of motherboard.

But you should NOT lose the array in any of the above cases. Losing the system disk doesn't have any impact on them. In case (1) you can simply reinstall, and in (2) and (3) you only lose the array if the card or the motherboard dies.
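If you're unsure which case applies to an existing box, a quick sketch of how to check from a live Linux session (device names are examples, and dmraid may not be installed by default):

```
# Software (md) RAID: the member partitions carry md superblocks
sudo mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Dedicated hardware RAID card: usually shows up as a RAID-class PCI device
lspci | grep -i raid

# Motherboard "fake RAID": dmraid reports Intel/AMD/etc. metadata, if present
sudo dmraid -r
```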
Great explanation. Yes - I've done this before! Built up a system with a RAID array but then realized I wanted a different boot drive. Didn't really want to wait for dual 15 TB arrays to rebuild - and luckily for me, I didn't have to! Because the metadata is saved on the discs themselves. If I had to guess (I could be wrong though) - I believe `sudo mdadm --scan --examine`, or something similar to that command, should bring up some info about the discs.
`mdadm --examine` looks at the superblocks of all available partitions and prints information about the ones that belong to RAID arrays. `mdadm --detail` prints information about running arrays. When added to one of the above, `--scan` will get any missing information from `/proc/mdstat` or from `/etc/[mdadm/]mdadm.conf`.

The output of that command is also commonly used to populate `/etc/mdadm.conf`. That file is a way to fine-tune array assembly and add meta information: human-friendly names, alert emails etc. It is not a substitute for either `/proc/mdstat` (which is maintained by the kernel directly) or `/etc/fstab`. It can be very useful for creating consistent reference points for the arrays, especially if you port them to another system or reinstall. `mdadm.conf` can be used to identify discs by block ID (instead of device names) and also give them custom names (instead of names like md3, where the kernel can issue different numbers on a different install).
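On Debian the file lives at /etc/mdadm/mdadm.conf, and a common way to populate it after moving an array to a fresh install looks something like this (paths shown are the Debian defaults):

```
# Append the detected arrays to mdadm's config
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled early during boot
sudo update-initramfs -u
```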
Yes I forgot to include that this was done with mdadm, so a software array.
ZFS handles this all by itself, no problem at all.
For a 'real' (hardware) RAID it depends on the controller.
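For the ZFS case, the equivalent on a fresh install is roughly the following (the pool name "tank" is just an example; use whatever your pool is called):

```
# List pools that ZFS can find on the attached disks
sudo zpool import

# Import a pool by name; -f forces it if the pool was last used on the dead system
sudo zpool import -f tank
```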
So you have one hard drive outside the raid and 4 hard drives in the raid, and the drive outside the raid failed. Is it software raid or hardware raid? If it's software raid then you need to know the original configuration. If it's hardware raid then you should be good to go.
Forgot to include it in the OP but it was done with mdadm so it is a software raid.