RAID 1 danger, implementation failure

I can confirm a few tests of RAID1 “auto” setups seem to be very iffy. I think there was an EFI link to one of the RAID disks, so when that disk was pulled, no boot. When booted without the other, ALSO FAIL. Yes, this means the RAID1 was less resilient than a single disk, because a failure of the ineffective RAID1 partner disk = no boot. I’m testing all varieties of EFI/BIOS and allowing for changing boot order, no good. The only config that works is the original pair in RAID1, both working. Pull either, no boot. It used to be: pull the wrong one, no boot. So it was not great, but better than a single disk… So Sangoma’s most recent implementation of RAID1 is worse than RAID0: no performance gain, and increased risk over a single disk.

I now need to figure this out. I am sure it was possible/tested in the old days/distros. Probably EFI was a bad mistake. Let boot order do its job: when disk 0 is yanked, the 2nd choice should boot fine. This was easy with BIOS GRUB for decades.
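For reference, the old BIOS-era recipe was roughly to put the bootloader on the MBR of both RAID1 members so either disk could be the firmware’s boot choice. A sketch from memory (device names are assumptions; it’s grub2-install on CentOS-based distros, plain grub-install elsewhere):

# assuming the two RAID1 members are /dev/sda and /dev/sdb
grub2-install /dev/sda
grub2-install /dev/sdb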

I like the option of a USB boot… keep two or four onsite to try either part of the RAID. Until/unless Sangoma fixes this disaster.

What a shame. I’m trying to get resilient behavior in the lab, but feel I’ll have to go to an old distro or bail to an alternative.

test of raid1 auto - ok

What distro version?

I’ve used the RAID for years, it has always worked well for me.

We’d need a lot more to go on here to really help - that said, I’ll do some testing with the latest ISO to validate.

Do you have a boot partition on both RAID disks?

If not, it will be hard to resolve, as GRUB is not part of Linux, so you need something to boot from if the primary drive dies.

Ideally you have mirrored your boot partition, which is software RAID 1 with the primary drive, and installed GRUB on both.

cat /proc/partitions
cat /proc/mdstat
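Roughly what I’d expect to see on a healthy setup (a sketch; device and md names are assumptions):

cat /proc/mdstat                        # arrays should show [UU] with both members active
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT    # /boot should sit on an md device, not a raw partition
mdadm --detail /dev/md0                 # both underlying partitions listed as "active sync"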

(Dear God, here we go again with UEFI and Redmond !!!)

I’m on my second attempt at the RAID auto setup…

I want to catch you guys so gimme 2 mins…

ultra appreciated.

So on the second install, when I selected the non-EFI USB “choice”, it gave me a single disk, re-wiping.

Check with the OS provider.

Again

cat /proc/partitions
It needs to show identical fd partitions for RAID1 booting, and GRUB needs to be installed on both drives’ boot partitions, usually /dev/md0 (EFI/GPT is harder than legacy/msdos :wink: )
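If the second drive doesn’t match, something along these lines usually sorts it out on a legacy/msdos layout (a sketch only; /dev/sda and /dev/sdb are assumptions, double-check which disk is which before running):

sfdisk -d /dev/sda | sfdisk /dev/sdb    # clone the partition layout onto the second disk
fdisk -l /dev/sda /dev/sdb              # both should now show matching "fd" (Linux raid autodetect) partitions
grub2-install /dev/sda                  # bootloader on both drives so either one boots alone
grub2-install /dev/sdb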


I created a bug report but it is still not fixed. If you allow the installer to configure mdraid for RAID 1 instead of creating it manually, you will definitely end up with a non-working setup if either of the disks fails, because the boot FS is on a partition that is created directly on only one of the disks. Even worse, if the system is set up for EFI, the EFI partition is also created on only one of the disks.
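One manual workaround is roughly this (a sketch only; partition numbers and the loader path are assumptions and vary by distro): create a matching EFI System Partition on the second disk, copy the existing one across, and register a second firmware boot entry so the firmware can fall back to the other disk.

mkfs.vfat -F32 /dev/sdb1                 # assuming sdb1 was created as a matching ESP
mount /dev/sdb1 /mnt
cp -a /boot/efi/. /mnt/                  # copy the contents of the existing ESP
umount /mnt
efibootmgr -c -d /dev/sdb -p 1 -L "Linux (disk 2)" -l '\EFI\centos\grubx64.efi'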

I use software RAID a LOT with small Linux servers (like NAS devices).

I have had the situation of a hard drive dying in such a way that it wedges the SATA bus and prevents the other drive from booting. I can definitely confirm you are correct in this. But you miss the fact that if one of the software RAID disks dies, the other is getting close to dying as well.

Software RAID should only be used for one reason - as an emergency, last-ditch attempt to preserve files.

When I set up a NAS on Linux for a customer, I add a dock and instruct the customer to regularly change disks. IF a disk dies, I’m going to replace BOTH disks, rebuild the server, then restore files from backup. IF for some reason the backup is scotched, or more likely the customer hasn’t been swapping backup disks or whatever, then I might rebuild the NAS with new disks, then plug the remaining original working disk into the dock and mount it to get at the files. But this is an emergency procedure only, and if I ever had to do it I would charge triple and warn the customer it might not work.

Keep in mind that with GPT it gets very dicey to set it up properly, particularly with large disks. dmraid, for example, writes its metadata at the end of the disk; MBR only put the bootstrap loader on track 0, so the two didn’t stomp on each other. With GPT that metadata gets trashed, since GPT puts a backup partition table at the end of the disk.
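If it’s mdadm rather than dmraid, you can at least check where the metadata lives (a sketch; member names are assumptions) - superblock format 1.2 sits near the start of the member, away from the GPT backup table at the end of the disk:

mdadm --examine /dev/sda2    # look at the "Version" and offset lines
mdadm --examine /dev/sdb2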

You can buy an older ProLiant with a real hardware RAID card in it, load it up with drives, and get real redundancy. I personally would be very leery of putting an application server on software RAID.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.