Well, first of all, it's:
Debian Bookworm Root on ZFS — OpenZFS documentation
not the link you put up.
Secondly, that probably isn't going to work on most UEFI machines out there, because it's not enough to just create a UEFI partition with
sgdisk -n2:1M:+512M -t2:EF00 $DISK
the machine's firmware (what most people still call the BIOS) must also be told about it - the boot entry has to be registered in the firmware's NVRAM. The Ubuntu installer does this, at least it has on every machine I've run it on. I don't see anything in the Debian instructions that does. It MAY work if you are willing to futz with it enough - but the litmus test of any of these setups is: Can You Shut Down The System, Remove A Hard Disk, Turn The System Back On, And Have It Boot?
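For what it's worth, if you do want to go the Debian route, registering the boot entry by hand looks roughly like this - just a sketch, and the disk name, partition number, and loader path are placeholders you'd have to adjust (partition naming also depends on how $DISK is set):

# format and mount the ESP created by the sgdisk command above
mkfs.vfat -F 32 ${DISK}-part2
mkdir -p /boot/efi && mount ${DISK}-part2 /boot/efi

# grub-install normally writes the NVRAM boot entry itself...
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian

# ...or you can register an entry explicitly with efibootmgr
efibootmgr --create --disk /dev/sda --part 2 --label "debian" --loader '\EFI\debian\grubx64.efi'

Whether the firmware actually keeps and uses those entries when a disk disappears is exactly that litmus-test question.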
I will note that the Ubuntu mdadm setup allows for a hot spare in the mirror. So, if you set up a hot spare, AND a disk fails but does NOT take out the SATA bus, in theory the md software will switch over to the hot spare disk and the server keeps running. I didn't see anything like that in those ZFS instructions.
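For comparison, the mdadm version of that is basically a one-liner (a rough sketch - the /dev/sdX2 names are placeholders):

# 2-disk mirror plus one hot spare
mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sda2 /dev/sdb2 /dev/sdc2

# or bolt a spare onto an existing, healthy mirror later
mdadm --add /dev/md0 /dev/sdc2

(ZFS does have hot spares as well - zpool add <pool> spare <disk> - but as said, I didn't see the howto set one up.)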
There is also something to be said for having the "poor man's raid mirror" boot setup INTEGRATED WITH THE LINUX INSTALLER, so that all it takes is 5 minutes in the advanced disk partitioning screen of the character-mode, curses-based installer that you can just tab around to the different selections with, instead of that mess of instructions for forklifting ZFS into the install.
In reality, most of the time I've dealt with these kinds of "cheap-raid" systems, one of two things happens:
- A disk fails, shorts internally on the power or SATA bus, and the system now won't see EITHER disk. You then have to unplug each successive disk until you find the bad one. Although USUALLY it's disk 0, because in these mirrors, while WRITES go to both disks, READS generally only come from the FIRST disk, so that disk gets more use. The remaining disk has a copy of the system - but almost always, you are not going to have a duplicate of the hard disk available, so you have to boot the good disk, either by moving it from SATA position 1 to 0 or by changing the BIOS to boot off the second disk.
- A disk fails with bad sectors or crashes, and the mirror then unmirrors, but the failing disk does not take out the SATA bus. If the server does NOT reboot, you are OK. If it does, it may not come back up if the disk that's failing is the boot disk. (A quick way to check which case you're in is sketched below.)
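In either case, the first useful thing to do is ask md (or ZFS) what it thinks happened - a quick sketch, with array and pool names as placeholders:

# one-line-per-array summary of all md arrays
cat /proc/mdstat

# detailed view of a single array - shows faulty/removed members
mdadm --detail /dev/md0

# the ZFS equivalent - only prints something interesting if a pool is unhappy
zpool status -x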
ALL of these “cheap raid” setups really have to be regarded as the following:
This is a last-ditch, emergency cover-your-ass thing. 90% of the time the disk is buried in the server chassis, so you have to shut the server down to replace the failed disk. Even if it's in a hot-swap tray, because it's connected to the same SATA bus a removal and replacement will throw electronic garbage on the bus and the OS will panic.
The value of this setup is that if a disk takes a dump at 3am and goes offline, the server won't go down. Obviously, you have to write some kind of notification script telling you that the mirror is having a problem, but as long as the server stays up running on the surviving disk you can take your time: order 2 brand-new disks, schedule downtime, make backups, then shut the server off and replace ALL the disks - because by then, the disk that hasn't died yet is very close to dying.
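You don't necessarily have to write that notification script from scratch - mdadm has a monitor mode built in, and a dumb cron job covers the ZFS side. A sketch, with the mail address as a placeholder:

# /etc/mdadm/mdadm.conf - the mdadm monitor daemon mails this address on a failure event
MAILADDR admin@example.com

# one-shot test run to confirm the mail path works
mdadm --monitor --scan --test --oneshot

# ZFS: complain by mail if any pool is not healthy
zpool status -x | grep -qv 'all pools are healthy' && zpool status | mail -s "zpool degraded on $(hostname)" admin@example.com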
I used to do this very successfully with the FreeBSD ide/sata wd driver, which had poor-man's cheap-raid integrated into it. It works OK with the md and dmraid drivers too, once more because the mirror function is separated from the filesystem driver. From the filesystem's POV it's all just 1 big disk; the md driver handles the ickiness of syncing 1 disk to the other. This is also why EFI on that kind of a setup ISN'T a single point of failure: there's only 1 disk - the logical mirror.
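For the record, that's also how the ESP itself can be mirrored on Linux: put it on an md RAID1 created with the old 1.0 metadata format, which stores the superblock at the END of the partition, so the firmware just sees an ordinary FAT filesystem on each disk. Rough sketch, device names made up:

mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.vfat -F 32 /dev/md0

Writes go through md to both partitions, and either copy is readable by the firmware on its own (assuming boot entries exist for both disks).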
In general, a poor-man's cheap-raid mirror is really only usable in a low-cost setup like a lab or a small business that can afford a couple of days' downtime if the server dies. For a production system that needs to chase 9 9's of uptime, you really need a hardware RAID card in there, so that if a disk fails you can eject and replace it and have the hardware RAID card rebuild everything without the operating system even being aware of what's going on.