What this page offers over the others is a little better coverage of EFI issues, and how to deal with a failed disk.
We use Debian 9 (stretch) with a 4.11 kernel (from Debian testing) in this work. We use our concrete drive names here, so make extra sure to substitute your own drive names when doing this on your own machine, or you may destroy valuable data. All commands need root permissions.
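If you are unsure which names apply on your machine, lsblk (from util-linux) gives an overview of the block devices, their file systems, and UUIDs:

lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT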
btrfs device add /dev/sdb3 /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

We also add "degraded" as a file system option in fstab; e.g.:
UUID=3d0ce18b-dc2c-4943-b765-b8a79f842b88 / btrfs degraded,strictatime 0 0

The UUID (check with blkid) is the same for both partitions in our RAID, so no need to specify devices.
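For example, both RAID members should report the same UUID (here assuming /dev/sda3 is the other member, by analogy with the ESP being on /dev/sda1):

blkid /dev/sda3 /dev/sdb3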
We use dd to copy the contents of the ESP to the other ESP partition(s):
dd if=/dev/sda1 of=/dev/sdb1

This has to be repeated every time the EFI partition changes, but it seems that this normally does not change, even when running update-grub. OTOH, it does not hurt to do the dd more often than necessary.
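If you want to make sure the copy succeeded, cmp can compare the two ESPs directly; it prints nothing if they are identical:

cmp /dev/sda1 /dev/sdb1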
We also needed to change /etc/grub.d/10_linux in different places than "The perfect Btrfs setup for a server" (which seems to be written for a BIOS/MBR system) indicates: Search for " ro " (two occurrences), and prepend "rootflags=degraded" to each. One of these lines becomes
linux ${rel_dirname}/${basename}.efi.signed root=${linux_root_device_thisversion} rootflags=degraded ro ${args}

In order for that to take effect, we had to run update-grub.
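If you prefer not to edit the file by hand, the following sketch performs the same change with sed; it assumes that " ro " occurs only in the two kernel command lines, so inspect the result (a backup is kept as 10_linux.bak) and check the generated grub.cfg afterwards:

sed -i.bak 's| ro | rootflags=degraded ro |g' /etc/grub.d/10_linux
update-grub
grep -c rootflags=degraded /boot/grub/grub.cfg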
You probably want to avoid these complications, and while you are still in your first boot, you can. What we did (with the other disk) was to convert the file system back from RAID1 to the single profile, then remove the failed device (the command complains that it does not know the device, but removes it anyway).
btrfs balance start -v -mconvert=dup -dconvert=single /
btrfs device remove /dev/sdb3 /
# now check that it has worked
btrfs device usage /
btrfs fi show
btrfs fi usage /

We then shut down the system, plugged the replacement disk in (actually the disk we had earlier ruined by double degraded booting, after wiping the BTRFS partition), booted, and then did the usual dance to turn the now-single BTRFS into a RAID1 again:
btrfs device add /dev/sdb3 /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /

As a result, we had a RAID1 again.
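To check that the conversion is complete, btrfs fi df should show RAID1 for both data and metadata, with no single or dup chunks left over:

btrfs fi df /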
If you wonder why we did not use btrfs replace: We would have had to connect the new disk before the second reboot, which is not always practical. With the method above, once we have rebalanced the file system to the single profile, we can reboot as often as we like before getting the new drive online.
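For reference, a replace-based recovery would look roughly like the sketch below; /dev/sdc3 is a hypothetical name for the new disk's partition, and if the failed device is no longer present, its devid (as shown by btrfs fi show) has to be given instead of /dev/sdb3:

btrfs replace start /dev/sdb3 /dev/sdc3 /
btrfs replace status /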