Doh... Looks like a hardware failure is coming up!

I would not highlight this feature as zfs-only.
Nowadays you can do that on any filesystem or LVM setup that supports RAID.
I would say that btrfs works much faster than zfs and even plain ext4, but its RAID is a mess, and btrfs is still not production-ready. I would strongly recommend avoiding it for the next 5-10 years (judging by their velocity in fixing major bugs).
I managed to lose data in a test environment on a single-device btrfs during an expand, which never happened with either zfs or LVM. I ran a test that emulates the need to move data in place from NTFS on a single disk, for these three competitors:

  • btrfs
  • zfs
  • LVM + ext4

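The in-place migration scenario can be sketched roughly as follows. This is a hypothetical outline, not the exact test setup: device names, sizes, and paths are assumptions, and each shrink/copy round would be repeated until NTFS is empty.

```shell
# Hypothetical in-place migration sketch (assumed devices: /dev/sda1 = NTFS,
# /dev/sda2 = freed space). Requires root and, ideally, a backup first.

# 1. Shrink the NTFS filesystem to free up space at the end of the disk
#    (then shrink the partition itself to match, e.g. with fdisk or parted)
ntfsresize --size 200G /dev/sda1

# 2. Create the target filesystem in the freed space
mkfs.btrfs /dev/sda2          # or: mkfs.ext4 on an LVM volume / zpool create
mount /dev/sda2 /mnt/new

# 3. Move a batch of data over, then shrink NTFS further, grow the target,
#    and repeat until everything has been migrated
rsync -aHAX --remove-source-files /mnt/ntfs/some-dir/ /mnt/new/some-dir/
```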
I was finally able to finish the test with btrfs on the second attempt, and did not lose data this time, but as we say, "the spoons were found, but the sediment remained" - meaning the items that went missing while someone was your guest turned up later (they were simply elsewhere), and the guest was not to blame for the loss as you thought at the time.
In this context: I had to add more disk space (another small empty partition) to complete the move without losing data. For now I will remember that btrfs is extremely fragile and not ready for use.
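The workaround of attaching an extra partition looks roughly like this (device name and mount point are placeholders): add the empty partition to the btrfs filesystem, then rebalance so the allocator can actually use the new space.

```shell
# Grow a btrfs filesystem that ran out of space mid-move
# (/dev/sdc1 and /mnt/new are assumed names)
btrfs device add /dev/sdc1 /mnt/new
btrfs balance start -dusage=50 /mnt/new   # compact half-empty data chunks
```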

But the facts are: the fastest software RAID1 is LVM, then btrfs, then zfs.
However, checksums and automatic correction of errors on the fly (though not automatic repair of the disk itself) are a nice feature.
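That checksum-and-repair feature is exercised via a scrub on both contenders (pool and mount-point names below are placeholders): the filesystem reads every block, compares it against its checksum, and rewrites bad copies from the good mirror side.

```shell
# zfs: scrub the whole pool; corrupted blocks on one mirror side are
# rewritten from the intact copy
zpool scrub tank
zpool status tank        # shows checksum error counts and repaired bytes

# btrfs: same idea
btrfs scrub start /mnt
btrfs scrub status /mnt
```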
With an emulated hole on one disk, LVM lost data (I was unlucky: it decided to pick exactly the broken disk as the master), while zfs and btrfs both succeeded.
The repair worked pretty well on both zfs and btrfs.

A missing disk is a total failure for btrfs: it was unable to mount the mirror, and I was forced to use a special flag to mount it in a degraded state. So if your root is on a btrfs mirror, be prepared to boot from a LiveCD to fix the issue.
zfs and LVM won this test.
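The "special flag" in question is btrfs's degraded mount option; a sketch of the recovery path (device names and the devid are assumptions for illustration):

```shell
# btrfs RAID1 with one device missing: a plain mount fails, so force a
# degraded mount, then replace the dead disk and let it rebuild
mount -o degraded /dev/sda2 /mnt
btrfs replace start 2 /dev/sdb2 /mnt   # 2 = devid of the missing disk

# By contrast, a degraded zfs mirror imports and mounts on its own,
# merely reporting the pool as DEGRADED
zpool import tank
zpool replace tank /dev/old-disk /dev/sdb2
```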