Need some tricks to survive with BTRFS

Some short notes off the top of my head, as I don’t have time to elaborate.

  • The -t 300 thread was about a default non-Synology setup on a single HDD. As you cheerfully agreed, it’s not optimized for speed, having duplicated metadata and such.
  • To support all these fancy new features, btrfs metadata is, by necessity, larger. This means more RAM is needed to cache metadata, and once you cross the threshold where metadata no longer fits in the RAM cache, you fall off a performance cliff. I don’t know the RAM-to-storage ratio in OP’s case, nor do I know where that threshold is, sorry (a rough way to check is sketched below the list).
  • There is no SSD caching in kernel btrfs. There is in Synology’s implementation, probably based, to some degree, on device mapper/LVM. I did not use Synology in that thread. This is still an option for the OP, though.
  • It really doesn’t help that the OP runs it on SHR. Parity RAIDs are slow to write. If this were btrfs’s own parity implementation, it would be faster, but we know btrfs parity modes (RAID5/6) cannot be trusted.
  • btrfs is crazy slow on synchronous writes, and there is no SLOG to help it like there is with ZFS. I measured it (a minimal sketch of that kind of measurement is below the list). There is a pending experiment on disabling some synchronous writes in storage nodes.
  • File system design is full of trade-offs. You can support more features, but even if the software (like storage nodes) doesn’t use them, you still pay their price. ext4 is optimized for simple fs operations, with things like collocated inode tables. This is a feature btrfs does not have, so you can’t claim btrfs is strictly better than ext4.
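
Re the metadata-in-RAM point: a rough sketch of how one could compare btrfs metadata usage against installed RAM. It shells out to `btrfs filesystem df --raw` and reads MemTotal from /proc/meminfo; the mount point `/mnt/storagenode` is a hypothetical placeholder, and the comparison is only a crude indicator, not the actual cache threshold.

```python
#!/usr/bin/env python3
"""Rough check: could btrfs metadata even fit in RAM? (sketch, not a guarantee)"""
import re
import subprocess

MOUNTPOINT = "/mnt/storagenode"  # hypothetical path, adjust to your node's mount point

def btrfs_metadata_bytes(path: str) -> int:
    # With --raw, `btrfs filesystem df` prints byte counts, e.g.:
    #   Metadata, DUP: total=12884901888, used=10063380480
    out = subprocess.run(
        ["btrfs", "filesystem", "df", "--raw", path],
        capture_output=True, text=True, check=True,
    ).stdout
    used = 0
    for line in out.splitlines():
        if line.startswith("Metadata"):
            m = re.search(r"used=(\d+)", line)
            if m:
                used += int(m.group(1))
    return used  # approximate; DUP keeps two copies on disk

def mem_total_bytes() -> int:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024  # value is reported in kB
    raise RuntimeError("MemTotal not found")

if __name__ == "__main__":
    meta = btrfs_metadata_bytes(MOUNTPOINT)
    ram = mem_total_bytes()
    print(f"metadata used: {meta / 2**30:.1f} GiB, RAM: {ram / 2**30:.1f} GiB")
    if meta > ram:
        print("metadata cannot be fully cached -> expect the performance cliff")
```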
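
Re the synchronous-write point: a minimal sketch of the kind of measurement I mean, timing write+fsync pairs of small records on the btrfs volume. The file path and the 4 KiB record size are assumptions for illustration, not what a storage node actually does.

```python
#!/usr/bin/env python3
"""Minimal sync-write latency probe: time N write+fsync cycles on a test file."""
import os
import time

TEST_FILE = "/mnt/storagenode/fsync-test.bin"  # hypothetical path on the btrfs volume
RECORD = os.urandom(4096)                      # one 4 KiB record per sync
ITERATIONS = 200

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
latencies = []
try:
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        os.write(fd, RECORD)
        os.fsync(fd)                           # force the write to stable storage
        latencies.append(time.perf_counter() - start)
finally:
    os.close(fd)
    os.unlink(TEST_FILE)

latencies.sort()
avg = sum(latencies) / len(latencies)
p99 = latencies[int(len(latencies) * 0.99)]
print(f"avg fsync latency: {avg * 1000:.2f} ms, p99: {p99 * 1000:.2f} ms")
```

Running the same probe on an ext4 volume on the same hardware gives a baseline to compare against.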