Migrating a node

btrfs is a fine fs, when configured properly. There is nothing wrong with it inherently; it’s far superior to ext4 in pretty much every respect, both in features and in performance. I saw an article somewhere (could not immediately find it) where an ext4 developer recommended btrfs for all new deployments, not ext4.

So making btrfs the default is the right move.

But Synology also pushes idiotic configs – like a btrfs overlay over an mdadm RAID6 on a 4-disk array, on devices with 2 GB of RAM – this has no chance of working with any decent performance. In this case, ext4 would be much preferable – not because it’s better, but because on such a constrained system btrfs is just choking, and a choking btrfs is worse than a freely breathing ext4. That’s the actual problem, not btrfs itself.

Maybe you could share your knowledge and post some recommended settings and ways to cook it properly?


Agree that the newer filesystems are better than the old ones, but only if you use those features. Here we just try to find the sweet spots for storagenodes, and as I see it, simpler basic filesystems like ext4 with no extra toppings are the way to go.
I always chose ext4 on Syno, even when I had no nodes, just because it says “more compatibility”, but I never tried to understand all those extra features the other filesystems offer, and I stay away from RAID. I imagine that in case of a system crash, I can take the drive out and recover the data on a PC with older, simpler solutions like basic ext4 and no RAID.

I no longer use Synology, so I won’t be able to test what I write, and therefore it won’t be that useful, but many recommendations for ZFS apply to BTRFS as well, especially when it comes to storage nodes – stuff like disabling atime updates and turning off sync.

From memory, specific to btrfs: you might want to mount the volume with the space_cache=v2 and compress=zstd flags, and rebalance the trees periodically (with btrfs balance). Obviously, don’t use raid6 on arrays of fewer than 20–30 disks, and if performance is paramount – raid0, 1, and 10 will behave better than raid5. This is, however, not as important as proper caching and mount configuration.
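As a rough sketch of what I mean (the filesystem UUID and mountpoint are placeholders, adjust to your pool):

    # /etc/fstab – noatime plus the btrfs-specific flags mentioned above
    UUID=your-fs-uuid  /mnt/pool  btrfs  noatime,space_cache=v2,compress=zstd  0  0

    # periodic rebalance, limited to half-empty chunks so it stays cheap
    btrfs balance start -dusage=50 -musage=50 /mnt/pool

The usage filters are just a starting point – a full balance is rarely needed and takes much longer.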

Attaching a sufficient amount of bcache to fit the metadata is helpful; further, you can set the nobarrier flag to reduce flushes, as long as there is power protection for the storage devices.
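A rough sketch of the bcache part (device names are examples only; nobarrier needs power-loss protection and may not be accepted on newer kernels):

    # SSD partition as cache device, HDD as backing device
    make-bcache -C /dev/nvme0n1p1
    make-bcache -B /dev/sda
    # attach the cache set to the backing device (UUID from bcache-super-show)
    echo "$CSET_UUID" > /sys/block/bcache0/bcache/attach

    # the filesystem then goes on top of the bcache device
    mkfs.btrfs /dev/bcache0
    mount -o noatime,nobarrier /dev/bcache0 /mnt/pool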

“Better performance” is one of those features, “better space efficiency for small files” is another, and so are point-in-time consistency (snapshots) and send/receive functionality. There is no reason to use ext4 when btrfs or zfs is available and supported, and enough resources are available.
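For illustration, migrating a node with send/receive is roughly this (both source and destination have to be btrfs; paths are made up):

    # read-only snapshot of the node's subvolume
    btrfs subvolume snapshot -r /mnt/old/storagenode /mnt/old/storagenode@migrate
    # stream it to the new filesystem
    btrfs send /mnt/old/storagenode@migrate | btrfs receive /mnt/new/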

This would be pointless. Running a storagenode in isolation is neither a viable nor a realistic use case worth optimizing for: you are supposed to utilize unused resources on existing arrays. Which implies you already have an array configured to perform the job you need it to perform in the optimal way. And you then share some space from it with storj.

So you cannot change the array topology, but you can change filesystem features at subvolume/dataset granularity (there is no such thing with ext4) and control other filesystem options with mount parameters specific to the filesystem.
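On btrfs that granularity looks roughly like this (subvolume name and paths are just examples):

    # dedicated subvolume for the node's data
    btrfs subvolume create /mnt/pool/storagenode
    # per-subvolume compression, without touching the rest of the pool
    btrfs property set /mnt/pool/storagenode compression zstd
    # or disable copy-on-write for this tree only (affects newly created files)
    chattr +C /mnt/pool/storagenode

Note that +C also turns off checksumming for those files, so pick one or the other depending on what matters more for that dataset.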

Well, if you decide not to look into it – you would not know what you are missing :).

This is not just about uptime – any raid will have some protection against disk failure, but modern filesystems like btrfs and zfs protect not only against cases where the disk reports a failure, but also, via checksumming, against cases where it fails silently (bit rot).
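That silent-corruption detection is what a periodic scrub exercises; for example (the mountpoint is a placeholder):

    # verify checksums of all data and metadata; on redundant profiles
    # (raid1/raid10) bad copies are repaired from the good ones
    btrfs scrub start /mnt/pool
    btrfs scrub status /mnt/pool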

Plus, the infinitely better flexibility allows you to tailor your storage performance to a specific task, at the dataset/subvolume level. It’s very powerful.

Unfortunately, but not surprisingly, Synology just plops it on the device, writes an awesome marketing story, and then we get people on forums saying that “btrfs is not recommended and slow” as a result. It’s a Synology problem, not a btrfs one.

Interesting, TIL there is a “btrfs send” command. Does it require that the receiving filesystem be btrfs as well? In my case I was attempting to migrate to a disk with an ext4 filesystem, so maybe that wouldn’t have worked for me.

The old btrfs disk had been used for generic home file storage/backups/etc, and so btrfs was fine, but as the space was gradually consumed by storj it seemed like just having ext4 on the new home disk would be more appropriate. If a disk is just holding storj and chia, I don’t need it to be fancy, I just want it to be fast.

And incidentally, the second transfer of 5.5 TB really did go way faster than the first transfer of 2.5 TB (they each took about a week, despite the size difference). So I attribute that speedup to… something; I would guess doing the btrfs defrag in advance, while I had a week to wait before starting… why the ‘uh oh’ for that?
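For reference, a recursive defrag of that kind would be something like this (the path is a placeholder, and note that defragmenting unshares extents from any existing snapshots):

    # defragment the node's directory tree, verbose output
    btrfs filesystem defragment -r -v /mnt/disk/storagenode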

That’s an interesting recommendation.