Migrating a node


Better to say: use a “better file system”. Meaning, don’t use BTRFS…


btrfs is a decent file system, just not for Storj purposes. It’s hard to compete with ext4 if your use case doesn’t actually need any of the features specific to btrfs. And, actually, if there were a version of ext4 with no support for hardlinks (keeping inodes directly in the directory tree instead), it would be even faster for storage nodes.


Sometimes I think it would be interesting to see how each FS performs for a storage node: ext2, ext3, ext4, and XFS. I have only tested CoW filesystems like BTRFS and ZFS, and was pretty disappointed with their performance; so far only ext4 performs better.


I just tested a bunch of file systems after having issues with my node. In my experience the best file system to use currently is BTRFS with metadata pinned to a large NVMe cache. After that it would be ext4.

You may check other topics tagged btrfs to see that BTRFS is the worst FS for a storage node at the moment, especially once it has collected more than a TB of data.

I don’t know about BTRFS, but with ext4 I never had a fatal timeout crash, whether at 6 TB or 4 TB stored, both drives on a NAS with 1 GB RAM and no SSD. The machine earns about the same as an 18 GB RAM node with 10 TB stored. The small drives are 8 TB IronWolfs. So my go-to FS is ext4.


I am running 2 nodes side by side right now

EXT4 nvme cached


BTRFS nvme cached with the BTRFS metadata pinned to the nvme cache.

At this point the BTRFS file system is performing about 20% better than the ext4 one. However, the nodes are small. I will continue to monitor and watch their performance. I may very well end up moving the BTRFS node to ext4, but right now it doesn’t seem that way.

Is this pinning a Synology-specific feature?


I don’t believe so. Synology is the only one offering it out of the box, but I’m pretty sure you can use BTRFS DUP for metadata on the NVMe, and that is what Synology calls pinning.

just following up on this thread because it was the most cromulent when I was searching.

I’ve been working on migrating to new disks, and the old disks are btrfs, and it’s been totally the wooorrrssst

The first drive had 2.5 TB of Storj data on it. The initial copy took two weeks. The btrfs source disk was only showing about 2 MB/s of read performance while simply traversing the file tree looking for files to copy. Then, when real files were being transferred, performance fluctuated between 2 MB/s and 6 MB/s. At the tail end there were some bigger spikes in speed, but it was mega slow. Then the final --delete rsync took about a day.

The second drive has 5.5 TB of data. I used the long wait time to “prepare” it as much as possible: did a btrfs defrag, made sure the noatime attribute was set on every file (as well as in the drive mount options), and switched the btrfs metadata profile from DUP to single (no idea if this really changed anything). It was still mega slow, but not quite as bad. The initial copy of 5.5 TB took about a week.
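For anyone following along, those preparation steps might look roughly like this on the command line. This is a sketch, not a verified recipe: /mnt/storj is a placeholder mount point, and all of these need root on an actual btrfs volume.

```shell
# Recursively defragment the data directory.
btrfs filesystem defragment -r /mnt/storj

# Disable atime updates for this mount (add noatime to /etc/fstab to make it permanent).
mount -o remount,noatime /mnt/storj

# Convert the metadata profile from DUP to single.
btrfs balance start -mconvert=single /mnt/storj
```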

Yes, BTRFS is known to be slow for the storage node data, and thus not recommended.

If you had BTRFS on the destination too, you could simply do btrfs send | btrfs receive. The slowness is because you were attempting to copy file by file. Storj creates a lot of small files, so of course most of the time will be spent seeking, even if you have the metadata cached.

Sending the filesystem will just stream everything at max sequential speed.

big mistake.

Oh god…

Read about btrfs send/btrfs receive, and review this: Zfs: moving storagenode to another pool, fast, and with no downtime. It is about ZFS, but with btrfs you can do almost the same thing verbatim. Otherwise, why are you using btrfs if you are ignoring all its features and advantages? Might as well use ext4 then :)
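A minimal sketch of that approach, assuming the node data lives in a subvolume at /mnt/old/storj and the destination btrfs filesystem is mounted at /mnt/new (both paths are hypothetical):

```shell
# btrfs send requires a read-only snapshot as its source.
btrfs subvolume snapshot -r /mnt/old/storj /mnt/old/storj-snap

# Stream the snapshot to the destination filesystem as one sequential transfer,
# instead of seeking through millions of small files one by one.
btrfs send /mnt/old/storj-snap | btrfs receive /mnt/new
```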


Maybe because Syno is pushing btrfs adoption hard, with all its pros and shiny things. And if you don’t dig deeper, or you are a novice, you go with the manufacturer’s recommendations.

btrfs is a fine FS when configured properly. There is nothing inherently wrong with it; it’s far superior to ext4 in pretty much every respect, both features and performance. I saw an article somewhere (could not immediately find it) where an ext4 developer recommended btrfs, not ext4, for all new deployments.

So making btrfs the default is the right move.

But Synology also pushes idiotic configs – like a BTRFS overlay over an mdadm RAID6 on a 4-disk array, on 2 GB devices – which has no chance of working with any decent performance. In this case ext4 would be much preferable – not because it’s better, but because on such a constrained system btrfs is just choking, and a choking btrfs is worse than a freely breathing ext4. That’s the actual problem, not btrfs itself.

Maybe you could share your knowledge and post some recommended settings and ways to cook it properly?


Agreed that the new filesystems are better than the old ones, but only if you use those features. Here we just try to find the sweet spots for storage nodes, and as I see it, simpler basic filesystems like ext4 with no extra toppings are the way to go.
I always chose ext4 on Syno, even when I had no nodes, just because it says “more compatibility”, but I never tried to understand all those extra features other filesystems offer, and I stay away from RAID. I imagine that in case of a system crash I can take the drive out and recover the data on a PC, with older, simpler solutions like basic ext4 and no RAID.

I no longer use Synology, so I won’t be able to test what I write, and therefore it won’t be that useful; but many recommendations for ZFS apply to BTRFS as well, especially for a storage node – things like disabling atime updates and turning off sync.

From memory, specific to btrfs: you might want to mount the volume with the space_cache=v2 and compress=zstd flags, and rebalance the trees periodically (with btrfs balance). Obviously, don’t use raid6 on arrays of fewer than 20-30 disks, and if performance is paramount, raid0, 1, and 10 will behave better than raid5. This is, however, not as important as proper caching and mount configuration.
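As a rough sketch of those suggestions (the device UUID, mount point, and usage thresholds are placeholders, not tested recommendations):

```shell
# Example /etc/fstab line with the mount flags mentioned above:
# UUID=xxxx-xxxx  /mnt/storj  btrfs  noatime,space_cache=v2,compress=zstd  0  0

# Periodic rebalance; the usage filters restrict it to mostly-empty chunks
# so it doesn't rewrite the whole volume every time.
btrfs balance start -dusage=50 -musage=50 /mnt/storj
```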

Attaching a sufficient amount of bcache to fit the metadata is helpful; further, you can set the nobarrier flag to reduce flushes, as long as the storage devices have power-loss protection.

“Better performance” is one of those features; “better space efficiency for small files” is another. Add point-in-time consistency (snapshots) and send/receive functionality. There is no reason to use ext4 when btrfs or zfs is available, supported, and enough resources are available.

This would be pointless. Running a storage node in isolation is neither a viable nor a realistic use case worth optimizing: you are supposed to utilize unused resources on existing arrays. Which implies you already have an array configured to perform the job you need it to perform, in the optimal way. And you then share some space from it with Storj.

So you cannot change the array topology, but you can change filesystem features at subvolume/dataset granularity (there is no such thing with ext4) and control other filesystem options with mount parameters specific to the filesystem.

Well, if you decide not to look into it – you would not know what you are missing :).

This is not just about uptime – any RAID will have some protection against disk failure, but modern filesystems like btrfs and zfs protect not only against cases where the disk reports a failure, but also, via checksumming, against cases where it fails silently (bit rot).

Plus, infinitely better flexibility allows you to tailor your storage performance to a specific task, at the dataset/subvolume level. It’s very powerful.

Unfortunately, but not surprisingly, Synology just plops it on the device, writes an awesome marketing story, and then we get people on forums saying that “btrfs is not recommended and slow” as a result. It’s a Synology problem, not a btrfs problem.

Interesting, TIL there is a “btrfs send” command. Does it require that the receiving filesystem be btrfs as well? In my case I was migrating to a disk with an ext4 filesystem, so maybe that wouldn’t have worked for me.

The old btrfs disk had been used for generic home file storage/backups/etc., so btrfs was fine there, but as the space was gradually consumed by Storj it seemed like plain ext4 on the new home disk would be more appropriate. If a disk is just holding Storj and Chia, I don’t need it to be fancy, I just want it to be fast.

And incidentally, the second transfer of 5.5 TB really did go way faster than the first transfer of 2.5 TB (they each took about a week, despite the size difference). I attribute that speed-up to… something; my guess is doing the btrfs defrag in advance while I had a week to wait before starting. Why the “uh oh” about that?

That’s an interesting recommendation.