ZFS: moving a storagenode to another pool, fast and with no downtime

Have you tried this? A zfs send from a full 10 TB node copies at roughly 1 TB/day. If your node keeps running while you copy, the second, incremental snapshot will transfer at least twice as slowly.
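For reference, the send/recv flow being discussed looks roughly like this (pool and dataset names are made up; adjust to your layout):

```shell
# Initial bulk copy while the node keeps running.
zfs snapshot tank/storagenode@migrate1
zfs send tank/storagenode@migrate1 | zfs recv newpool/storagenode

# Stop the node, take a second snapshot, and send only the delta.
zfs snapshot tank/storagenode@migrate2
zfs send -i @migrate1 tank/storagenode@migrate2 | zfs recv newpool/storagenode
```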

For ZFS, it is faster to just add the new disk as a mirror and let it resilver in the background.
Once it finishes resilvering, detach the old drive and expand the pool onto the new disk.
No more messing with zfs send and recv.
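The mirror trick, sketched out (pool name and device paths are examples):

```shell
# Assuming a single-disk pool "tank".
zpool attach tank /dev/old-disk /dev/new-disk   # add the new disk as a mirror leg
zpool status tank                               # wait until the resilver completes
zpool detach tank /dev/old-disk                 # drop the old disk from the mirror
zpool online -e tank /dev/new-disk              # grow the pool to the new disk's size
```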

For LVM, add the new disk and use pvmove.
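Something like this (device names are examples; the volume group is assumed to be "vg0"):

```shell
pvcreate /dev/new-disk               # prepare the new disk as a physical volume
vgextend vg0 /dev/new-disk           # add it to the volume group
pvmove /dev/old-disk /dev/new-disk   # move extents online; safe to interrupt and resume
vgreduce vg0 /dev/old-disk           # remove the old disk from the volume group
pvremove /dev/old-disk               # wipe the PV label from the old disk
```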

3 Likes

Well, when making technical decisions you can’t go by popular vote :slight_smile:. ext4 still has its place on low-power IoT devices, where memory is constrained. For anything half-serious it’s too limiting.

It can probably be made to work, but you are still going to be moving files one by one – and the officially recommended approach with rsync already does that without adding filesystem complexity.

Why is that?

Either way, the speed of the copy is irrelevant, precisely because the node keeps running. It does not matter how long it takes; the final sync, the only one done with the node off, is very short – a couple of seconds. I use 4 passes.

Only ZFS and LVM allow a safe live migration as far as I know; mergerfs is the worst option of all.

1 Like

Interesting. mergerfs is slow at opening files, as it has to verify the existence of each file being opened, whether for read or write, on all of its branches. I wouldn’t be surprised if migration went faster the usual way with rsync. No hard data though.

You’d usually disable uploads for the duration of the migration… and hopefully GC as well.
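One common way to disable uploads (assumption: a Docker-based node with a config.yaml; path and container name are examples) is to temporarily shrink the allocated space below what is already used, so the node reports itself as full and stops accepting new pieces:

```shell
# Set the allocation below current usage, then restart to pick it up.
sed -i 's/^storage.allocated-disk-space:.*/storage.allocated-disk-space: 500.00 GB/' \
    /mnt/old/storagenode/config.yaml
docker restart storagenode
```

Remember to restore the original value once the migration is done.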

Younger than zfs.

It depends on whether you see it as a new filesystem or just a further development of ext3. I prefer the latter version of the story.

It depends on your configuration. You can also configure ff (first-found) for reads; I wouldn’t do that for deletions though.
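For illustration, a mergerfs mount along those lines (branch and mountpoint paths are made up; check the mergerfs docs before relying on any of these options):

```shell
# category.search=ff makes lookups stop at the first branch where the
# file is found, instead of checking every branch.
mergerfs -o cache.files=off,category.search=ff /mnt/disk1:/mnt/disk2 /mnt/pool
```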

2 Likes

No it’s not; work on ext3 started about 3 years before ZFS was even an idea.

1 Like