Disable file walkers temporarily while migrating

I’m migrating disks, and the file walker is contending with rsync, making everything take at least twice as long. I don’t want to shut down my node, but I would like to do something to speed up this transfer. The old disk is not failing; if it were, I would shut down the node during the transfer.

Does --piece-scan-on-startup=false completely disable the used-space filewalker?

Is there anything else I can add to my docker run command to keep my node running while speeding up my rsync migration?

I’d simply use dd to copy the disk at the block level, but I’d be copying 6TB of blocks for 3.6TB of actual data on the node. It may still be faster than rsync, though…

That will disable the used-space filewalker, but it won’t stop other filewalkers like garbage collection and trash cleanup. (You might be able to skip the used-space scan for now and just trigger one on the new disk once everything is transferred over.)

Otherwise, if you are concerned about your online score but don’t care about node growth, set the allocated disk space to less than the currently used size to stop ingress.
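Something along these lines should cover both, assuming the standard docker setup. The wallet, email, address, and mount paths are placeholders for whatever your node already uses, and it’s worth double-checking the exact flag name (storage2.piece-scan-on-startup) against the current docs:

```bash
# Allocation set below the ~3.6TB already used -> no new ingress.
# The flag after the image name skips the used-space scan on startup.
# Wallet, email, address, and mount paths are placeholders.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -e WALLET="0xYOUR_WALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="node.example.com:28967" \
  -e STORAGE="3.5TB" \
  --mount type=bind,source=/mnt/old-disk/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/old-disk/storagenode,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest \
  --storage2.piece-scan-on-startup=false
```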

Using dd will probably be a lot faster: it’s one sequential read/write instead of dealing with a filesystem and far more random IOPS, although copying files over might defragment things a little.
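For reference, a whole-disk clone is just something like this (device names are placeholders; double-check them with lsblk before running anything destructive):

```bash
# /dev/sdX = old disk, /dev/sdY = new disk -- verify with lsblk before running!
# Copies everything sequentially, partition table included.
dd if=/dev/sdX of=/dev/sdY bs=16M status=progress conv=fsync
```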


I’m going with dd. Disabling the used-space filewalker didn’t help much; even with the nodes shut down, rsync managed less than 5 MB/s. I’m getting over 200 MB/s right now with dd.

As counterintuitive as it sounds, I’m cloning the entire disk, including my personal data. After the clone I’ll delete my personal data from the clone and the Storj data from the original.

What I’m doing differently is cloning the old partition into an LVM volume on the new disk. I’m slowly going to move all my storagenode data into LVM volumes so I can use block-level moves while leaving the other data on the disk alone.
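Roughly like this, assuming the new disk already holds a volume group; the names, sizes, and device paths are placeholders:

```bash
# vg_new = volume group on the new disk, /dev/sdX2 = old storagenode partition (placeholders).
# Make the LV a bit larger than the source partition so the clone fits.
lvcreate -L 4T -n storagenode vg_new
dd if=/dev/sdX2 of=/dev/vg_new/storagenode bs=16M status=progress conv=fsync
```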


Does that work? I believe it would clear the bits of information that allow it to be recognized as an LVM volume…
Perhaps you would need to recover it later:

So, just saying… if you would be forced to go through that later, why not do it beforehand? Then you could use an easy and painless move:

P.S. I’ll answer my own question - in the latter case (converting to LVM before migrating) there is no backup…

An LVM logical volume and a disk partition are both block devices. After creating an LVM LV (larger than the disk partition), I could run dd from the disk partition to the LV directly. Since it was a clone, I then wanted to relabel the filesystem, change its UUID, and enlarge it to fill the LV (e2label, tune2fs, and resize2fs, respectively).
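For anyone following along, the post-clone fixup looks roughly like this on ext4 (the LV path and new label are placeholders matching the sketch above):

```bash
# Check the cloned filesystem first, then relabel, regenerate the UUID,
# and grow it to fill the LV.
e2fsck -f /dev/vg_new/storagenode
e2label /dev/vg_new/storagenode storj-new
tune2fs -U random /dev/vg_new/storagenode
resize2fs /dev/vg_new/storagenode
```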

I haven’t tried converting a disk partition in place into an LVM volume, but it is possible. There is a tool (blocks) for the task, though I haven’t tested it myself. I believe it just automates the steps in the article you linked to.

N.B. When using LVM you can skip partitioning your disks and run pvcreate directly on the whole-disk block device (pvcreate /dev/sda).
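For example (the device name is a placeholder for the new disk):

```bash
# Use the whole disk as the physical volume -- no partition table needed.
pvcreate /dev/sdY
vgcreate vg_new /dev/sdY
```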

Please report the result there.