Quickest way to move multiple nodes to new disks

I’m in the process of moving 4 nodes ranging from 4 TB to 10 TB (used space) to new disks. The current disks are XFS, the new disks are ext4. The new disks will be placed in a new server, but for the move I’ve put them in the old server for a local rsync. I’m following this guide: How do I migrate my node to a new device? - Storj Docs

I am looking for a way to speed up this process, especially the initial sync is taking a long time. I have about three weeks to move the four nodes to their new disk. Is there a way to speed up the initial sync? I have already configured the nodes to have overusage so no new data comes in while the sync is running.

Stop the nodes and copy the data. Cloning would be faster, but I don’t know whether it works between different filesystems.
If the system is powerful enough, you can do the nodes simultaneously; otherwise do them one by one.
For me, migrating one node from internal SATA to an external USB 3.0 controller took about 1.2 days per TB.

Do you really need to change FS? I mean, both XFS and EXT4 are stable FSes.

Cloning these disks will take a day or two per disk. Rsync, on the other hand, may take a week or more per disk, depending on the quality and type of disks.

Reading the forum for the past 3 years, I can definitely say: go with ext4 even if it means staying offline a few days. You have 12 days until you get suspended, and 30 until DQ.


No, I don’t really have to change. I’ve had really good performance and experience over the last few years with the XFS nodes. But considering this will be a big step up for the nodes in terms of size and resiliency, I thought this would be the best time to optimize where I could. From what I’ve seen on the forum here, and from my research for best file systems for a lot of small files, ext4 seems to be the preferred file system. That rules out cloning unfortunately.

Will stopping the nodes significantly improve the rsync? Remember, the nodes are already no longer receiving new data. Or do you mean using cp instead of rsync while the nodes are down?

The server is more than fast enough to handle multiple transfers at once, with a dedicated LSI HBA for the nodes and the new hard drives.


I think you should go offline with your nodes and then copy them in one rush.

My experience with rsync is very, very mixed. In my view it is not well suited for this kind of task: no parallelism, no knowledge of what changed between runs, and it handles huge numbers of small files poorly.
But sometimes it surprises me.
If your destination is completely empty, rsync can be really fast. But don’t keep the node running: the subsequent runs to check what changed while it was running can eat up your initial speed advantage.

You can try to run several instances of Rsync in parallel. Meaning you run one for each satellite folder. So that would give you 4 for the blobs and 4 for the trash. But then you need to be careful that the correct files land in the correct folders. A final Rsync run over the full copy is mandatory then.

I played around with something different. Something like:

{ time -p (nohup sh -c 'ls /path/to/source/storagenode/storage/blobs | xargs -n 1 -P 32 -I{} sh -c "mkdir -p /path/to/destination/storagenode/storage/blobs/{} && tar cf - --sort=name --ignore-failed-read -C /path/to/source/storagenode/storage/blobs/{} . | tar xf - --keep-newer-files -C /path/to/destination/storagenode/storage/blobs/{}"' > /root/nohup_tar_.out 2>&1) >> /root/nohup_tar_.out 2>&1 & }

Where the xargs -P setting controls the number of parallel tasks, one tar pipeline per satellite folder.
It had some hiccups, but it did work. I had it running when rsync took ages for a single satellite folder, and I used it to copy the other satellite folders while rsync was busy with that one.

Also at the end, a final Rsync run would be mandatory to catch all the things that might have been missed.


From my experience your 4 TB disks will do OK and take a few days. I gave up on rsync for anything larger and reverted to an offline sector copy, then expanded the filesystem when copying to a larger drive.


I haven’t seen much trouble with XFS so far. The step you’re taking to convert to ext4 implies a risk. I actually think it isn’t worth it, but that’s up to you.
But I really doubt whether it’s actually an optimization.

It doesn’t serve ingress, but it still serves egress, runs the walkers, trash cleanup, etc.
I don’t know Linux. I moved from Windows.

If you are moving to ext4 from XFS, I would also recommend setting it up on LVM.


You could exclude the trash folder for the first passes, hoping that it will have cleared out by the time you do the last rsync passes.

I would also use single disk volume groups on the new disks. That will make moving the nodes easier and faster the next time you move to a new disk.
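A sketch of that layout, assuming a hypothetical new disk at /dev/sdX (run as root; this wipes the disk, so all names here are placeholders to adjust):

```shell
# One PV, one VG, one LV per disk: a later migration is then just
# vgextend + pvmove + vgreduce, with the node staying online.
pvcreate /dev/sdX
vgcreate vg_node1 /dev/sdX
lvcreate -n lv_node1 -l 100%FREE vg_node1
mkfs.ext4 -m 0 /dev/vg_node1/lv_node1   # -m 0: no root-reserved blocks
mount /dev/vg_node1/lv_node1 /mnt/new-node1
```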


That means keeping the same FS. But that’s what I do too.

Exact same experience; I try to avoid it like the plague if possible. But since it has more options than cp…

Another way I sometimes use for cases like this is to first make a new expandable filesystem on the biggest SSD I have lying around. I copy as many files as possible to that new filesystem (considerably faster than HDD). Then I copy the filesystem as an image to the new HDD (also considerably fast) and expand it. So for example, in the case of 6TB HDD XFS > 10TB HDD ext4 with a 2TB SSD:

  1. Format the SSD as ext4 (look for ext4 optimizations on this forum).
  2. Mount the 6TB HDD and the SSD.
  3. Rsync the data from the 6TB HDD to the SSD (fast) > this will eventually fail due to lack of space.
  4. Unmount the SSD.
  5. Copy the SSD partition to the 10TB HDD, for example with dd, and expand it to use the whole disk (fast).
  6. Mount the 10TB HDD.
  7. Complete the rsync between both disks (incredibly slow).
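The middle steps could look roughly like this, with hypothetical device names (/dev/sdb1 = SSD partition, /dev/sdc1 = partition on the new 10TB HDD); all of it is destructive, so double-check the names on your system:

```shell
# Step 3: bulk copy to the fast SSD; this is expected to fail with
# "no space left on device" once the 2TB SSD fills up.
rsync -a /mnt/old-node/ /mnt/ssd/

# Steps 4-5: unmount, raw-copy the SSD partition onto the new HDD's
# partition, then grow the ext4 filesystem to fill it.
umount /mnt/ssd
dd if=/dev/sdb1 of=/dev/sdc1 bs=64M status=progress
e2fsck -f /dev/sdc1   # resize2fs requires a clean filesystem check first
resize2fs /dev/sdc1

# Steps 6-7: mount the grown filesystem and finish with a full rsync.
mount /dev/sdc1 /mnt/new-node
rsync -a /mnt/old-node/ /mnt/new-node/
```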

Why? If it’s only one disk per node and the whole disk is being used, then it only complicates things. It should only be considered in combination with RAID, which I don’t use in the context of Storj, because Storj takes care of its own redundancy.

Explain? Sounds like not Linux…
Besides, … see above.

It is very much Linux. With LVM you can move the node to another disk online, with zero downtime, if you ever need to move it again.


@jammerdan I’ll try your suggestion this Monday; it seems (at least for the initial sync) to speed up the copy immensely, thanks!

As for the LVM suggestion, it’s very interesting and I might consider it, especially given how long the current migration will take. Simply starting a pvmove would make it a lot easier. But considering that I’ll be switching to ext4, and I assume (assumptions… the mother of all f***ups…) I won’t be moving away from it anytime soon, a dd clone will also be an option in the future, and that will be significantly faster. So that may diminish the added value of LVM.


I would recommend partclone.ext4 over dd
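For reference, a device-to-device clone with partclone only copies blocks that are actually in use, which is what makes it faster than dd on a partially filled disk. Device names here are placeholders:

```shell
# -b: device-to-device clone, -s: source partition, -o: output partition.
# Only used ext4 blocks are copied; free space is skipped entirely.
partclone.ext4 -b -s /dev/sdX1 -o /dev/sdY1
```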


Yes, that matches my experience. If the destination folder is empty, this is really fast.
If it has to compare, then not so fast; that’s where rsync was really good.

Maybe the best of both worlds would be to run rsync in parallel, something like this:

ls /srv/mail | xargs -n1 -P4 -I% rsync -Pa % myserver.com:/srv/mail/
ls /srv/mail | parallel -v -j8 rsync -raz --progress {} myserver.com:/srv/mail/{}

But I haven’t tried that yet.

Because you can do a simple pvmove without using any rsync:
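A sketch of that online move, assuming a hypothetical setup where the node’s volume group is vg_node1, the old PV is /dev/sdX and the new disk is /dev/sdY:

```shell
# Add the new disk to the volume group, migrate all extents off the old
# PV while the node keeps running, then drop the old disk from the VG.
pvcreate /dev/sdY
vgextend vg_node1 /dev/sdY
pvmove /dev/sdX /dev/sdY   # can be interrupted and resumed with plain `pvmove`
vgreduce vg_node1 /dev/sdX
pvremove /dev/sdX
```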

Yup, only if it’s the same FS.

Yes, this will be an equivalent of dd, but online.

Today most of the trash on my nodes was deleted, so I prepared the new disks with LVM and ext4, and started copying the data of the first two nodes after taking them offline, using the command @jammerdan mentioned earlier.

Based on the initial statistics this gives me an initial transfer speed of about 100 GB per hour, or ~28 MB/s.

Thanks everyone for the suggestions so far! I was especially happy with the LVM suggestion, as it will make moving to a new disk in the future far easier (and faster) with pvmove. I’ll let you all know how it went once the move is complete.