How to copy Storagenode Data


I want to copy the data from my node to a new disk. Does anybody know a way to do that fast? Right now I am using rsync, but that's very slow. At this speed I will be done in 11 days, but I only have 5 days, otherwise I will be disqualified, because I am currently suspended due to a disk failure.

Thanks for help

it’s the only way to do it without downtime. Similar options would have similar performance as well. You could stop your node and use a simple copy, which will likely be faster, but your node has to be down for the entire time, so it’s not the preferred option and only possible now because downtime disqualification is currently not active. And you’ll be missing out on income and data on your node may be repaired because pieces are marked as unsafe. So you could also lose data.


If you are currently rsyncing to a USB-connected drive or via the network, I'd recommend going for a native SATA connection instead.

Just a general hint, since no one knows what hardware you’re exactly using :wink:

I used rather slow drives and it took about 24h to clone a ~2TB node with robocopy.


Does robocopy exist for Linux? Because rsync is veeerrryy slow … it scans every file

I moved >6TB to a new 10TB drive using rsync (Ubuntu 20.04). Create a script for

time rsync -aP --inplace --delete /mnt/stgpool1/storjFarm/ /mnt/hangten/storjFarm/

time was helpful for tracking how long rsync took. I kept running the script until rsync only took a couple of mins. Then I changed storagenode to point to the new drive.
--inplace speeds up rsync by updating changed files directly in place instead of building temporary copies first, which helps a lot with many small files.
--delete removes files (Storj shards) from the target after the storagenode discarded them on the source drive. Otherwise you'll end up with lots of stranded shards on the target.
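The "create a script" step might look like this (just a sketch; the /mnt/... mount points are the poster's examples and need to be replaced with your own paths):

```shell
#!/bin/sh
# Write the sync one-liner into a small script so it can be re-run
# easily between passes; paths are the example mounts from the post.
cat > sync-node.sh <<'EOF'
#!/bin/sh
# time reports how long the pass took; keep re-running the script
# until a pass only takes a couple of minutes.
time rsync -aP --inplace --delete /mnt/stgpool1/storjFarm/ /mnt/hangten/storjFarm/
EOF
chmod +x sync-node.sh
```

Each run only transfers what changed since the previous pass, so the reported time shrinks with every iteration.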


Ok thanks, but I didn't understand the point with the storagenode.
Can I run the node on the new drive without full access to the full data?

The copy process is slow because of those small file sizes; robocopy might not be any quicker.
Please refer to "How to migrate a node" in the Storj documentation.
You run an rsync, taking 11 days, while letting the node run. When it finishes, you repeat it; it only syncs the changed files, so let's say this pass takes only 2 days. Then you do it again, taking a few hours. You repeat this until the run time stops shrinking; since your ingress is limited, the new data won't amount to much.
When the iteration is done, you stop the node and do a final rsync with the --delete option, so it clears out data on the target that was deleted on the source in the meantime. After that sync you're done.

If your data ingress (= database change) is faster than you can sync, then your HDD speed is too low for Storj anyway…
I have moved an 8TB node via SATA-to-SATA with no problems. Using USB, on the other hand, could be a problem.

Nope, it seems rsync is the way to go for Linux users :slight_smile:

The problem for me is that it is not possible to run the node and copy at the same time, because my server crashed and I am copying on a different PC. Will I get disqualified after 7 days when I am suspended and my node is offline?

Hello please answer me

Dq is not in effect right now

Sorry what does dq mean ?

I would like to point out to you that the people currently helping you are doing so in their own unpaid time.

You currently won’t be disqualified (DQ) for downtime or for staying in suspension for too long. That last one may change after the next version is completely rolled out, though. In general my advice is to avoid downtime anyway, since that is how you will have to manage your node in the future as well. But if you see no way of doing that, you should be fine right now.


Since your node is down, why not clone the drive? Some Linux guru please verify this, but something like:

dd if=/dev/sda1 of=/dev/sdb1

For Windows there is plenty of cloning s/w.

His source drive is probably so slow because it is dying; in that case, imaging wouldn't necessarily speed up the process compared to copying file by file.

No, it will be disqualified within a few minutes. Only one node (with the same identity) may be online at any given time, and if it does not have the full data, it will be disqualified.
So you can run the source node while rsyncing, but you must stop and remove it before running the clone; then rsync with the --delete option one last time, then run the clone.

It sounds like the HD is not the cause of the crash. If that is so, can you not just connect the old HD to the PC and start the node? I think we are still missing a piece of the info puzzle.

Suggest adding a decent block size as it defaults to 512 bytes. Something like
dd if=/dev/sda1 of=/dev/sdb1 bs=1M


Good thinking, bat man!

We also need count=something, so dd knows when to stop.

dd finishes either when it reaches the end of the source or the end of the destination, whichever comes first.
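Those flags can be demonstrated on a small temp file so the sketch runs anywhere without touching real disks (for a real clone, if= and of= would be the source and target partitions, e.g. /dev/sda1 and /dev/sdb1 as in the posts above):

```shell
#!/bin/sh
# dd clone sketch on throwaway temp files instead of block devices.
set -e
SRC=$(mktemp)
DST=$(mktemp)

# 4 MiB of zeros stands in for the source partition.
dd if=/dev/zero of="$SRC" bs=1M count=4 2>/dev/null

# bs=1M reads and writes in 1 MiB chunks instead of the 512-byte
# default. No count= is needed for a clone: dd stops on its own at
# the end of the input.
dd if="$SRC" of="$DST" bs=1M 2>/dev/null

cmp "$SRC" "$DST" && echo "clone matches"
```

Note that a raw dd clone copies the whole partition, free space included, which is why a dying source drive may not finish any faster this way than with a file-by-file copy.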