How to move to a new server

Thank you!
However, I don’t understand why we have to run it again to copy the differences (step 2).
Since the container is still running at this step, why do we need to run it again? Why can’t we stop the container just after the end of the first command?

Because the first command takes a long time. You repeat it to copy over the differences that have accumulated since you started the initial copy, then you shut the server down and create the final state. The second pass reduces that downtime significantly so that it’s on par with lsyncd.
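A minimal sketch of the idea, assuming an rsync-based copy with placeholder paths and host (/mnt/storagenode and user@newhost are examples, not values from this thread):

# pass 1: node still running, copies the bulk of the data (can take hours)
rsync -a /mnt/storagenode/ user@newhost:/mnt/storagenode/
# pass 2: node still running, transfers only what changed during pass 1
rsync -a /mnt/storagenode/ user@newhost:/mnt/storagenode/
# stop the node, then one last short pass for the final differences
rsync -a --delete /mnt/storagenode/ user@newhost:/mnt/storagenode/

Only the last pass happens while the node is offline, so the downtime is roughly the time needed to copy a couple of hours of changes instead of the whole dataset.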

1 Like

Hello all,

I have a Storage Node running on a VM running Ubuntu. The host OS is Windows 10.
For this reason I would like to scrap the VM and use the Windows Storage Node GUI install.

Is there a safe way to migrate the data from the VM to the host and keep the reputation intact?

I’ve read an article on this forum but I’m not sure if this is what I need. (https://documentation.storj.io/resources/faq/migrate-my-node)

Thanks for the feedback.

Cheers,
Tiago

There’s no easy way to copy all the files from a VM, as opposed to running Windows Docker (which runs a VM of its own anyway). You will have to sftp into it, and you will probably run into permission issues when doing so; you will need to give full permissions to be able to download everything. Depending on how much data you have, this could also take a very long time.
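If you do try the sftp route, a rough sketch of what it could look like (the paths and the VM address are hypothetical, and this assumes the OpenSSH client that ships with Windows 10):

# inside the VM: make everything readable by your SSH user
sudo chmod -R a+rX /mnt/storagenode
# on the Windows host: pull the whole tree
scp -r user@192.168.1.50:/mnt/storagenode D:\storagenode

Keep in mind that scp/sftp copy everything every time, so unlike rsync you cannot cheaply repeat the transfer to pick up only the differences.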

How much data is your node storing currently?

The recommendation would be the same - use the rsync tool while the node is running.
Do not run the clone (the copied node) until you finish the migration.
For example:

  • /mnt/storj is your current data location
  • 192.168.1.31 is IP of your new server
  • /mnt/storj_new is a destination on your new server
  • user is your user account on your new server, which has at least write rights to /mnt/storj_new on your new server

The command would look like:

rsync -r /mnt/storj user@192.168.1.31:/mnt/storj_new

You should run this command a few times, until the difference is negligible, then stop and remove the current container and run the rsync one more time, but with a --delete option to delete removed source files from the destination. Then run your node on the new server with the correct parameters. Of course, your identity should be migrated the same way too.
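A sketch of that full sequence with the example values above (the container name "storagenode", the graceful-stop timeout, and the default identity path are assumptions, not something stated in this thread):

# repeat while the node is running, until a pass finishes quickly
rsync -r /mnt/storj user@192.168.1.31:/mnt/storj_new
rsync -r /mnt/storj user@192.168.1.31:/mnt/storj_new
# stop and remove the current container
docker stop -t 300 storagenode
docker rm storagenode
# final pass, removing files that were deleted on the source in the meantime
rsync -r --delete /mnt/storj user@192.168.1.31:/mnt/storj_new
# migrate the identity the same way (source shown is the default Linux location;
# pick whatever destination you will point the new node at)
rsync -r ~/.local/share/storj/identity/storagenode user@192.168.1.31:/mnt/storj_new/identity

After that, run the node on the new server pointing at the new data and identity paths.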

Hi

  I just hit a problem during my last migration to a bigger datastore: I had to migrate ~1.5 TB of data to a new environment and unfortunately it took some time, and all that time the node was offline.
  I was wondering whether it is, or will be, possible to put the node into a readonly/restricted mode in order to sync the data to the new destination. With this, the downtime of the node would be reduced significantly and the data would stay available to the network in readonly mode.

It is very simple to do it another way.
You just install sync software. The first sync takes a long time.
The second sync takes much less time.
Stop the node; the third sync takes about 10 minutes, as it syncs only the new data.
That's all, then start the node in the new location.

1 Like

Vadim

That's the standard migration. Unfortunately for me the last sync did not take 10 minutes; when you have a large amount of data sliced into small files it takes much longer, especially if you take into consideration that not everybody is using the newest hardware for this project.

Kind regards.

Just rsync a few times while the node is running; when the difference is negligible, stop and remove the container and rsync one more time with a --delete option. In that case the last step will not take too long.

2 Likes

Now I know why upgrading a hard disk to a larger size can be so much pain… there are so many small files that need to be rsynced over… a simple rsync of 1 TB takes hours…

Yeah. It seems to be easier to just start another node.

It is even worse if you have to do it online.
I think I have even read somewhere that it would be faster to tar the data and move it that way, but I haven’t tried.
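The tar-over-ssh idea would look roughly like this (untested here, as noted; the host and paths reuse the example from earlier in the thread):

tar -C /mnt -cf - storj | ssh user@192.168.1.31 'tar -C /mnt/storj_new -xf -'

It streams the files in one pass instead of negotiating them one by one, but unlike rsync it cannot be repeated cheaply to pick up only the differences, so the node would have to stay offline for the whole transfer unless you follow up with an rsync pass.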

I use “rsync --inplace -aP” because it preserves all attributes, so subsequent syncs are as fast as possible. The sequence is:

  • rsync
  • rsync again (with --delete)
  • stop the node
  • rsync yet again (with --delete)
  • start the node with the new disk

Frankly anything that does reading and writing in separate processes is a win compared to “cp -r”.
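Spelled out with those flags and the example host/paths from earlier in the thread (still just a sketch):

# two passes while the node is running
rsync --inplace -aP /mnt/storj user@192.168.1.31:/mnt/storj_new
rsync --inplace -aP --delete /mnt/storj user@192.168.1.31:/mnt/storj_new
# stop the node, then the final pass
rsync --inplace -aP --delete /mnt/storj user@192.168.1.31:/mnt/storj_new
# start the node on the new disk

Because -a preserves timestamps and permissions, the later passes can skip unchanged files by metadata alone, which is what keeps them fast.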

If the node doesn’t have enough RAM to keep all the disk metadata in its cache then unfortunately the subsequent syncs will still take quite some time. On my system “du /mnt/storj/blobs” emits approximately three subdirectories per second and there are 5000+ of them, so just walking the tree takes close to half an hour. rsync cannot possibly be faster than that.

rclone is a lot faster in my experience.
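For comparison, a local disk-to-disk rclone run could look like this (a sketch; the parallelism values are just a starting point, and the paths are the earlier example ones):

rclone sync /mnt/storj /mnt/storj_new/storj --transfers 16 --checkers 16 --progress

rclone gets its speed from running many transfers and checks in parallel. Note that, like rsync --delete, rclone sync removes destination files that no longer exist on the source; for a copy to a remote machine you would first configure an sftp remote with rclone config.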

1 Like

Managed to finish syncing my 1 TB disk. It took nearly a day to finish the full sync… the subsequent runs are much faster. Not too hard to do… just the waiting time for the first run is painful.
As Alexey says… spinning up a new node is much less painful…

Page not found. :frowning:

Help!

1 Like