How to move to a new server

I’ve got an existing server built and running, currently online. I also have a new server with a larger, mirrored drive.

I set everything up on the new server, stopped the old server, and rsynced the data from old to new. I then started the new server, but it would restart every few seconds. Unfortunately, I got frustrated and wiped the server before copying the logs.

Are there any other steps that are needed to move the node over?

Have you followed this?

https://documentation.storj.io/resources/frequently-asked-questions#how-do-i-migrate-my-node-to-a-new-drive-or-computer
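
For reference, the flow in that FAQ boils down to something like this (a rough sketch; the paths are placeholders for wherever your identity and data actually live):

    docker stop -t 300 storagenode
    docker rm storagenode
    # copy BOTH the identity and the storage data to the new location
    rsync -a /old/identity/storagenode/ /new/identity/storagenode/
    rsync -a /old/storagenode/ /new/storagenode/
    # then run the container again with the mounts pointing at the new paths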

Just some unsolicited advice: don’t take any steps when you’re frustrated. Give yourself time to cool down.

Yeah, I copied over the data as well as the identity folder, and updated my Docker launch command. Is that all there is? The error I received was that database tables were missing (if memory serves).

Yes, and that works just fine. I hope you didn’t stop your node while it was upgrading to the latest version, which split the database file into multiple smaller ones. Give it another try, and this time post the error message so someone here can help you diagnose the issue.

OK, so I tried this again today. I’m running virtual machines, with all Storj data stored on a second hard disk.

docker stop -t 300 storagenode
cloned the VM
created a new, larger second drive on the new VM
rsynced all /storagenode data to the new VM
powered off the old VM
powered on the new VM with the same IP address
docker rm storagenode
docker run -d --restart unless-stopped -p 28967:28…
At this point it doesn’t start. Here’s the error message:

   2019-10-10T16:35:09.889Z        INFO    Public server started on [::]:28967
   2019-10-10T16:35:09.889Z        INFO    Private server started on 127.0.0.1:7778
   2019-10-10T16:35:09.894Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      sending {"count": 2}
   2019-10-10T16:35:09.894Z        ERROR   version Failed to do periodic version check: Get https://version.storj.io: context canceled
   2019-10-10T16:35:09.895Z        INFO    orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      finished
   2019-10-10T16:35:09.895Z        ERROR   orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs      failed to settle orders  {"error": "order: unable to connect to the satellite: rpccompat: context canceled", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context canceled\n\tstorj.io/storj/storagenode/orders.(*Service).settle:257\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:196\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:175\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
   2019-10-10T16:35:09.895Z        INFO    orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW       sending {"count": 170}
   2019-10-10T16:35:09.895Z        INFO    orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW       finished
   2019-10-10T16:35:09.895Z        ERROR   orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW       failed to settle orders  {"error": "order: unable to connect to the satellite: rpccompat: context canceled", "errorVerbose": "order: unable to connect to the satellite: rpccompat: context canceled\n\tstorj.io/storj/storagenode/orders.(*Service).settle:257\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:196\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:175\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
   2019-10-10T16:35:10.000Z        FATAL   Unrecoverable error     {"error": "bandwidthdb error: database disk image is malformed", "errorVerbose": "bandwidthdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Summary:120\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).MonthSummary:73\n\tstorj.io/storj/storagenode/monitor.(*Service).usedBandwidth:174\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:83\n\tstorj.io/storj/storagenode.(*Peer).Run.func5:409\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
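
The FATAL line is the real problem: “database disk image is malformed” means one of the node’s SQLite databases was copied in an inconsistent state, which is what happens when files are rsynced while the node is still writing to them. If it happens again, you can check the file directly before starting the container; the path below is an example, since the .db files sit in the storage directory you mount into the container:

    sqlite3 /mnt/storagenode/bandwidth.db "PRAGMA integrity_check;"
    # a healthy database prints "ok"; anything else confirms the copy is inconsistent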

I should add that when this didn’t work, I simply powered down the new VM, re-IP’d the old one, and turned it back on. I’m up and running again.

I just need to find a way to copy my info to a new computer / VM.

Are the old and new systems using the same CPU architecture etc.?

Yes, they both have dual 5670 CPUs. Different amounts of RAM, though.

After the first rsync you should stop and remove the container, then rsync again with the --delete option.
After that, the remaining files will be fully synced.
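
In other words, the final pass runs while the node is fully stopped, something like this (paths are placeholders):

    docker stop -t 300 storagenode
    docker rm storagenode
    # --delete removes files at the destination that no longer exist at the source,
    # so the destination ends up an exact mirror
    rsync -a --delete /mnt/old/storagenode/ /mnt/new/storagenode/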

Ahh, OK. I did stop the container and rsync, but I did not stop and remove the container and then rsync again.

I had also not used the --delete option; that makes sense.

This worked right away. I’m up and running on my new VM.

Shut down and remove the old container, rsync with --delete from old to new, then fire it up on the new VM.

Thanks

Hi all,

Didn’t find the answer in the FAQ, so I’m posting here. I’m in the process of migrating a storage node from one VPS to another. Both run Ubuntu and have different IPs.

The problem is that the data transfer may take a while. It took about 2 hours to copy 85 GB of data out of 600 GB. I’m worried about losing the node’s reputation and getting banned for being offline for too long.

Or is it OK to copy the files while the Storj node is running?

You can use rsync to copy the files, then run it again to copy the differences. After that, stop the old node, run rsync once more with the --delete option to make it 100% up to date, and start the new node. The downtime should be minimal that way, though the copy will still take a while.

Thanks for your prompt reply. Will try that and post the results here once I finish the migration.

I have done it the same way @BrightSilence describes.
It works perfectly.
And as he says, it does take some time.

I have successfully migrated my node from one VPS to another. Posting here the commands I used just in case anyone else might be in a similar situation:

  1. On the old VPS, as the root user, copied the files to the new VPS (time-consuming; don’t stop the node just yet)
    rsync --ignore-existing -raz /home/user/storj/ root@49.49.49.49:/home/user/storj/

  2. Ran it again to copy differences
    rsync --ignore-existing -raz /home/user/storj/ root@49.49.49.49:/home/user/storj/

  3. Stopped the old node and removed its container
    docker stop -t 300 storagenode
    docker rm storagenode

  4. Copied the files again, this time with the --delete option. This deletes files in the destination directory that don’t exist in the source directory.
    rsync --delete -raz /home/user/storj/ root@49.49.49.49:/home/user/storj/

  5. Started the Storj node Docker container with the parameters of the new VPS (IP, disk space, bandwidth)
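
For step 5, the command on the new VPS looks something like the one below; the wallet, email, address, sizes, image tag, and mount paths here are placeholders, so reuse the values from your original run command:

    docker run -d --restart unless-stopped -p 28967:28967 \
        -e WALLET="0x..." \
        -e EMAIL="you@example.com" \
        -e ADDRESS="new.vps.ip.here:28967" \
        -e BANDWIDTH="20TB" \
        -e STORAGE="2TB" \
        --mount type=bind,source=/home/user/storj/identity,destination=/app/identity \
        --mount type=bind,source=/home/user/storj,destination=/app/config \
        --name storagenode storjlabs/storagenode:beta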

Thanks for your help, guys!

How do I copy the data? Which folders should I copy? And will simply copying the data and starting the Docker container with the new storage location work?

Please move the identity and data.

I just moved my data directory to a different hard drive and want to share with you how I did it.
The solution Alexey provides works, but it takes rsync a relatively long time on the second run to scan all the files and find the ones that are left to copy. I am talking about several hours, during which the node of course must be offline.

A better solution is to use a tool called lsyncd.

It simply mirrors the directory and syncs everything live.
For a migration you would proceed like this:

  • set up lsyncd
  • wait for lsyncd to copy most of the files (several hours)
  • stop storagenode
  • wait for lsyncd to copy the rest (few minutes if not seconds)
  • stop lsyncd
  • set up storagenode with new data directory

This way you get minimal downtime.
lsyncd has a lot of options; I recommend reading the manual and working with a configuration file.
It can also operate between two computers and use rsync for the copying process.
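
For the local disk-to-disk case, a minimal setup could look like the sketch below (the paths are examples; lsyncd can also target a remote machine using the same rsync mechanism):

    cat > /etc/lsyncd/lsyncd.conf.lua <<'EOF'
    -- mirror the old storage directory to the new one, live
    settings {
        logfile    = "/var/log/lsyncd.log",
        statusFile = "/var/log/lsyncd.status",
    }
    sync {
        default.rsync,
        source = "/mnt/olddisk/storagenode/",
        target = "/mnt/newdisk/storagenode/",
    }
    EOF
    lsyncd /etc/lsyncd/lsyncd.conf.lua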

Welcome to the Storj Forum @Aldabro

@Alexey or others can definitely point out whether this option is better than rsync. Thank you for sharing!