Hi, I was just wondering if anyone can help: I want to move my node to a new location, meaning a new server. How can I move it smoothly without downtime, and how do I move all the data to the new server?
There will always be some downtime, but you can minimise it.
What @Tropic pointed out is a good start.
If you're moving to a new location that isn't far away (same city… even if it's Los Angeles or Tokyo) and you only have a couple hundred gigs on the hard drive, the best option is to physically take the drive to the new location.
Otherwise (especially when moving to a bigger disk), use rsync while the node is running. When rsync has done its task, stop the node on the old end and rsync again; this time it will transfer only the changes that happened during the first sync. Then start the node in the new location with fully synced data.
To minimise downtime even more, you can rsync many times while the node is running and do only the last sync on the stopped node. Happy Storjing!
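As a rough sketch, assuming the node data lives under /mnt/storagenode and the new machine is reachable as storj-new over SSH (both are just placeholders for your own paths and host), the multi-pass approach looks something like this:

```
# First pass while the node is still running (this can take days on a multi-TB node)
rsync -aHP /mnt/storagenode/ root@storj-new:/mnt/storagenode/

# Repeat as often as you like; each pass only transfers what changed since the last one
rsync -aHP /mnt/storagenode/ root@storj-new:/mnt/storagenode/

# Stop the node (docker example; adjust if you run the binary or a service),
# then do one final pass with --delete so pieces removed in the meantime are cleaned up too
docker stop -t 300 storagenode
rsync -aHP --delete /mnt/storagenode/ root@storj-new:/mnt/storagenode/

# Then start the node on the NEW server only; never run both copies at the same time
```

The trailing slashes matter: without them rsync creates an extra nested directory level at the destination.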
What about zipping it, pulling it with wget from the new server, and unzipping there?
I think that would take ages if you have a lot of TB stored already (and millions of files).
Besides, STORJ files are practically incompressible because they're encrypted, so I doubt zipping them would help.
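You can check this yourself on any piece file; the path below is just an example of where a piece might sit in the blobs directory, adjust it to your own node:

```
# Example path only; point this at any .sj1 piece file in your node's blobs directory
f=/mnt/storagenode/storage/blobs/some_satellite_folder/aa/example.sj1

gzip -9 -c "$f" > /tmp/piece.gz   # compress a copy at the highest level
ls -l "$f" /tmp/piece.gz          # the compressed copy ends up roughly the same size
```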
Maybe multi-threading with rar from the command shell? Hmm, but yes, I'll have to give zipping it a try…
Using rsync, or robocopy with the /MIR switch, you can copy the contents of your running node.
If you want to zip it, you would have to stop the node first, which causes longer downtime.
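If you go the robocopy route, a rough sketch looks something like this (drive letters and folder names are placeholders; as with rsync, run it repeatedly while the node is up and do only the final pass with the node stopped):

```
REM /MIR mirrors the tree (including deletions), /MT copies with multiple threads,
REM /R and /W keep retries short so a locked file doesn't stall the whole run
robocopy D:\storagenode E:\storagenode /MIR /MT:16 /R:1 /W:1
```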
Why not just use rsync?
I don't know how much data you have, but even if it takes a week, your node will be running the whole time. Only the last sync needs to be done on the stopped node. If there wasn't much traffic during the second-to-last sync, there will be almost nothing left to sync on the stopped node, so your downtime will be a couple of minutes (the time of the final rsync check plus the start command), which is more than awesome. Copying data with rsync is only one command, and even if it breaks for some reason, you run the same command again and it picks up where it left off.
You can also rsync to a local disk (the final destination for your future node), physically transport that disk to the new location, and then run rsync over the internet afterwards for verification.
If you have a decent internet connection you can copy the files straight to the new drive via rsync (HOW AWESOME IS THAT?!) and that drive becomes the node. For slower connections, rsync gives you an option to “zip” the data on the fly.
Done both of the above, works like a charm.
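A sketch of that "copy locally first, then verify over the network" variant, again with placeholder paths and hostname (the -z flag is the on-the-fly compression mentioned above):

```
# 1) Copy to the new drive while it's still attached locally
rsync -aHP /mnt/storagenode/ /mnt/newdrive/storagenode/

# 2) After physically moving the drive, verify/update over the internet.
#    -z compresses the stream in transit; add -n first for a dry run
#    if you want to see what would still be transferred.
rsync -azHP --delete /mnt/storagenode/ root@storj-new:/mnt/storagenode/
```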
Zipping means reading the whole thing and writing the whole thing, and then unzipping does the same. It puts unnecessary strain on a drive.
Zipping and unzipping also exposes the files to bad sectors or other memory-related issues (you have ECC memory, don't you?), which can end up losing files and getting you disqualified when you start your node after all that hassle.
Don't bother with zipping; there are so many files that it's not worth your time or the extra electricity bill.
+1 point on creativity though
Additionally, there will be no benefit from compression when zipping, as you're moving encrypted blobs which can't be compressed; they're effectively random noise. I don't think any of those options would give you any benefit. Better to just rsync. And yes, if possible, do the first run by physically bringing an HDD to the other location.
@BrightSilence oh, yeah. Thanks for mentioning that. I was thinking the same: it won't find any pattern to compress in those files.
Just finished moving my 18TB node with robocopy between two servers. Took about 3 weeks!
Accidentally started the new node for a few minutes halfway through and got DQed on one of the satellites I didn't have much data from. Kinda sucks to get DQed so quickly on a 3-year-old node.
Oh that sucks indeed, I feel you
At least if you didn't have much data from that sat', I guess you're not missing out on much, but still…
I agree DQfication is way too fast on old nodes. It feels like any node is still at high risk even after fostering it for months or years…