I know there is no really good answer to my problem, but I would like to know what you would do in my situation.
Let me explain.
I have a 3.5 TB node hosted in another house.
Nobody has lived there for a month (the house is going to be sold).
The Internet connection will be cut off on November 9th.
My current house has a slow Internet connection (ADSL, not fiber), but it is enough to let my other nodes run properly.
We are currently under lockdown in my country.
My mission: Keep the node running after the internet connection is cut off.
I assessed several options: 1. Go physically to my old house, take the node back, and run it in my current house. Downtime would be ~1h30. That would work but, as I said, I am under lockdown, so this option seems to be illegal…
2. Migrate my node remotely to my current house using the official migrating procedure.
This would be a very good option to consider if my current Internet connection weren’t so slow… Based on my estimations, it would take about 174 days…
My rough estimation:
Volume to be migrated = 3.5 TB
Transfer speed = 250 KB/s
So transfer duration ≈ 174 days
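In other words, assuming binary units (3.5 TiB at 250 KiB/s), the arithmetic works out like this:

```shell
# Rough transfer-time estimate, assuming binary units (TiB / KiB)
awk 'BEGIN {
  bytes = 3.5 * 1024^4     # 3.5 TiB in bytes
  rate  = 250 * 1024       # 250 KiB/s in bytes per second
  printf "%.1f days\n", bytes / rate / 86400
}'
# prints: 174.0 days
```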
3. Move my data temporarily to a cloud provider, just to buy more time to migrate the data to my current place. But this is too expensive and it’s not worth it, even considering the free tiers of the big cloud providers.
I was even thinking about using several VMs on multiple cloud providers and setting up an architecture based on Ceph or GlusterFS (to reduce storage costs on each provider and take advantage of the free tiers…), but Internet latency and other technical obstacles would make it unworkable…
I know this is kind of “Mission: Impossible”
But I want to have your opinion, maybe I’m missing something, a very clever and simple idea…
Basically, you can’t just, for example, let someone take the data and identity from your node and run it as their own, or give (or sell) the server with the data to someone else who would continue to run the node as theirs. You can sell the hardware, of course, but not the data.
Obviously, you can have someone else pick up the server and deliver it to you.
This has come up in a topic previously.
If everyone is under lockdown, can’t you extend the Internet connection for a month, or until the lockdown is over, on the grounds that nobody can do anything with the house during lockdown anyway?
It could have been a good idea; I hadn’t thought about it.
Unfortunately, I don’t think it will work, since:
3.5 TB is huge, and I’m not sure I can reduce it drastically enough for the transfer to finish before the end of the week.
The node is on a Raspberry Pi, and I think compressing all the data would be very time-consuming (it would drastically impact my uptime score, and I’m not even sure it would be fast enough to transfer the data by the end of the week).
The only solution is to go there physically. Everything else will take way too long.
I guess you might still be allowed to leave the house for very important reasons? Maybe you can find a reason and make a slight detour to your old house? (unless it’s very far away…)
Can’t you think of an excuse to go to your old house? Like having to close the water valve or something similar. The point of a lockdown is to stop unnecessary interactions between people, not to put you under house arrest.
If the hardware in your old house has the right wireless card, you could try to use it to intercept a neighbor’s WiFi auth sequence, brute-force the PSK, and then hope you can use that connection long enough. Another, less fun option would be to just ask the neighbor if you can leech off his WiFi for a while.
Migrating to a cheap host is a fair idea if your upload bandwidth allows you to transfer the node in time. Look at the Hetzner server auction; their cheapest server with enough space is around 30 EUR per month. Check whether your node brings in enough profit and has a large enough held amount to justify this.
Actually, I would also like to go there to turn on the heating and avoid humidity problems. The house is not that far away (30 min). I just have to find an appropriate and legitimate exemption (not sure that will be OK… but I can find a better reason ^^).
I finally decided to simply move my node physically (using a more or less true exemption). I was fairly sure there was no other solution, but thanks for helping me explore the other options.
Just a question:
My node was hosted on a Raspberry Pi and it is now on a Debian VM (x86).
For the RPi, I was using the docker run options:
--log-opt max-size=50m \
--log-opt max-file=10 \
Since I’m no longer on a Raspberry Pi, I think these options are no longer required.
Could you confirm that I can simply and safely remove them from the docker run command? Do I need to “convert” the existing log files to fit the new configuration?
There’s nothing you need to do. If Docker is still handling the log files, it will simply continue on without splitting them or limiting their size. If you’ve redirected logs to a file (recommended), these options are redundant.
With regards to the log file, as @baker said, redirecting it to an external file is recommended, as it won’t get cleared every time you (or watchtower) upgrade the node.
I’m personally doing this by using the option --log.output=/var/log/node.log, and then configuring Docker so this path (which is inside the docker container) gets mounted on my host. For instance: --mount type=bind,source="/home/pi/storj_logs",destination=/var/log
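Putting those two pieces together, the relevant part of the docker run command looks roughly like this (a sketch only; the other required node flags are omitted, and the paths are just from my setup):

```shell
docker run -d --name storagenode \
  --mount type=bind,source="/home/pi/storj_logs",destination=/var/log \
  storjlabs/storagenode:latest \
  --log.output=/var/log/node.log
```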
And I set up logrotate so the log is split per month, gzipped, and deleted when it’s older than 4 months.
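For reference, a minimal logrotate config along those lines might look like this (assuming the log sits at /home/pi/storj_logs/node.log on the host; adjust the path and retention as needed):

```
# Rotate the node log monthly, keep ~4 months, gzip old logs.
/home/pi/storj_logs/node.log {
    monthly
    rotate 4
    compress
    missingok
    # copytruncate: truncate the live file in place so the node
    # can keep writing without being restarted
    copytruncate
}
```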
docker exec -it storagenode ./storagenode run --help
Any parameter passed after the image name will be passed to the container (if it supports that).
It is similar to the example above, where I used an additional command ./storagenode run --help in the docker exec.