As you're running docker on Ubuntu, you should execute the "exit-satellite" command with the same parameters as your current docker run command. Then enter the satellite domain names one at a time, not all of them at once.
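For example, something like this (a sketch only — it assumes your container is named storagenode and uses the standard /app/config and /app/identity mounts; adjust it to match your own docker run parameters):

```
docker exec -it storagenode /app/storagenode exit-satellite \
  --config-dir /app/config --identity-dir /app/identity
```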
You can exit the satellites one by one or all at the same time. The satellites you requested graceful exit for will need a few hours to create the list of pieces that need to be transferred. Don't expect high traffic immediately.
But the process itself is painfully slow: eu1.storj.io is still at 0.00% after 24h, and saltlake.tardigrade.io at only ~2-3% (500-600 GiB of used storage per node).
My server: 2x E5-2689, 2x Intel i350 (8 rx/8 tx queues), 1 Gbit/s guaranteed channel (colocation).
In general, the average outbound speed should be at least 50% of my channel's bandwidth (500 Mbit/s). If the speed is lower, then either the satellites are the bottleneck, or the nodes cannot work at normal speed.
So, I gave your suggestion a try and I got the following error message.
"You are not allowed to initiate graceful exit on satellite for next amount of months:
Error: You are not allowed to graceful exit on some of provided satellites"
I find this message quite a surprise as all my nodes were created around February 2020.
This is likely the US2 satellite; it's only 4 months old, so no one can exit from it gracefully at the moment.
You can take a look at your logs; they should show the exact date when your node will be eligible to exit from that satellite.
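If you want to find it quickly (assuming your container is named storagenode; the exact wording of the message may differ):

```
docker logs storagenode 2>&1 | grep -i graceful
```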
You can also add the --log.output=stderr option to the graceful exit command and see these messages right on your screen.
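For example, with the same assumptions as the command sketched above:

```
docker exec -it storagenode /app/storagenode exit-satellite \
  --config-dir /app/config --identity-dir /app/identity \
  --log.output=stderr
```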
I have no information about any such limits.
I pinged the team though.
The transfer happens between your node and the destination nodes; the satellite is not involved in data transfers. So if there are limits, they are likely on the destination node's side, or your router can't handle the load.
I have had a ticket where an operator was unable to exit successfully with a Mikrotik router. I did not dig into the details of the Mikrotik configuration, but when the operator connected their PC directly, the GE finished in a few hours.
So, I would suggest you try the same.
The errors were exactly like yours, also "no route to host".
If the limits depend only on the receiving nodes, then they are made of shit and sticks. My server can send data much faster and in 100+ parallel transfers. Also, I don't use a Mikrotik.
You are welcome!
I would also suggest you use zkSync (L2) to receive the payout if you see that the owed sum would be less than 4x the transaction fee for an ERC20 transfer, because payouts on L1 (Ethereum) are subject to the Minimum Payout Threshold. There is an emergency payout though, when you have exited from all satellites or been disqualified on them and don't have other nodes with the same wallet.
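For example (the fee figure here is purely hypothetical, just for illustration): if an ERC20 transfer costs about $10 in gas, the L1 threshold would be around 4 × $10 = $40, so any smaller owed sum would be held until it accumulates, while on zkSync it would be paid out regardless of the amount.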
In general, your node will be disqualified on US2 after two months of being offline, but this means that your payout on L1 could be held until then.
I have been farming Storj since the v2 days. I have more than 200 TB of available space, speedy fibre up and down links, and almost perfect uptime.
Over a year of farming v3 has yielded 7 TB of data.
For me, farming Storj simply does not make economic sense since v3.
I conclude that Storj implements a centralized system with many arbitrary restrictions and limitations that uses blockchain technology for payment.
All the best. If I get my exit-fees great if not, so be it.
I suppose it is all in one location. We treat all nodes within the same /24 subnet of public IPs as one node, because we want to be as decentralized as possible.
So this amount of space is unlikely to be filled for a long time.
The Community has shown that the maximum used space for a single location does not get greater than about 20 TB:
Exiting is progressing very slowly in my case as well (one satellite is at about 5%, and I started exiting days ago).
Is there anything I can do to speed this up?
Yes, there are several options, but changing them is a very dangerous operation:
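For illustration only, a sketch of the kind of graceful-exit settings in config.yaml — the option names and values below are my assumption about the current storagenode configuration, so verify them against the commented defaults in your own config.yaml before changing anything:

```yaml
# graceful exit tuning (illustrative values, not recommendations)
graceful-exit.num-workers: 4               # number of workers processing the exit
graceful-exit.num-concurrent-transfers: 5  # number of pieces transferred at the same time
```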
After changing the config you should save it and restart the storagenode.
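For example (assuming a container named storagenode; the -t 300 gives the node time to shut down cleanly):

```
docker restart -t 300 storagenode
```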
If you increased the concurrency, please monitor your logs for failing transfers; if the number of failed transfers increases, you need to reduce the concurrency.
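A rough way to keep an eye on that (again assuming a container named storagenode; the exact log wording may differ):

```
docker logs storagenode 2>&1 | grep -i graceful | grep -ci failed
```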