Moving to another country: how to avoid being disqualified?

Hi,
I currently have 1 Storj node running on a virtual machine hosted on my home server.

We plan to move to another country in about 6 months (with the Atlantic Ocean between the two ^^).
What would your recommendation be to avoid being disqualified?

Thanks a lot!

Disqualification for downtime is currently disabled. So, just keep your node online as much as possible.
If you know that the node will be offline for half a year, I would recommend using graceful exit once it's enabled. I think disqualification for downtime could be re-enabled during such a long period.

Thanks Alexey!
Actually, I don’t think my node will be offline that long. In the worst-case scenario, it should be offline for a few weeks (the time to take a flight, move into the new home, and get a new Internet subscription).
I have other questions:

  • From your reply, I understand that graceful exit isn’t possible yet. So what if my node is off for a few weeks? What will the impact be on my node and its reputation?
  • I am thinking about moving the node to another place (a friend’s or a sibling’s) and then moving the data with rsync (as described in this post). But I am a little bit lost on what we can do with 1 authorization token:
    - Can I use the same authorization token to host several nodes behind the same IP?
    - Can I use the same authorization token to host several nodes with different IPs?
    - If one of these nodes (behind the same IP or not) is offline or has a bad reputation, will it affect the other nodes’ reputation?

Thanks again for your help!

It will probably lose almost all of its uptime reputation. But it should recover quickly once you bring it back online. The only risk is that disqualification for downtime could be enabled during this time.

No. The authorization token is a one-time token; it can’t be used twice. And it’s needed only to sign the generated identity. The identity + data is your node.
You need a different identity for each node and a separate authorization token for each of them. You can easily obtain a new authorization token with the same email address, but only once a day.
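
For reference, here is a minimal sketch of what creating and signing an identity for a second node might look like with the identity CLI (the identity name, email address, and token string below are placeholders):

```bash
# Generate a brand-new identity for the second node
# (this is CPU-bound and can take a while):
identity create storagenode2

# Sign it with a fresh one-time authorization token
# requested with the same email address:
identity authorize storagenode2 your@email.com:exampletokenstring
```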

You can run several nodes behind the same public IP, but together they will not receive more traffic than a single node would. Moreover, the vetting process will take proportionally longer: with N nodes behind the same public IP, each one collects audits roughly N times slower, because they share the incoming traffic.
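
To make that scaling concrete, here is a back-of-the-envelope calculation (every number below is an assumption, purely for illustration):

```bash
# Assumed: vetting takes ~100 successful audits per satellite, and a
# single unvetted node with this IP to itself would get ~5 audits/day.
nodes=3            # nodes sharing one public IP
audits_needed=100
audits_per_day=5   # audit rate for ONE node on this IP
# With the ingress split N ways, each node is audited ~N times slower:
echo "approx days to vet each node: $(( audits_needed * nodes / audits_per_day ))"
```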

No. Each node has its own audit score and reputation.

Please do not try to run a clone of the node; its identity will be disqualified pretty fast. Each clone will be missing the data stored by the other clone and vice versa, so both will fail audits.

Thank you for your time Alexey, it’s very appreciated 🙂
Your explanations raise some other questions… I hope this won’t bother you too much 🙂

Just to be sure I understand disqualification correctly: once a node is disqualified, it can be considered “outside the network”, right? So there is no way to use it again, and I have to create a new identity (even if it’s with the same public IP)?

Based on your explanation about running several nodes behind the same public IP, I understand that it’s not a recommended pattern, right?
From my point of view, this pattern is only interesting if you have 2 different computers, each with its own storage capacity, and you can’t “aggregate” the whole storage capacity into 1 computer. Am I right?

Actually, I was planning to follow the SNO migration procedure (as described in the official FAQ and in this post). I don’t plan to run both nodes at the same time.

Thanks again for your time!

I am not sure we are going to keep it disabled for 6 months. It might be better to assume that we will have enabled it by then and look for a different solution.

Graceful exit should be available soon. That option will be on the table, and I think it is the best one for you. You would get the full held-back payout, and even if you have problems with your new internet connection there would be no risk for you. The downside is that you would have to join the network with a new node: full vetting process, and back at the first level of the held-back percentage.

Thanks. Since I will do a graceful exit, it’s not necessary to move the data from the previous storage node to the new one, right? I won’t need to re-use the previous identity either, right?
Your recommendation is to create a new node from scratch, is that it?

Graceful exit will transfer all your data to other nodes and then exit the node from the network, so you don’t have to copy anything. You’d basically be stopping the old node and then starting a new one after the move.
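
As a sketch of what triggering that could look like on a Docker-based node once the feature ships (the subcommand and flags here are assumptions based on the standard Docker setup):

```bash
# Initiate graceful exit from inside the running container;
# you are then prompted to choose which satellite(s) to exit:
docker exec -it storagenode /app/storagenode exit-satellite \
  --config-dir /app/config --identity-dir /app/identity
```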

Another option, based on what you described, is to do 2 moves. Move your node to a friend first, by following the instructions. Then, after you’ve moved, move the node from your friend’s location to yours. The upside is that you don’t have to start over with no reputation, 0 data, vetting, and new escrow held-back amounts. The downside is that you double the risk of doing something wrong during the moves. You’ll still have some downtime, and you’ll have to take into account how long it takes to move the data from one location to the other, as that downtime needs to stay below whatever threshold is likely in place by then.

Because of these complications, I agree with @littleskunk that the safest bet is probably to graceful exit and start over.

If you want to do that, you should take a backup of your node with you, including your identity. That way the rsync won’t take that long. Then follow the instructions. Make sure you shut down the old node for good before you start the new one, and never ever start it again.
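
Here is a minimal sketch of that copy, assuming a Docker-based node; all paths and hostnames are placeholders to adapt to your own setup:

```bash
# 1. Pre-copy data and identity while the old node is still running
#    (repeat as often as you like; later runs only transfer changes):
rsync -aP /mnt/storj/storagenode/ user@newhost:/mnt/storj/storagenode/
rsync -aP ~/.local/share/storj/identity/storagenode/ \
      user@newhost:~/identity/storagenode/

# 2. Stop the old node for good (and never start it again):
docker stop -t 300 storagenode

# 3. Final pass with --delete so the destination matches exactly:
rsync -aP --delete /mnt/storj/storagenode/ user@newhost:/mnt/storj/storagenode/

# 4. Start the node at the new location pointing at the copied
#    identity and data directories.
```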

Storj needs to be more lenient on downtime for individual nodes. Many SNOs are individuals with a few drives, which is exactly what provides decentralization. Nodes will go offline, and operators may not be able to recover them for a few days to a few weeks because of vacations, access issues, and other events.

However, as soon as the problem is resolved, the node should return online and be stable. This happened to me, and as soon as I resolved the issue the node was back online and functioning at 100%. If anything, Storj satellites should track downtime and DQ a node based on how long it has been since the node last connected, and only when the number of offline nodes comes close to the minimum threshold for error correction and recovery.

I am happy DQ is disabled, because I think I would have been eliminated if it weren’t. My data is still here and is passing the current audit checks; I have several hundred gigabytes of valued data. Over time, as the system improves and statistics are gathered, maybe the downtime window can be narrowed as nodes become more stable with new software releases.

This topic is a good example of an SNO who will move, shut down for a few days to a few weeks, and then start up again. Isn’t this downtime less costly than rebuilding the lost data from that node?

How does an SNO benefit from a graceful exit vs. DQ? I thought I read somewhere that even with a graceful exit the SNO wouldn’t gain much. If need be, Storj should incentivize graceful exits, which I think would eliminate this moving issue: this person would gracefully exit, then start a new node after the move is complete. What’s the cost per GB to rebuild the data from a lost node vs. the cost of a graceful shutdown?

This sounds like a pretty good idea. One problem may be that your node holds so many pieces that it likely won’t take long before some piece, for which your node is the longest-offline holder, needs repair. But I would say it’s worth looking into.

You could also choose to fund repairs from the offline node’s escrow. That way you don’t have to DQ a node right away, but only after a significant chunk of its escrow has been consumed by repairs. Once the node is back online, you could drop it back to a previous escrow level based on how much money was used for repairs in the meantime. So if you’re in the 100% payout / 0% escrow period, you’d drop back to 75% payout / 25% escrow to recover the escrow amount that was spent.
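
A toy illustration of that bookkeeping, with every number invented purely to show the idea:

```bash
# Drain repair costs from the node's held amount (escrow) and only
# DQ once the held amount is exhausted. All values are hypothetical.
held_cents=3000            # $30.00 currently held for this node
repair_cents_per_piece=2   # repair cost attributed per lost piece
pieces_repaired=800        # pieces repaired while the node was offline

cost=$(( pieces_repaired * repair_cents_per_piece ))
if (( cost >= held_cents )); then
  echo "held amount exhausted -> disqualify the node"
else
  echo "node survives; remaining held amount: $(( held_cents - cost )) cents"
  echo "on return, drop it an escrow level to recover the ${cost} cents spent"
fi
```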

Both options require quite a bit of engineering to make them happen, but I think it’s doable. The question is whether it’s worth putting the effort in, since that engineering time could be spent elsewhere.

You get back all the money in escrow. This can be a significant amount. So it’s definitely worth going with graceful exit over just leaving.

For Storj, there is almost no cost for a graceful exit as your node would transfer data directly to other nodes and no repair is needed. Since that egress traffic isn’t paid, the only cost Storj would have is the traffic with the satellite to coordinate these transfers. But that is probably negligible.

Currently, DQ for downtime is disabled; more lenient than that is not possible. If anything, it’s the other way around: at some point we have to DQ the bad nodes. We can’t tolerate a bunch of nodes going offline for 8 hours every night. The plan is to implement a job for tracking downtime and later decide which nodes are bad for our network. It is too early to talk about the rules because we don’t have the data to decide which rules would work.

We are rebuilding the data anyway, because once we hit the repair threshold we can’t risk waiting for your node to come back online. So DQ or no DQ is a question of educating storage node operators long term, but it doesn’t change our costs or durability (as long as we have only a few bad nodes).
