Reduce storage / partial graceful "exit"

My understanding of the current in-progress graceful exit feature is that it only covers a total removal of a node.

Since the idea of Storj is to make use of unused storage, it would be nice to add a feature that allows the disk space used by storagenode to be reduced without leaving the network.

Example: Someone has allocated 2TB to the network and 1.5TB is used. This person now needs 1TB of that 2TB disk space back and would like to allocate only 1TB to the network, reducing the node’s storage usage by 0.5TB. This cannot currently be done; they need to execute a graceful exit, obtain a new identity, and set up a new 1TB node. Obviously this means going through the vetting process again and losing all existing monthly income.

When storagenode starts, it would be great if it would look at the amount of disk space the operator has requested to allocate to it (the STORAGE environment variable) and, if the consumed disk space is greater than this, have some mechanism to transfer excess pieces to other nodes without being penalized.

At the end of the process, the node’s storage would be full, but it would not need to start over from scratch or lose any funds in escrow.
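The proposed startup check is simple to sketch. Below is a rough illustration (Python purely for illustration, since storagenode itself is written in Go, and every name here is hypothetical, not actual storagenode code):

```python
# Hypothetical sketch of the proposed startup check: compare the operator's
# new allocation (the STORAGE setting) against the space already consumed,
# and compute how much data would need to migrate to other nodes.

def excess_to_transfer(allocated_bytes: int, used_bytes: int) -> int:
    """Bytes of stored pieces that exceed the operator's new allocation."""
    return max(0, used_bytes - allocated_bytes)

TB = 10**12
# The example from above: a 2 TB node with 1.5 TB used, shrunk to 1 TB.
print(excess_to_transfer(1 * TB, 1500 * 10**9) / TB)  # prints 0.5
```

In the proposal, those excess bytes would be handed off to other nodes via the satellite rather than deleted, so no audits are failed.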

Partial exit is not yet implemented but it will be :soon:

It is a one-way ticket. If you perform a graceful exit, you unsubscribe from that satellite and will never receive data from it again. There is no way back.

In the design doc, partial graceful exit, as well as rejoining a previously exited satellite without having to start a new node, is mentioned in the Business requirements:

When a Storage Node operator wants to reduce the amount of storage space they have allocated to the network, I want them to have the ability to do a “partial graceful exit” which would transfer some of the data they currently have onto other nodes, so they have a way of reducing their storage allocation without just deleting data and failing audits.

  • In this situation the satellite will determine which pieces are removed from the node NOT the storage node.

When a Storage Node wants to rejoin the network I want them to have that ability so that they do not need to generate a new node ID via POW, go through the node vetting process, and so they can utilize their reputation.

Oh, neat. Glad to hear that this was thought of already.

Hi, is there any update on the partial graceful exit or a planned implementation date?

I would like to reduce the size of one of my nodes and couldn’t find any working solution :slight_smile:

No date, no design document, and no ticket in the backlog. I would expect nothing within the next few months.

My idea for a working solution at this time: reduce the allocated space (`-e STORAGE="xxx"`) and perform a graceful exit on one or two satellites.

The used space will be shown during the GE procedure.

Edit: Or wait a few months. :wink:

That’s a good idea. How would you mitigate the risk of SNOs abusing this feature to improve their egress traffic?

The SNO won’t have control over which blobs are removed, so they could just as well lose a high-traffic blob. So there is no benefit in constantly increasing and decreasing your allocated storage.

There might be no direct control over individual files. I was thinking more of an SNO who is unhappy with the amount of egress traffic they get. They could be tempted to try to receive more popular data by reducing their storage and then increasing it again. If that happens frequently, such a node must be considered unreliable.

For some days after a partial graceful exit, the satellites could consider the node’s capacity to be the lower of the limit requested during the partial exit and the figure reported by the node itself, effectively capping the amount of data the system will store on the node for a period of time. 30 days would probably be sufficient to discourage this kind of thing, since you would have to wait an entire month before your node started filling back up.

On the other hand, people who need to reduce their storage use because they are migrating to a smaller drive or they need the space for something else would not be affected by this since (presumably) it is a semi-permanent change for them.
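The time lock suggested above can be sketched in a few lines (illustrative only; the function and parameter names are hypothetical, and this is not how any satellite actually computes capacity):

```python
# Hypothetical satellite-side capacity calculation: for LOCK_WINDOW after a
# partial graceful exit (PGE), the node cannot raise its effective capacity
# past the limit it requested during the PGE, whatever it reports.
from datetime import datetime, timedelta

LOCK_WINDOW = timedelta(days=30)  # assumed lock length from the suggestion above

def effective_capacity(reported_bytes: int, pge_limit_bytes: int,
                       pge_time: datetime, now: datetime) -> int:
    if now - pge_time < LOCK_WINDOW:
        # Locked: growing the allocation again has no effect yet.
        return min(reported_bytes, pge_limit_bytes)
    return reported_bytes  # lock expired: the node's own figure applies
```

Under this sketch, shrinking and immediately re-growing the allocation gains nothing, while a genuinely shrinking node is unaffected.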

Is there any news on this topic?

For a couple of nodes I’ve set too big an allocation and now have only 2-3% free space. Is there any way to reduce the amount stored and get more free space back?

Maybe I should just change that in the docker startup command and that’s it?

Yes, if you lower the declared capacity of your node in the docker run command, your node will stop accepting new data until the amount of used space falls below that threshold. You’ll have to wait for customers storing data on your node to delete some data; you can’t currently force data to be removed from your node.
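The behavior described amounts to a simple threshold check. A minimal sketch (hypothetical names, not actual storagenode code, which is written in Go):

```python
# Hypothetical sketch: with a lowered allocation, the node refuses new
# uploads until customer deletions bring used space back under the limit.

def accepts_new_pieces(used_bytes: int, allocated_bytes: int) -> bool:
    """True while the node still has room under its declared allocation."""
    return used_bytes < allocated_bytes

TB = 10**12
print(accepts_new_pieces(1500 * 10**9, 1 * TB))  # prints False (over-allocated)
print(accepts_new_pieces(900 * 10**9, 1 * TB))   # prints True
```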

If this gets implemented, could we keep growing and shrinking our node until we get files that people download more frequently?

How will you determine that?

Keep doing it until bandwidth goes up?
You could probably see the traffic related to each piece you have on your disk if you built a tool for it, right?

Then any SNO would do exactly that. It’s not good for the customers. The network is designed to handle all types of usage: backups, dynamic storage, or even a CDN.
So I do not think it will be implemented like this.

It could probably be implemented in a way that is useful for legitimate cases but not effective for illegitimate ones. For example, satellites could prevent a node from increasing its capacity above the threshold set during the partial graceful exit (PGE) for 90 days afterwards.

SNOs who need to shrink their node for their own reasons can do so. It’s unlikely that a legitimate use case would involve expanding the node within 3 months of shrinking it.

Trying to game the system with this feature would mean being locked out of growing the node for 90 days. That’s 90 days of lost storage revenue, which is not attractive when the whole point was to increase revenue.
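A sketch of that 90-day rule as a satellite-side check (all names hypothetical, Python used only for illustration):

```python
# Hypothetical: a capacity increase past the PGE threshold is only honored
# once the 90-day lock has elapsed; staying at or below it is always fine.

def can_increase_capacity(requested_bytes: int, pge_limit_bytes: int,
                          days_since_pge: int, lock_days: int = 90) -> bool:
    if requested_bytes <= pge_limit_bytes:
        return True  # not growing past the PGE threshold
    return days_since_pge >= lock_days
```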

I’d flip that around and only let nodes shrink once every 6 months, since raising the node size doesn’t require anything more than a settings change, while shrinking it requires triggering a partial GE.

So for a (partial) GE, the current requirement that the node be 6 months old would simply be extended: no partial GE in the past 6 months either.