My understanding of the in-progress graceful exit feature is that it only covers the complete removal of a node from the network.
Since the idea of Storj is to make use of otherwise unused storage, it would be nice to add a feature that allows the disk space used by the storagenode to be reduced without leaving the network.
Example: Someone has allocated 2TB to the network and 1.5TB is used. This person now needs 1TB of that 2TB disk space back and would like to allocate only 1TB to the network, reducing the node’s storage usage by 0.5TB. This cannot currently be done; they need to execute a graceful exit, obtain a new identity, and set up a new 1TB node. Obviously this means going through the vetting process again and losing all existing monthly income.
When the storagenode starts, it would be great if it looked at the amount of disk space the operator has allocated to it (the STORAGE environment variable) and, if the consumed disk space exceeds that, had some mechanism to transfer the excess pieces to other nodes without being penalized.
At the end of the process, the node’s storage would be full, but it would not need to start over from scratch or lose any funds in escrow.
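A minimal sketch of what that startup check could look like, with entirely hypothetical names (`bytesUsed`, `requestPartialExit`) standing in for the real piece-store accounting and satellite RPC, none of which exists today:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// bytesUsed is a hypothetical stand-in for the node's piece-store accounting.
func bytesUsed() int64 { return 1_500_000_000_000 }

// requestPartialExit is a hypothetical stand-in for asking the satellite to
// re-home `excess` bytes of pieces onto other nodes, the same way a full
// graceful exit transfers all pieces.
func requestPartialExit(excess int64) error {
	fmt.Printf("requesting partial exit for %d bytes\n", excess)
	return nil
}

func main() {
	// STORAGE is the operator's allocation; assumed here to be a plain byte
	// count for simplicity (the real variable accepts values like "2TB").
	allocated, err := strconv.ParseInt(os.Getenv("STORAGE"), 10, 64)
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid STORAGE value:", err)
		os.Exit(1)
	}

	if used := bytesUsed(); used > allocated {
		// Consumed space exceeds the new allocation: transfer the excess
		// instead of deleting pieces and failing audits.
		if err := requestPartialExit(used - allocated); err != nil {
			fmt.Fprintln(os.Stderr, "partial exit failed:", err)
		}
	}
}
```

The key point is that the node only reports the excess; the satellite drives the actual transfers, mirroring how a full graceful exit works.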
The design doc's business requirements already mention both a partial graceful exit and rejoining a previously exited satellite without having to start a new node:
When a Storage Node operator wants to reduce the amount of storage space they have allocated to the network I want them to have the ability to do a “partial graceful exit” which would transfer some of the data they currently have onto other nodes so they have a way of reducing their storage allocation without just deleting data and failing audits.
In this situation the satellite will determine which pieces are removed from the node NOT the storage node.
When a Storage Node wants to rejoin the network I want them to have that ability so that they do not need to generate a new node ID via POW, go through the node vetting process, and so they can utilize their reputation.
The SNO won't have control over which blobs are removed, so they could just as well lose a high-traffic blob. So there is no benefit in constantly increasing and decreasing your allocated storage.
There might be no direct control over individual files. I was thinking more of an SNO who is unhappy with the amount of egress traffic he gets. He could be tempted to try to receive more popular data by reducing his storage and then increasing it again. If that happens frequently, such a node must be considered unreliable.
For some days after a partial graceful exit, the satellites could consider the node's capacity to be the lesser of the limit requested during the partial exit and the capacity reported by the node itself, effectively capping the maximum amount of data the system will store on the node for a period of time. 30 days would probably be sufficient to discourage this kind of thing, since you would have to wait an entire month before your node started filling back up.
On the other hand, people who need to reduce their storage use because they are migrating to a smaller drive or they need the space for something else would not be affected by this since (presumably) it is a semi-permanent change for them.
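A rough sketch of how a satellite could apply such a cap; the types and names here (`pgeLock`, `effectiveCapacity`) are hypothetical, since nothing like this exists yet:

```go
package main

import (
	"fmt"
	"time"
)

// pgeLock records the allocation a node requested during its last
// partial graceful exit and when that exit happened (hypothetical).
type pgeLock struct {
	limit    int64
	exitedAt time.Time
}

const lockWindow = 30 * 24 * time.Hour

// effectiveCapacity caps what the satellite treats as the node's capacity:
// within the lock window, the node cannot advertise more than the limit it
// shrank to during the partial exit.
func effectiveCapacity(reported int64, lock *pgeLock, now time.Time) int64 {
	if lock != nil && now.Sub(lock.exitedAt) < lockWindow {
		if reported > lock.limit {
			return lock.limit // the lesser value wins during the window
		}
	}
	return reported
}

func main() {
	lock := &pgeLock{
		limit:    1_000_000_000_000,
		exitedAt: time.Now().Add(-10 * 24 * time.Hour),
	}
	// Node tries to grow back to 2TB ten days after shrinking to 1TB:
	fmt.Println(effectiveCapacity(2_000_000_000_000, lock, time.Now())) // 1000000000000
}
```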
For a couple of my nodes I've set the allocation too high and now have only 2-3% free space left. Is there any way to reduce the stored amount and get more free space back?
Maybe I should just change that in the docker startup command and that's it?
Yes, if you lower the declared capacity of your node in the docker run command, your node will stop accepting new data until the amount of used space falls below that threshold. You’ll have to wait for customers storing data on your node to delete some data; you can’t currently force data to be removed from your node.
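For example (assuming the standard docker setup), that means changing something like `-e STORAGE="2TB"` to `-e STORAGE="1TB"` in your `docker run` command and restarting the container; the node then stops accepting uploads until usage drops below the new limit.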
Then any SNO will do exactly that. It's not good for the customers. The network is designed to handle all types of usage: backups, dynamic storage, or even CDN-like workloads.
So, I do not think it will be implemented like this.
It could probably be implemented in a way that is useful for legitimate cases but not effective for illegitimate ones. For example, satellites could prevent a node from increasing its capacity above the threshold set during the partial graceful exit (PGE) for 90 days afterwards.
SNOs who need to shrink their node for their own reasons can do so. It’s unlikely that a legitimate use case would involve expanding the node within 3 months of shrinking it.
Trying to game the system with this feature means you’re locked out of growing the node for 90 days. This is 90 days of lost storage revenue and would not be attractive when the goal is increasing revenue.
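In code, that freeze would be a trivially cheap check on the satellite side. A sketch, again with hypothetical names (`allowCapacityUpdate`, and the PGE threshold/time would come from the satellite's node record):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const growthFreeze = 90 * 24 * time.Hour

var errFrozen = errors.New("capacity growth frozen after partial graceful exit")

// allowCapacityUpdate rejects any attempt to raise the advertised capacity
// above the PGE threshold until the freeze period has elapsed.
func allowCapacityUpdate(newCap, pgeThreshold int64, pgeTime, now time.Time) error {
	if newCap > pgeThreshold && now.Sub(pgeTime) < growthFreeze {
		return errFrozen
	}
	return nil
}

func main() {
	pgeTime := time.Now().Add(-30 * 24 * time.Hour)
	// 30 days after shrinking to 1TB, a request to grow to 2TB is rejected:
	err := allowCapacityUpdate(2_000_000_000_000, 1_000_000_000_000, pgeTime, time.Now())
	fmt.Println(err) // capacity growth frozen after partial graceful exit
}
```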
I'd flip that around and only let nodes shrink once every 6 months, since raising the node size doesn't require anything more than a settings change, while shrinking it requires triggering a partial GE.
So for a (partial) GE, the current requirement that the node be 6 months old would simply be extended: no partial GE in the past 6 months either.
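Sketched as code, the eligibility check would just gain one extra condition (hypothetical names again; only the 6-month node-age requirement exists today):

```go
package main

import (
	"fmt"
	"time"
)

const minInterval = 6 * 30 * 24 * time.Hour // roughly 6 months

// canGracefulExit combines the existing age requirement with the proposed
// "no partial GE in the past 6 months" condition. lastPartialGE is the
// zero time if the node has never done a partial exit.
func canGracefulExit(joinedAt, lastPartialGE, now time.Time) bool {
	oldEnough := now.Sub(joinedAt) >= minInterval
	noRecentPGE := lastPartialGE.IsZero() || now.Sub(lastPartialGE) >= minInterval
	return oldEnough && noRecentPGE
}

func main() {
	joined := time.Now().Add(-12 * 30 * 24 * time.Hour) // node is about a year old
	lastPGE := time.Now().Add(-2 * 30 * 24 * time.Hour) // but shrank two months ago
	fmt.Println(canGracefulExit(joined, lastPGE, time.Now())) // false
}
```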