Hi all,
I am going to throw an idea out there: why doesn't the storage node software let us set a minimum percentage of free disk space to preserve, instead of the current fixed amount of data allocated to Storj?
For example, I could set up my node by saying "Storj may use the disk as long as at least 10% of it stays free". I would know that Storj will stop adding new data if free space falls below that level, and even better: Storj could also remove some data to bring free space back up to that level!
It would be a better metric for managing disk usage for everybody! Some people might want to keep 50% of the disk free at all times, others 30%, or whatever.
Of course, you would need to add a new egress category, "Redundancy traffic" (my suggestion), for which the node operator would NOT get paid when data is swapped to another node.
This could be coded without the satellite needing to audit each node's percentage of free space: just run a cron job every minute or five on the storage node, so that if the percentage of free disk space drops below max($free_disk_space_node_operator_setting, 10%), the node shifts data elsewhere. Decentralization is kept: the node sends its own data elsewhere, so no satellite needs to be involved.
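To make the check concrete, here is a minimal sketch of the threshold logic described above. Everything here is illustrative: `should_shift`, the parameter names, and the path are my own inventions, not part of the actual storagenode software.

```python
import shutil

FLOOR_PCT = 10.0  # hard minimum suggested above: nobody may set it lower


def free_space_pct(path: str) -> float:
    """Return the free space on the volume holding `path`, as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100.0


def should_shift(free_pct: float, operator_pct: float,
                 floor_pct: float = FLOOR_PCT) -> bool:
    """True if the node should start shifting data elsewhere.

    The effective threshold is the larger of the operator's setting and
    the 10% floor, so even an operator_pct of 0 cannot fill the disk.
    """
    return free_pct < max(operator_pct, floor_pct)


# A cron-style run would then be something like:
#     if should_shift(free_space_pct("/storj/data"), operator_pct=30.0):
#         trigger_data_shift()   # hypothetical: offload pieces to other nodes
```

With `operator_pct=30.0`, the node would start offloading as soon as free space dips below 30%, and the 10% floor kicks in whenever the operator's setting is lower than that.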
Such a feature would be great, as it removes monitoring of the storage node from the operator completely (SET & FORGET!).
Additionally:
- This setting could not go under 10% (leaving time to move data if the disk starts to fill), so that nobody can set Storj to use 99% or 100% of the disk.
- I do not know how the software copes with full disks, but with such a feature you could fold the 10% overhead rule into it and make the problem disappear: whatever the overhead is, it gets absorbed by the free-space percentage.
- Maybe this also helps the graceful exit routine, since it is quite similar in the sense that it shifts data elsewhere?
- Disks can be expanded (for example on a VPS or in the cloud), so there would be no need to modify the node's parameter after such an expansion.
- The new "Redundancy traffic" category would allow increasing or decreasing the data's durability, opening the door (like Amazon AWS S3) to offering different durability levels on the other side (storage classes for Tardigrade), which would have the effect of increasing the storage price/earnings rewarded to storage node operators.