I’m here to share my feedback after having to abruptly remove my node.
I originally set up my node with 2TB allocated, and that space filled up after a couple of months.
However, it turned out that I needed more space on my machine, so I reduced the node's target size to 500GB.
Your systems had more than three months to bring data usage back under that level, but the node hardly shrank at all (in the end it got down to around 1.7TB).
I could no longer wait for the system to move data off, as I needed the space urgently at that point, urgently enough that the held-back payout amount was not sufficient to keep me in.
I was still under the 1-year mark (after which I would have been able to request a graceful exit), so my only option for reducing used space was to delete the node altogether.
This means your systems will need to do data repair that could have been avoided had I been able to reduce the node size in a more timely manner, or to request a graceful exit before the 1-year mark.
Potential solutions:
- Improve the speed at which overuse is reduced.
- Make it possible and advantageous to gracefully exit before the 1-year mark, for example by paying out a prorated share of the held-back amount.
It’s been fun to participate! Wishing y’all the best.
When you reduce the size… it really only tells the node not to accept any new data. It doesn't move any off: any reduction comes from users naturally deleting things. Similarly, graceful exit is really just a signal for the Satellites to repair anything that still needs your data; if they don't need anything repaired, that final month can pass without any change.
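For reference, lowering the allocation is only a config change, and nothing actively sheds data afterwards. A minimal sketch, assuming a typical Docker setup (exact variable and option names may differ between versions):

```sh
# Docker node: the allocation is the STORAGE variable passed to the container,
# so shrinking it means recreating the container with the new value
# (identity, mount, and ADDRESS options omitted here for brevity).
docker stop -t 300 storagenode
docker rm storagenode
docker run -d --name storagenode -e STORAGE="500GB" storjlabs/storagenode:latest

# Binary install: edit config.yaml instead, e.g.
#   storage.allocated-disk-space: 500 GB
# and then restart the storagenode service.
```

Either way the node just stops accepting new uploads; the existing pieces stay until customers delete them or garbage collection removes them.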
Thanks for being a part of the project as long as you did! Check on us every 6 months or so: maybe if there are exciting changes you’ll come back!
This is part of the design. Nodes come and go; don't worry about it. I personally never bothered with graceful exit (waiting a month for $10 makes no sense) and would just rm -rf nodes when I needed the space. If Storj actually needed us to exit gracefully, there would be much stronger incentives to stay.
That’s why it’s best to run multiple nodes, so you can nuke them in increments.
What would be the purpose of this self-inflicted exercise in frustration? To get rid of node data faster? rm -rf is fastest.
This also wouldn't work: garbage collection/pruning is neither a deterministic nor a realtime process. If you need the space, delete the node. If you don't, don't.
I wouldn't even do nothing for $10 for a month, and you're advocating working for that $10/month by creating elaborate schemes… come on. Your time is better spent literally anywhere else.
I think… if a node was offline long enough for some data to be flagged for repair to another node… and then online long enough for a bloom filter to tag that data for deletion… you could shrink a node faster than just waiting for natural deletions. Keep repeating that cycle over and over.
But it would still be very slow, and you’d have to carefully manage your online score. It kinda would be a job.
I wish people would give nodes up for adoption: I’d gladly provide a good home to another 2TB! (or larger nodes can just be sold)
Yes, I agree. It could be a way out when you need to free up space, e.g. the problem posted several times where the node is completely filled and can't compact, or similar.
Btw, I can sell you a node, any size you like (up to 8TB).