Hi everyone,
I decided to reduce the number of nodes in my network. I’m not in a hurry, so I decided not to use the graceful exit method. Instead I changed the disk size to 0 TB to prevent ingress. This is working fine, but I don’t see a reduction in Average Disk Space Used. My interpretation was that, as long as the files aren’t deleted or expired, no change will be visible. But how to explain the (slight) increases in the graph?
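For context, on a Docker node the “disk size” is just the STORAGE parameter of the run command, so stopping ingress this way looks roughly like the sketch below (container name and the omitted flags are placeholders for your own setup):

```shell
# Sketch only: re-create the container with 0 TB allocated so the node
# stops accepting new ingress. Names and omitted flags are placeholders.
docker stop -t 300 storagenode
docker rm storagenode
docker run -d --restart unless-stopped --name storagenode \
    -e STORAGE="0TB" \
    ... # your usual identity/wallet/address flags and --mount options
    storjlabs/storagenode:latest
```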
As I understand it, some delayed transfers can still trickle in, but it should stop adding data once everything catches up. That’s minus any logging you have on that drive, if any. I’m not sure about trash restores and whether they respect the drive space limit or not.
There have been some undeletes of trash this month, so maybe that’s it.
Also note that your storage will never go to zero; customers are going to leave some of that data up indefinitely. So if you really want to reformat that disk, you will eventually need to graceful exit or just quit.
On the graph, I think the slightly higher value for the most recent day is normal. It often does the same thing on my nodes.
Agree with Mark. The graph is like waggling a rubber cane.
If you don’t do GE, you lose the collateral.
I understand. But I also don’t want to lose the data. I now have one node with free space taking all the ingress. As soon as the others are near empty, I’ll do GE.
But why? If you don’t need to reclaim space now, keep running the nodes normally. If you need the space back, rm -rf the node of the right size and you get the space back immediately.
What are those dances with stopping ingress, then waiting, then GE accomplishing?
You may start GE right away; the node wouldn’t transfer data anymore, it just needs to be online for the next 30 days to successfully finish GE.
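For reference, on a Docker node starting GE is an interactive command, roughly like this sketch (assuming the standard Docker layout; the container name and paths are placeholders for your setup):

```shell
# Sketch, assuming the standard Docker setup: this lists the satellites
# and lets you pick which ones to gracefully exit from.
docker exec -it storagenode /app/storagenode exit-satellite \
    --config-dir /app/config --identity-dir /app/identity
```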
Let me explain my situation a bit more. I have 7 nodes on 7 hard disks running on 4 computers, all behind one IP: 56 TB of total capacity, of which 16 TB is in use. This 16 TB has been stable for several months now. I decided to bring my setup down to 1 computer and 2 nodes/HDDs with 22 TB of capacity. What is the best way to accomplish this? My idea was to: 1) stop ingress on 5 nodes; 2) when disk usage is low, GE the 5 nodes; 3) shut down 3 computers. I didn’t want to start with GE because I’d lose all data on these nodes. Am I on the right track here?
I’d shut down five nodes, and rsync them to the same physical disk. Then I’d set their max size to their current size and power them on again.
While there is a small inefficiency in running 5 smaller nodes plus a single larger node on one 16 TB disk, compared to running just 1 node with 16 TB on another disk, doing it this way is currently the only way forward that retains the data you already store on the nodes on the other disks.
I just don’t understand why you need to wait for disk usage to drop, with ingress stopped, before running a GE. GE will take 30 days regardless of the size, and your node would still be paid for that time.
If you start the GE, that data will be lost anyway after the GE completes. So what’s the point of the delay?
If I do GE now, I will lose all data on that node in 30 days. I don’t want to lose this data; I just want to reduce the number of computers and HDDs running for Storj. Running a node without ingress for several weeks/months, I assume disk usage will go down to a point where the data loss no longer matters to me, and then I’ll do GE.
I’ll go with the suggestion of @Ottetal and rsync all data to one computer and two disks. From there I can wait until the data is almost gone and do GE. Thank you all!
I’m glad you understood. Reading my comment now, it makes little grammatical or logical sense; I’ve edited it to be clearer.