In light of the recent wave of large deletes, I have a question (apologies if it has been asked elsewhere, I couldn’t find anything):
Say I have a 3 TB disk with 2.9 TB reserved for Storj (fully utilized). The last 100 GB I'm leaving empty just in case. If a delete filter arrives that requires 200 GB of data to be deleted, will this cause problems on the node?
Or will it move (or does it try to copy?) those 200 GB to the trash folder and then refuse incoming requests, since there is no space left on the node until the pieces are permanently deleted two weeks later (?) or restored?
Can’t say for sure, but we’ve tried to avoid having garbage collection cause problems.
The files will be moved, not copied. The trash directory has to be on the same filesystem as the blobs dir. So that’s good.
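To illustrate why this matters for the space question: on a single filesystem, moving a file into trash is a rename, a metadata-only operation that needs no additional free space. A minimal sketch (the `blobs`/`trash` directory names and the `piece.sj1` filename are just illustrative here, not the node's actual layout logic):

```python
import os
import tempfile

# Both directories live under one temp dir, i.e. on the same filesystem.
base = tempfile.mkdtemp()
blobs = os.path.join(base, "blobs")
trash = os.path.join(base, "trash")
os.makedirs(blobs)
os.makedirs(trash)

# Create a dummy 1 KiB "piece" in the blobs directory.
src = os.path.join(blobs, "piece.sj1")
with open(src, "wb") as f:
    f.write(b"\0" * 1024)

# os.rename only works within one filesystem; across filesystems it
# fails, and a full copy (e.g. shutil.move) would be needed instead --
# which is exactly why trash must share a filesystem with blobs.
dst = os.path.join(trash, "piece.sj1")
os.rename(src, dst)

print(os.path.exists(dst))  # True
print(os.path.exists(src))  # False
```

Because it is a rename, the operation is fast and cannot fail for lack of free space, regardless of how full the disk is.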
It’s only 1 week, but yeah, I’m afraid the space wouldn’t be automatically reclaimed until then.
In that case, other than unforeseen bugs, what is the purpose of the recommendation to reserve 10% of the drive's space?
Asking so I can better gauge how thin I can run it.
I had it very close, below 5 GB of free space on a 5.5 TB volume. That was, uh, kind of an accident. Fragmentation started showing up, but other than that, it was working well. Though given the code is still young, I wouldn't recommend it either; bugs may happen.
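If you do want to run thinner than the recommendation, one option is to watch the headroom yourself. A minimal sketch, assuming `headroom_ok` is a hypothetical helper (not part of the storagenode software) built on the standard `shutil.disk_usage`:

```python
import shutil

def headroom_ok(path: str, min_fraction: float = 0.10) -> bool:
    """Return True if the volume holding `path` still has at least
    `min_fraction` of its total capacity free (default: the ~10%
    headroom discussed above)."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_fraction

# Example: check the root volume against a 0% threshold (always passes).
print(headroom_ok("/", 0.0))  # True
```

A cron job or monitoring hook could call this and alert you before the node gets close to the edge, rather than relying on the reserved allocation alone.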