Yes, you can do it, but the databases will not be updated with that deleted data, so your node will still believe it is full. To make it correct you would need to restart the node so the used-space filewalker can update the databases with the correct stats.
Not sure that's worth it.
This suggestion
could help and would avoid re-running the used-space filewalker.
There are 22k files moved into each trash folder, and they are now being deleted in parallel on my node.
It takes 30 min to move these 22k files from the blobs folder to the trash folder and then 25 min to delete them… and that is multiplied by 1024. Right now there are 270 folders left to move and some 900 left to delete…
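A rough back-of-the-envelope estimate of what those timings imply (the 30 min / 25 min figures and the 270 / 900 folder counts come from the post above; the arithmetic is mine, and the two phases run in parallel, so they overlap rather than add up):

```python
# Estimate remaining trash-processing time from the reported per-folder timings.
MOVE_MIN_PER_FOLDER = 30    # ~30 min to move one folder's 22k files to trash
DELETE_MIN_PER_FOLDER = 25  # ~25 min to delete one folder from trash
FOLDERS_TO_MOVE = 270       # folders still waiting to be moved
FOLDERS_TO_DELETE = 900     # folders still waiting to be deleted

move_hours = FOLDERS_TO_MOVE * MOVE_MIN_PER_FOLDER / 60
delete_hours = FOLDERS_TO_DELETE * DELETE_MIN_PER_FOLDER / 60

print(f"moving:   ~{move_hours:.0f} h (~{move_hours / 24:.1f} days)")
print(f"deleting: ~{delete_hours:.0f} h (~{delete_hours / 24:.1f} days)")
```

On a single SMR-free but still seek-bound HDD, that works out to roughly 135 hours of moving and 375 hours of deleting, which matches the "days of filewalking" experience reported here.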
The node is on a WD101EFBX drive. It seems the badger cache does not help here…
That may apply only to the collector, as a limit per hour interval (whose value is also set in the config). Not sure, I haven't looked at the code. Credit possibly to @Alexey; I think he mentioned what that variable did a few months ago.
There are two parameters which are used in the Deleter:
```
--storage2.delete-queue-size int   size of the piece delete queue (default 10000)
--storage2.delete-workers int      how many piece delete workers (default 1)
```
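For reference, the same two settings can also be placed in the node's config.yaml instead of being passed as CLI flags (the key names below are assumed to mirror the flag names, as storagenode options generally do; the values shown are just the defaults):

```yaml
# Assumed config.yaml equivalents of the two Deleter flags above
storage2.delete-queue-size: 10000
storage2.delete-workers: 1
```

Raising delete-workers may speed up trash deletion on a node with spare IOPS, at the cost of more load on an already seek-bound drive.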
Yes, even from libuplink (by default). I believe this feature is disabled on all satellites.
But perhaps you can still call it explicitly in your code using libuplink.
Since the bug was fixed, my trash has been growing day by day. Today +1 TB of overused space appeared. Is this the expected behavior? I don't have any folders older than 7 days in the trash folders.