When will "Uncollected Garbage" be deleted?

Yes, you can do it, but it will result in the databases not being updated with that deleted data, so your node will still believe that it's full. To make it correct, you would need to restart the node to allow the used-space filewalker to update the databases with correct stats.
Not sure that's worth it.
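
For reference, whether that filewalker runs on startup is controlled by this config.yaml entry (a sketch; the option exists in recent storagenode versions and shown here with what I believe is the default, so check your own config):

    # scan all pieces on startup to correct the database stats
    storage2.piece-scan-on-startup: true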

This suggestion could help and would avoid having to re-run the used-space filewalker.

Now that we have a lot of actual trash, is this line in config.yaml still live?

storage2.delete-queue-size: 10000

… and does it affect trash emptying or is it something else?

And yeah, occasionally an “emptying trash” task can take hours and hours, especially if it’s a ton of files (sometimes more than 10,000).

To the mooon


(screenshot: Screenshot_2024-09-03_100444)

:frowning_face:


Read and understand the graph next to it.

There are 22k files moved into each trash folder and now being deleted in parallel on my node.
It takes 30 min to move these 22k files from the blobs folder to the trash folder and then 25 min to delete them… and that is multiplied 1024 times… actually there are 270 folders left to move and some 900 to delete…
The node is on a WD101EFBX drive. It seems the badger cache does not help here… :frowning:
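
A rough back-of-the-envelope from those numbers, assuming the per-folder times stay roughly constant:

    270 folders left to move   × 30 min ≈  8,100 min ≈ 135 h
    900 folders left to delete × 25 min ≈ 22,500 min ≈ 375 h

So even if moving and deleting overlap, the delete backlog alone is on the order of two weeks.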

That may be only for the collector, a limit per hourly interval (whose value is also set in the config). Not sure, haven’t looked at the code. Credit possibly: @Alexey, I think it’s something he mentioned that variable did, a few months ago.

1/4 cent

Well, I was doubtful about this for some reason.

Woke up this morning to 1.16TB free.


Sorry, what is the screenshot? The green line is free space on the Storj network getting much bigger from all the deletes?

Yes!
There was a lot of trash, besides the TTL data which is expiring.

Please check the graph again now. It looks like it was a temporary glitch in reporting usage on US1, which is fixed now.


Let’s see… it looks like we’ve roughly gone from 52 to 58, which means we’ve cleaned up a neat 6 PB of trash! :crazy_face:

(screenshot: Screenshot_2024-09-03_184924)

Yes, much better!

There are two parameters which are used in the Deleter:

      --storage2.delete-queue-size int                    size of the piece delete queue (default 10000)
      --storage2.delete-workers int                       how many piece delete workers (default 1)

However, they are used only for direct deletions from the satellite/uplink. I’m not sure they’re used in the trash filewalker; at least I didn’t find them in the code: storj/storagenode/pieces/trashchore.go at c8113bdc2fe66d58ad7eb33ade53854252fb1e2e · storj/storj · GitHub
They also don’t seem to be used in the TTL collector.
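
For intuition, here is a minimal Go sketch (not the actual storj code) of how a bounded delete queue with a fixed worker count typically behaves; the queue and worker sizes mirror the two flags above, and the piece path is a hypothetical placeholder:

    // Sketch of a bounded delete queue: producers enqueue piece paths
    // (blocking once the queue is full), and a fixed number of worker
    // goroutines drain the queue and remove files.
    package main

    import (
        "fmt"
        "os"
        "sync"
    )

    func main() {
        const queueSize = 10000 // mirrors --storage2.delete-queue-size
        const workers = 1       // mirrors --storage2.delete-workers

        queue := make(chan string, queueSize)
        var wg sync.WaitGroup

        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for path := range queue {
                    // Each worker deletes one piece file at a time.
                    if err := os.Remove(path); err != nil {
                        fmt.Println("delete failed:", err)
                    }
                }
            }()
        }

        queue <- "/storage/blobs/example-piece" // hypothetical path
        close(queue)
        wg.Wait()
    }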


And direct deletions from the satellite are not a thing that happens anymore, right?

Yes, even from libuplink (by default). I believe this feature is disabled on all satellites.
But perhaps you can still call it explicitly in your code using libuplink.
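
A sketch of such an explicit call with storj.io/uplink; the access grant, bucket, and key are placeholders:

    // Explicit object delete via libuplink (storj.io/uplink).
    package main

    import (
        "context"
        "log"

        "storj.io/uplink"
    )

    func main() {
        ctx := context.Background()

        access, err := uplink.ParseAccess("my-access-grant") // placeholder
        if err != nil {
            log.Fatal(err)
        }

        project, err := uplink.OpenProject(ctx, access)
        if err != nil {
            log.Fatal(err)
        }
        defer project.Close()

        // DeleteObject removes the object; whether nodes receive a
        // direct delete or the pieces go through garbage collection
        // depends on the satellite configuration.
        if _, err := project.DeleteObject(ctx, "my-bucket", "my-key"); err != nil {
            log.Fatal(err)
        }
    }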

4 posts were split to a new topic: When the TTL data registration issue will be released?

Since the bug was fixed, my trash has been growing day by day. Today, +1 TB of overused space has appeared. Is this the expected behavior? I don’t have any folders older than 7 days in the trash folders.

That is the multinode dashboard, yes? If not, I don’t understand how you can have free and overused space at the same time.

Yes, multinode. The overused appeared last night. I had free space all the time.