I think I have a trash accounting problem

I have several nodes that currently do not get ingress because they appear full based on used space + free + trash.
The trash deleter is working and is deleting date folders in the trash, but somehow, for weeks if not months, the trash figure keeps going up, never down.

So something does not seem right. Is there a simple way to completely reset the trash count and recount what is actually in the trash? It really looks like something is wrong with the database but it would be a pain to perform the recreation procedure for each node. So maybe someone has a better idea?

I solved it by throwing away the folders older than 2 weeks by hand…


This might be related to a new bug in the filewalker: High Trash usage - 27 TB - no more uploads - #29 by Alexey

I’m sure this is not recommended, but on my Raspberry Pi, if you want the node to think you have zero trash, the following seems to work:
First, stop the node.
In your config file, disable the file walker by setting storage2.piece-scan-on-startup: false
Go into the directory that contains your databases and make a backup of your piece_spaced_used.db for safety. Then run the following command:

sudo sqlite3 piece_spaced_used.db "UPDATE piece_space_used SET total = 0 WHERE satellite_id = 'trashtotal';"
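
If you want to check the value before and after, the same row can be queried; this just mirrors the WHERE clause of the UPDATE above:

sudo sqlite3 piece_spaced_used.db "SELECT total FROM piece_space_used WHERE satellite_id = 'trashtotal';"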

cross your fingers and start the node.

On windows you can edit the database with a graphical program like DBeaver.

I’m not sure this is really any easier than the recreation procedure you are trying to avoid but at least you can skip running the file walker.

Again, not recommended, only attempt if you enjoy tinkering and possibly breaking things…

Or, you can do the work around that Alexey describes in the link I posted. This should allow the file walker to run successfully which should update your trash values to what is actually on your disk, but that’s basically another database recreation procedure that you said you want to avoid.

The usual restart with enabled scan on startup should fix the issue.
If it didn’t, then could you please try a workaround for the used space calculation?

I can imagine that this solution might even work in some cases. I might have multiple different issues on different nodes.

Not really, but maybe as a last resort. First I think I need to understand better where the problems are. A closer look has revealed that the nodes might have multiple different issues.

Ok so a normal complete startup scan will also scan the trash dirs? I wasn’t sure of that.

I found some issues so far:

  1. On one node the trash deleter was not able to delete a piece file with a seemingly corrupted name. So the subfolder was not empty and the date folder did not get deleted. I have manually deleted the corrupted file and maybe that will get the node back on track. Could it be that the deletion process only deletes piece files and nothing else? And if retain moves a weird piece into a subfolder, then the deletion will no longer work?
  2. The nodes might simply be slow. I have seen some context canceled errors for the used space walker. The question is, how is this error handled? Will it still stop and resume, or will it in such a case start from scratch?
  3. Badger and startup scan are enabled on all of these nodes.
  4. Log lines for used-space filewalker look normal. I have a starting and a completing line for each satellite.
  5. Now on one node I have this: 2024-11-26T18:54:03Z INFO pieces:trash emptying trash started {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"} which is the last trash-emptying start line; it has not finished yet. Does this mean it has already been running for almost 2 days, and can I expect that normally the trash space would be updated when I see the line that it is finished? (See the grep line right after this list.)
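
To keep track of that I’m simply grepping the log for the trash chore lines; the log path here is just a placeholder for wherever your node logs to:

grep "emptying trash" /path/to/storagenode.log | grep -E "started|finished|failed"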

It could still be an issue of the introduced bug that was posted. However, I have had the problem that the trash does not really get down to an acceptable level for quite some months now, and I was hoping it would fix itself once there is less retain and trash-deletion activity.

yes.

Kind of expected.

Likely yes, because it needs to calculate the space occupied by these pieces in order to subtract it from the used space on deletion. Why should it account for random files? There is no separate process to calculate the space used by the trash and no separate loop to delete these files; it’s one loop, to speed up the process. The folder deletion is the final step, but only if the folder is empty.

These would likely be repeated by a non-lazy used-space-filewalker. This is confirmed by you too:

Perhaps. You can be sure if you check the /mon/ps endpoint on the debug port.
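
For example, with the debug endpoint enabled (debug.addr in your config; the address and port below are only an example), something like this should list what is currently running:

curl -s http://127.0.0.1:5999/mon/ps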

You may try to remove them with a command:

find /mnt/x/storagenode/storage/trash -type f -ctime +8 -print -delete
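
If you want a dry run first, the same command without -delete just counts what would be removed:

find /mnt/x/storagenode/storage/trash -type f -ctime +8 | wc -l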

Perhaps it would be faster; then restart the node to run a scan on startup.

It looks like the process was as follows. The Bloom filter retain found a file with a corrupted name that obviously was not in the filter. It moved it into the respective trash date-subfolder. And now the trash deleter cannot finish, because it will not delete this piece and errors out since the folder is not empty. This does not make sense to me. The trash deleter should delete anything deletable it finds in the trash folder.

Well, the question is: will it resume where it was, since we have the save-state-resume feature, or will it break on the error, not save, and start from the beginning? This is a huge difference.

So that process does not update free space as it goes? I mean, if it takes 5 days from start to finish, is there no update that free space is available during those 5 days?

That may be interesting, because we are still seeing crazy amounts of folders in their processing order waiting to be deleted:

"datefolders":{"pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa":{"names":["2024-11-22","2024-11-26","2024-11-25"],"count":3},"qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa":{"names":["2024-11-26","2024-11-23","2024-11-25"],"count":3},"ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa":{"names":["2024-11-22","2024-10-15","2024-10-14","2024-11-15","2024-10-02","2024-09-05","2024-07-22","2024-08-23","2024-09-30","2024-08-18","2024-08-31","2024-10-05","2024-08-06","2024-08-30","2024-08-08","2024-09-02","2024-07-07","2024-10-12","2024-08-26","2024-08-14","2024-08-07","2024-07-15","2024-11-09","2024-07-12","2024-07-21","2024-08-17","2024-08-22"],"count":27},"v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa":{"names":["2024-11-23","2024-11-28","2024-11-24"],"count":3}}}

Do you have evidence that it didn’t continue to delete the other folders?
I also think that a corrupted piece may appear in the trash folder itself, not only in a blobs folder.

It should. But probably it has a bug:

It should update the usage when it has finished the batch deletion and flushed it to the database, after 1 hour by default. However, if there was an interruption in the middle of the process, I think the progress could be lost and you would have a discrepancy there, which should be fixed by the used-space-filewalker enabled on startup. For your system I would recommend never disabling the scan on startup, because your setup seems to be unstable.
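
If I recall the option name correctly, that flush interval is controlled by this setting, so you could shorten it if the hourly default is too coarse:

      --storage2.cache-sync-interval duration                    how often the space used cache is synced to persistent storage (default 1h0m0s)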


This is the error I saw:

2024-11-26T18:51:01Z    ERROR   piecestore:cache        error getting current used space for trash:     {"Process": "storagenode", "error": "filestore error: failed to walk trash namespace a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec0000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec0000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:296\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:102\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-11-26T18:51:01Z    ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06/xu: directory not empty; remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06: directory not empty; context canceled; context canceled", "errorVerbose": "pieces error: filestore error: remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06/xu: directory not empty; remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06: directory not empty; context canceled; context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash:226\n\tstorj.io/storj/storagenode/blobstore/statcache.(*CachedStatBlobstore).EmptyTrash:114\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:467\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:462\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:86\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}

The xu directory was the one that had the piece with the corrupted name in it. The xu folder was the only folder left in the date folder, but of course it did not get deleted. What I don’t know is how the app handles this. It may not record the freed space in the database if the process did not finish without error.

I believe it appeared in the blobs folder and then got moved into the trash folder.

Not good. Ok let’s see if it updates when it finishes one day.

Now that’s hilarious
2 cents


Then I suppose that all other expired folders get deleted.

It should flush the cached calculated usage to the database after an hour.

Do you have the same problem?

So far no change. The trash value is still unchanged on this one. Emptying is still running:

2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

Currently it appears that the database only gets updated after a date folder has been deleted. So if it takes a long time to delete the subfolders, no free-space updates happen in the meantime. If that’s the case, then your node can sit for a week claiming to be full while in reality it has empty space and could receive ingress.
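
In the meantime I can at least compare what is physically sitting in the trash with what the node reports, for example, reusing the path layout from the find command above:

du -sh /mnt/x/storagenode/storage/trash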

But let’s wait for the process to finish and check if then the value gets down.

You may try to change how many pieces it should delete in a batch:

      --storage2.delete-queue-size int                           size of the piece delete queue (default 10000)
      --storage2.delete-workers int                              how many piece delete workers (default 1)
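
If I’m not mistaken, they can also be set in config.yaml like any other option; the values here are just an illustration, not a recommendation:

storage2.delete-queue-size: 20000
storage2.delete-workers: 2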

At least from what I see on this node it seems it got stuck in a nasty loop:

2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "232.891µs"}
2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "127.134µs"}
2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "140.776µs"}
2024-11-24T10:53:53Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-24T18:48:55Z    ERROR   piecestore:cache        error getting current used space for trash:     {"Process": "storagenode", "error": "filestore error: failed to walk trash namespace a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec0000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec0000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:296\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:102\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-11-24T18:49:02Z    ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06/xu: directory not empty; remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06: directory not empty; context canceled; context canceled", "errorVerbose": "pieces error: filestore error: remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06/xu: directory not empty; remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06: directory not empty; context canceled; context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash:226\n\tstorj.io/storj/storagenode/blobstore/statcache.(*CachedStatBlobstore).EmptyTrash:114\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:467\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:462\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:86\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-11-24T18:55:11Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-24T18:55:12Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "458.627229ms"}
2024-11-24T18:55:12Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-24T18:55:12Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "65.573256ms"}
2024-11-24T18:55:12Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-24T18:55:12Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "60.062514ms"}
2024-11-24T18:55:12Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-11-24T19:36:25Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 4327, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 157398, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "41m13.83403172s", "Retain Status": "enabled"}
2024-11-24T20:14:00Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 37196, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 188676, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "37m34.207028901s", "Retain Status": "enabled"}
2024-11-24T22:12:28Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 70, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 9706, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h20m15.784670106s", "Retain Status": "enabled"}
2024-11-25T23:23:28Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 321, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 9668, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h52m29.905504868s", "Retain Status": "enabled"}
2024-11-26T18:09:58Z    ERROR   db      failed to stat blob in trash    {"Process": "storagenode", "namespace": "ootPBOELroXWf0xsuCv41MDw9HqOpyYnUk3rbsAAAAA=", "key": "wpjZCFUf05FnLoGsz57e7BbYrEqeZ7t0uIk2EaQuQDM=", "error": "lstat storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-14/yk/mnsccvd7jzczzoqgwm7hw65qlnrlcktzt3w5fyre3bdjboiazq.sj1: no such file or directory"}
2024-11-26T18:09:59Z    ERROR   db      failed to stat blob in trash    {"Process": "storagenode", "namespace": "ootPBOELroXWf0xsuCv41MDw9HqOpyYnUk3rbsAAAAA=", "key": "wooYpHCQjxRC1qrWKVVwwYpOihtyV0LZtgpEqHO32p0=", "error": "lstat storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-14/yk/fbrjdqschriqwwvllcsvlqygfe5cq3ojlufwnwbjckq45x3koq.sj1: no such file or directory"}
2024-11-26T18:09:59Z    ERROR   db      failed to stat blob in trash    {"Process": "storagenode", "namespace": "ootPBOELroXWf0xsuCv41MDw9HqOpyYnUk3rbsAAAAA=", "key": "wqhluYKfB+t0+afdpGyhZSdX6zWDM+HO3u/2zGPR3zo=", "error": "lstat storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-14/yk/uglomct4d6w5hzu7o2i3fbmutvp2zvqmz6dtw6573myy6r345a.sj1: no such file or directory"}
2024-11-26T18:09:59Z    ERROR   db      failed to stat blob in trash    {"Process": "storagenode", "namespace": "ootPBOELroXWf0xsuCv41MDw9HqOpyYnUk3rbsAAAAA=", "key": "wo7NkPqkuzQQVIzwly6o+aaRBMXHRvTPcWX47nDYbug=", "error": "lstat storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-14/yk/hm3eh2us5tiecurtyjolvi7gtjcbgfy5dpjt3rmx4o44gyn3ua.sj1: no such file or directory"}
2024-11-26T18:09:59Z    ERROR   db      failed to stat blob in trash    {"Process": "storagenode", "namespace": "ootPBOELroXWf0xsuCv41MDw9HqOpyYnUk3rbsAAAAA=", "key": "wpHqfBSRfKIItHS3RfVWf4Nj6vzsvpmbZhQaG0+4oCY=", "error": "lstat storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-14/yk/i6u7ausf6kecfuos3ul5kwp6bwh2x45s7jtg3gcqnbwt5yuata.sj1: no such file or directory"}
2024-11-26T18:09:59Z    ERROR   db      failed to stat blob in trash    {"Process": "storagenode", "namespace": "ootPBOELroXWf0xsuCv41MDw9HqOpyYnUk3rbsAAAAA=", "key": "wpLkfirEM5U0hCzQkeUsDAWZQOVP1yqZY5XUwMLP7ZY=", "error": "lstat storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-14/yk/joi7rkyqzzkneeftijdzjmbqczsqhfj7lsvgldsxkmbqwp5wla.sj1: no such file or directory"}
2024-11-26T18:51:01Z    ERROR   piecestore:cache        error getting current used space for trash:     {"Process": "storagenode", "error": "filestore error: failed to walk trash namespace a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec0000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec0000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:296\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:102\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-11-26T18:51:01Z    ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06/xu: directory not empty; remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06: directory not empty; context canceled; context canceled", "errorVerbose": "pieces error: filestore error: remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06/xu: directory not empty; remove storage/trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-11-06: directory not empty; context canceled; context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash:226\n\tstorj.io/storj/storagenode/blobstore/statcache.(*CachedStatBlobstore).EmptyTrash:114\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:467\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:462\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:86\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "1.188554ms"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "3.129672ms"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash finished {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "214.867µs"}
2024-11-26T18:54:03Z    INFO    pieces:trash    emptying trash started  {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}

It seems that when it could not delete the non-empty directory, it started from scratch with the first satellite until it reached the same directory again, and then restarted again.

I don’t know what that batch is for. Is it the batch after which the database gets updated? Or what does it do?

I thought that you had already deleted this?

I see. Then I would suggest removing the remaining files in that subfolder.
However, do you see other expired trash date-subfolders for that satellite?
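
If there are more stray files like that one, a find like this should list anything in the trash that is not a piece file (the .sj1 extension is taken from your log lines):

find /mnt/x/storagenode/storage/trash -type f ! -name "*.sj1"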

Yes, but this is also the log from when it was not yet deleted. It shows that it tried to delete, failed, restarted and tried again.

I did that. But my suggestion would be that the node should delete everything it finds in the trash by itself.

These are the folders for that node and satellite: "ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa":{"names":["2024-11-28","2024-11-06","2024-11-14","2024-11-22","2024-10-19","2024-10-18","2024-10-21"],"count":7}
It is right now deleting in the 2024-11-14 folder.


Please submit it as a GitHub issue. It’s a low priority affecting 0% of nodes, so maybe somewhere in the future it could be implemented. Or maybe the Community would fix it.
I cannot reproduce the issue either on my nodes or on storj-up, so it sounds like it would never be fixed by the team…

Likely it’s a batch to delete something … in a batch? The database update is an async process, so the databases (on the disk) would be updated one hour after the modification (by default).

Sure, I’ve seen that before; there is/was a file contention issue in Windows that mainly affects the security descriptors, which can cause an inability to delete files. However, I was primarily just amused by the déjà vu, since you guys had an extensive conversation about this about two months ago.

Cheers
2 cents
Julio