Bloom filters not deleted?

Looks finished:

docker logs storagenode | grep retain
2024-07-19T12:12:58Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 1975606, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 8720564, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "71h3m40.840352135s", "Retain Status": "enabled"}
ls /trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
2024-07-13  2024-07-16
ls /trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-07-16/ | wc -l
1024

Files are present in all the subfolders. But the Bloom filters are still there, even two of them:

ls config/retain/
pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1720807199998472000
pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1720979999999803000
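
As a side note, the numeric suffix in those filenames looks like a Unix timestamp in nanoseconds (just my assumption, I have not verified it in the code). A quick way to decode it with GNU date:

ts=1720807199998472000
date -u -d @$((ts / 1000000000))
# prints Fri Jul 12 17:59:59 UTC 2024 for the first file above, if the assumption holds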

Does this look correct?

For me, yes. I believe they are still being processed, considering the limitation that a single BF is processed until it's finished. My own nodes don't have old BF files after they are processed; however, they are not restarted so often.

However, I insist that you need to join a

Your nodes are unique; they discover so many issues that it should help to improve the process.

Interesting.

$ ls -l /mnt/x/storagenode2/retain/
total 13416
-rw-r--r-- 1 root root 6479043 Jul 13 19:00 pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1720634399999243000
-rw-r--r-- 1 root root 6976478 Jul 20 23:41 pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1721228882483066000.pb
-rw-r--r-- 1 root root  276813 Jul 19 08:55 qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa-1721325594201915000.pb

The extension is now different. This node was recently upgraded to 1.108.3, and:

$ grep retain /mnt/x/storagenode2/storagenode.log | grep -E "Prepar|Move"
2024-07-06T03:15:09Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 6437842, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-06T07:22:00Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 884932, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-07T01:19:56Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 565382, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2107612, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "17h57m55.4957687s", "Retain Status": "enabled"}
2024-07-07T02:44:16Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 798418, "Failed to delete": 1, "Pieces failed to read": 0, "Pieces count": 12015492, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "23h29m6.3105639s", "Retain Status": "enabled"}
2024-07-09T19:15:29Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 6496965, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T04:22:55Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 6491450, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T04:24:20Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 6491450, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T19:51:58Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 1109366, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2527165, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "15h27m37.8868166s", "Retain Status": "enabled"}
2024-07-19T01:40:57Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-07T17:59:59Z", "Filter Size": 6479043, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}

So, the BF from 2024-07-13 for the SLC satellite is perhaps still being processed.
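
To re-check later without scrolling through the whole log, the same grep can be narrowed to that satellite and the last few events; if the newest line is a "Prepared to run a Retain request" with no "Moved pieces to trash during retain" after it, the run is most likely still in progress:

$ grep retain /mnt/x/storagenode2/storagenode.log | grep 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE | grep -E "Prepar|Move" | tail -n 4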

I don’t know. The subfolders already have files in them:

ls trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-07-13/ww/ | wc -l
3308

And the trash cleanup is already running on them, deleting the files:

ls trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-07-13/dz/ | wc -l
0
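
To see how far the cleanup has progressed across all prefix subfolders, rather than spot-checking single ones, a count per subfolder works (same path as above):

for d in trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-07-13/*/; do echo "$d $(ls "$d" | wc -l)"; done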

Please search for BF files that were not deleted instead.
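
For example, something like this (the config/retain path is taken from your listing above, and the 7-day threshold is just a guess; adjust both to your setup):

find config/retain/ -type f -mtime +7 -ls
docker logs storagenode | grep retain | grep "Moved pieces to trash" | tail -n 5

Any BF file that is noticeably older than the last finished retain run for its satellite would be a good candidate to report.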