No bloom filters from AP1, US1 and EU1

My nodes have seen no bloom filters from those 3 satellites for more than a week, and unpaid trash is piling up every day… What is the reason for not creating bloom filters?

I have a node with a similar issue.
While I see trash processing running for Saltlake, there is no evidence that it is running for the other satellites.
I don’t even have any date folders left in the trash on this node for AP1, US1 and EU1.
So there can’t have been any bloom filter processing for a long time now.


So:

  1. US1, 2024-07-06T03:15:09Z
  2. EU1, 2024-07-06T07:22:00Z
  3. SLC, 2024-07-09T19:15:29Z and 2024-07-13T04:22:55Z
  4. AP1, 2024-06-29T00:23:28Z (however, my node stores only 200.17GB for AP1 and there is probably no garbage on my nodes, since the blobs folder occupies close to that amount; I'll also verify on the disk)
2024-06-29T00:23:28Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 26561, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-29T00:28:17Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 2904, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 48124, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "4m49.5692658s", "Retain Status": "enabled"}

I checked for AP1:

$ find /mnt/x/storagenode2/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa \
       /mnt/w/storagenode5/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa \
       /mnt/y/storagenode3/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa \
       -type f -exec du --block-size=1000 '{}' ';' \
  | awk '{total+=$1; count++} END {print "TOTAL", total/1000000, "GB\ncount", count}'
TOTAL 224.487 GB
count 785077

So, seems no garbage.
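As an aside, the per-file `du` above forks once per file, which is slow on large blob trees. Letting `du` sum whole directories gives nearly the same total much faster. A sketch (GNU coreutils assumed), demoed on a scratch directory; point it at the blobs/<satellite> folders instead:

```shell
# Faster check: one du per directory tree instead of one per file.
# Demo on a scratch dir; replace "$dirs" with your blobs/<satellite> paths.
dirs=$(mktemp -d)
head -c 1000 /dev/zero > "$dirs/demo.sj1"
du -sc --block-size=1000 "$dirs" | tail -n1   # grand total, in 1000-byte blocks
find "$dirs" -type f | wc -l                  # piece count
```

Note the totals differ slightly from the per-file sum, since `du -s` also counts the directory inodes themselves.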

Good that you are getting filters, but it seems that many of us don’t, for whatever reason.
I now have 10 nodes; 6 just started this month, but the other 4 have received only one other filter in the last 7 days. One node from last month generally got only one filter apart from Saltlake. Something must be stuck and not working properly.

Summary
/mnt/storj/node001_2021.10/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   └── 2024-07-09
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa

6 directories
/mnt/storj/node002_2022.04/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-07-09
│   ├── 2024-07-15
│   └── 2024-07-17
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa

8 directories
/mnt/storj/node003_2023.12/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   └── 2024-07-09
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa

6 directories
/mnt/storj/node004_2024.06/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-07-11
│   ├── 2024-07-13
│   ├── 2024-07-15
│   └── 2024-07-17
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
    └── 2024-07-18

8 directories
/mnt/storj/node005_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    ├── 2024-07-15
    └── 2024-07-17

4 directories
/mnt/storj/node006_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    ├── 2024-07-15
    └── 2024-07-17

4 directories
/mnt/storj/node007_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   └── 2024-07-17
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
    └── 2024-07-18

5 directories
/mnt/storj/node008_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    └── 2024-07-17

3 directories
/mnt/storj/node009_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    ├── 2024-07-15
    └── 2024-07-17

4 directories
/mnt/storj/node010_2024.07/storage/trash/
└── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
    ├── 2024-07-15
    └── 2024-07-17

4 directories

@Alexey What is the company’s plan for the bloom filter schedule? How often should they arrive if everything is working?

The first 4 explanations that come to my mind:

  1. Bloom filter execution on the node takes forever, and the missing bloom filters are just queued up for later. The best way to check would be the /mon/ps endpoint, to see if a bloom filter instance is currently running.
  2. The bloom filter scans all pieces and doesn’t find a single piece to move. It is rare, but it happens sometimes.
  3. With enough delay (I have seen it on my nodes), the bloom filter execution happens after the 7 days are over. The timestamp in the trash folder is not the execution time; it is the bloom filter creation time, 7 days earlier. So it is possible for the bloom filter to run 7 days late and basically delete all the evidence from disk a minute later. It will look like there was no bloom filter. → Better to check the log messages instead of the trash folder.
  4. The node was offline or dropping packets for some reason and missed the bloom filter. I made this mistake myself some time ago: something was off with my port forwarding rule, and a packet here and there got dropped. Not enough to count as offline, but I could see my online score slowly going down. It took me a moment to find the root cause and fix it. A similar problem is possible with DynDNS entries; I learned that the hard way too and had to switch to a more stable DynDNS provider.
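For point 3, this is roughly how I check the logs instead of the trash folders. The grep patterns match the retain and gc-filewalker messages shown elsewhere in this thread; the embedded sample line is only there so the sketch runs standalone. Feed it your real log (e.g. `docker logs storagenode 2>&1`) instead:

```shell
# Sample retain line (format copied from a real log) so the sketch runs standalone.
log=$(mktemp)
cat > "$log" <<'EOF'
2024-06-29T00:23:28Z    INFO    retain  Prepared to run a Retain request.       {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
EOF
# One line per retain/GC event, tagged with the satellite it belongs to
# (retain logs use "Satellite ID", the gc-filewalker uses "satelliteID"):
grep -E 'Prepared to run a Retain request|gc-filewalker started' "$log" \
  | grep -oE '"(Satellite ID|satelliteID)": "[^"]+"'
```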

There is nothing in retain and nothing in the trash folders for those 3 satellites, while SLC is working as it should. So none of those explanations apply.


I have an htop instance open 24/7, filtering for “filewalker”. I can confirm that my nodes (all of them) did not receive timely BFs for the three satellites in the topic’s title over the past days, except for today’s EU1 one (just one filter, across the nodes).

How can I improve this?
I’m running a node on a Xeon Scalable with a dedicated 20TB Seagate Exos and a DC600M as LVM cache, and yet the walk took 44 hours for Saltlake on July 15th and 33 hours on July 12th.
Would moving the node to all-flash improve this significantly, or might this be a CPU limitation or not enough RAM? I believe this machine has 768GB of RAM installed, but unfortunately not all of it is available just for the storagenode.
Or would mirroring two or more 20TB drives improve the read speeds?
Also, there were days with no retain runs, and even on days left free for the filewalker I do not see any EU retain for the past two weeks. And the last one didn’t account for the >20% of data that was removed from the EU satellite at the end of June (as it was generated before), which makes it almost three weeks of unpaid EU trash by now.
But if this can be improved by beefing up the server, please let me know how exactly; I have some €€€ lying around that I can throw at it, but I have to be certain it won’t be a useless investment.

Thank you for your thoughts.
I use a script from a friend that reads out the status of the filewalkers, so I can see which ones have been running since node start and which have finished, and when.

Summary
═══ Node 01 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      6.29 $        Uptime:  10d 16h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:   11.30 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:        11.75 TB                             ││   SL  6d 1h ago   running     5d 7h ago   │
│ Unpaid Data:       5.93 TB                             ││  AP1  unknown     0d 16h ago  10d 16h ago │
│                                                        ││  EU1  unknown     0d 16h ago  4d 11h ago  │
│    Report Deviation: 12.30%                            ││  US1  unknown     1d 16h ago  4d 14h ago  │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 02 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      8.70 $        Uptime:   9d 18h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:   15.54 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:        14.09 TB                             ││   SL  0d 22h ago  running     0d 22h ago  │
│ Unpaid Data:       7.13 TB                             ││  AP1  unknown     0d 18h ago  unknown     │
│                                                        ││  EU1  0d 5h ago   0d 18h ago  unknown     │
│    Report Deviation: 21.96%                            ││  US1  unknown     1d 18h ago  running     │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 03 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:     10.65 $        Uptime:  10d 16h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:   19.10 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:        15.58 TB                             ││   SL  6d 9h ago   running     4d 7h ago   │
│ Unpaid Data:       7.00 TB                             ││  AP1  unknown     0d 16h ago  10d 4h ago  │
│                                                        ││  EU1  unknown     0d 16h ago  10d 4h ago  │
│    Report Deviation: 24.84%                            ││  US1  unknown     1d 16h ago  10d 4h ago  │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 04 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      1.14 $        Uptime:  10d 16h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    2.08 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:         2.60 TB                             ││   SL  1d 5h ago   0d 16h ago  10d 16h ago │
│ Unpaid Data:       1.06 TB                             ││  AP1  unknown     0d 16h ago  10d 16h ago │
│                                                        ││  EU1  0d 11h ago  0d 16h ago  10d 16h ago │
│    Report Deviation: 34.98%                            ││  US1  unknown     0d 16h ago  10d 16h ago │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 05 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      0.45 $        Uptime:   9d 16h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    1.46 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:         1.51 TB                             ││   SL  1d 4h ago   0d 16h ago  9d 16h ago  │
│ Unpaid Data:     261.45 GB                             ││  AP1  unknown     0d 16h ago  9d 16h ago  │
│                                                        ││  EU1  0d 7h ago   0d 16h ago  9d 16h ago  │
│    Report Deviation: 51.59%                            ││  US1  unknown     0d 16h ago  9d 16h ago  │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 06 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      0.17 $        Uptime:   8d 22h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    0.57 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:       738.75 GB                             ││   SL  1d 8h ago   0d 22h ago  8d 22h ago  │
│ Unpaid Data:     240.61 GB                             ││  AP1  unknown     0d 22h ago  unknown     │
│                                                        ││  EU1  0d 3h ago   0d 22h ago  unknown     │
│    Report Deviation: 59.95%                            ││  US1  unknown     0d 22h ago  unknown     │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 07 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      0.18 $        Uptime:   8d 18h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    0.58 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:       733.25 GB                             ││   SL  1d 10h ago  0d 18h ago  unknown     │
│ Unpaid Data:     221.12 GB                             ││  AP1  unknown     0d 18h ago  unknown     │
│                                                        ││  EU1  0d 9h ago   0d 18h ago  unknown     │
│    Report Deviation: 64.73%                            ││  US1  unknown     0d 18h ago  unknown     │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 08 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      0.14 $        Uptime:   7d 22h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    0.50 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:       650.92 GB                             ││   SL  1d 0h ago   0d 22h ago  unknown     │
│ Unpaid Data:     218.48 GB                             ││  AP1  unknown     0d 22h ago  unknown     │
│                                                        ││  EU1  0d 3h ago   0d 22h ago  unknown     │
│    Report Deviation: 62.32%                            ││  US1  unknown     0d 22h ago  unknown     │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 09 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      0.20 $        Uptime:   9d 16h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    0.64 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:       816.52 GB                             ││   SL  1d 3h ago   0d 16h ago  unknown     │
│ Unpaid Data:     239.50 GB                             ││  AP1  unknown     0d 16h ago  unknown     │
│                                                        ││  EU1  0d 4h ago   0d 16h ago  unknown     │
│    Report Deviation: 62.76%                            ││  US1  unknown     0d 16h ago  unknown     │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═══ Node 10 - Detailed information

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current Total:      0.20 $        Uptime:    7d 0h     ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated Total:    0.64 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk Used:       806.82 GB                             ││   SL  1d 8h ago   0d 0h ago   7d 0h ago   │
│ Unpaid Data:     229.91 GB                             ││  AP1  unknown     0d 0h ago   7d 0h ago   │
│                                                        ││  EU1  0d 7h ago   0d 0h ago   7d 0h ago   │
│    Report Deviation: 61.60%                            ││  US1  unknown     0d 0h ago   7d 0h ago   │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘


═════════════════════════════════════════ All Nodes - Summary ═════════════════════════════════════════

┌─────────────────── NODE MAIN STATS ────────────────────┐┌─────────────── FILEWALKER ────────────────┐
│                                                        ││                                           │
│ Current total:     28.12 $                             ││       GARBAGE     TRASH       USED SPACE  │
│ Estimated total:   52.41 $                             ││       COLLECTOR   CLEANUP     FILEWALKER  │
│                                                        ││                                           │
│ Disk used:        49.19 TB                             ││   SL  0 running   3 running   0 running   │
│ Unpaid Data:      22.50 TB                             ││  AP1  0 running   0 running   0 running   │
│                                                        ││  EU1  0 running   0 running   0 running   │
│                                                        ││  US1  0 running   0 running   1 running   │
│                                                        ││                                           │
└────────────────────────────────────────────────────────┘└───────────────────────────────────────────┘
  1. If you tell me exactly how to do this, I will be happy to. I currently have one GC running, but 3 trash filewalkers and 1 used-space walker across 10 nodes.
  2. That is of course possible, but since I haven’t had a filter for so long, I don’t think this is the case this time.
  3. So far I have not been able to observe this; the date has always matched the folder name.
  4. The nodes have been online for about 10 days without interruption, and filters from Saltlake arrive at all nodes. Today, however, some EU1 filters arrived, which is great :slight_smile: So unfortunately that’s not the problem in this case either.

Can you check whether filters were sent that I didn’t receive, for whatever reason, if I give you my node IDs?

That is a lot of RAM. ZFS might be a great option for you. On my previous machine I donated about 100GB of RAM just to ZFS, and it was able to run garbage collection in minutes, or a few hours at most.

Currently I am migrating to a Pi 5, which is at the opposite end of the spectrum. I am still working on it and can’t make recommendations yet.
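For what it’s worth, a minimal sketch of the kind of tuning I mean (assumptions: ZFS on Linux; the pool/dataset name is a placeholder, and the sizes need adjusting to your machine):

```shell
# Cap the ARC at ~100 GiB so metadata for the blobs tree stays resident.
echo $((100 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max
# Optionally bias caching toward metadata on the node dataset; the
# filewalkers are almost pure metadata reads. Trade-off: file data is
# then no longer cached in the ARC for that dataset.
sudo zfs set primarycache=metadata tank/storagenode
```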


Nice one. I like that. Just be aware that the used space numbers you see there will be off the longer the used space filewalker takes. The corresponding fix will be included in v1.109.

74e3afe storagenode/pieces: run cache persistence loop concurrently with the used space calculation

I would like to add one to the list:
5. The bloom filter sender crashed. That would also give some nodes a bloom filter to work with while others don’t get one. It is more likely to happen on SLC and US1 because they are bigger; I would be surprised to see it on AP1.

Usually the bloom filters arrive over the weekend, between Friday and Sunday. So EU1 might be a bit early this week.


EU1 bloom filters arrived for all nodes; nothing from US1 and AP1 so far.

However, Saltlake filters came in every 2 days for some time, so a higher frequency is obviously possible.

Yes, that’s a really great script; let’s see what we can do to improve it further.

What do I have to look out for? There are already quite a lot of entries when I execute “curl localhost:xxxxx/mon/ps”.
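For reference, I currently narrow the output down like this (7777 is a placeholder for whatever port is set in debug.addr):

```shell
# 7777 is a placeholder; use the port from your debug.addr setting.
curl -s localhost:7777/mon/ps | grep -iE 'retain|gc|filewalker' \
  || echo "no matching task (or endpoint unreachable)"
```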

Yes, unfortunately you are right. I would like the used-space filewalker to be faster. I could deactivate the lazy one, but then I would lose more races again. The biggest problem, though, is that the deleted TTL data is not updated. Unfortunately only one of my nodes is on version 108; is the rollout paused? When is version 109 planned?

Then I am afraid that no. 5 currently happens very often for many of us, if you have no other idea why no filters arrive.

I have attached all gc-filewalker entries of one node, where you can see how rarely they occur on this node. By the way, it doesn’t look much different on the others.

Summary
2024-06-25T09:03:23+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-25T09:03:23+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-06-25T09:03:23+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "createdBefore": "2024-06-19T17:59:59Z", "bloomFilterSize": 5474380, "Process": "storagenode"}
2024-06-26T01:56:58+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-29T04:30:57+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-29T04:30:57+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-06-29T04:30:57+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "bloomFilterSize": 249086, "Process": "storagenode", "createdBefore": "2024-06-25T17:59:59Z"}
2024-06-29T04:40:24+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-29T16:04:55+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-29T16:04:55+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-06-29T16:04:55+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "createdBefore": "2024-06-25T17:59:59Z", "bloomFilterSize": 1016054}
2024-06-29T16:32:01+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T07:07:03+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-06T07:07:04+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-07-06T07:07:04+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "createdBefore": "2024-06-29T17:59:59Z", "bloomFilterSize": 6050262}
2024-07-06T09:21:55+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T09:21:55+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-07-06T09:21:55+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "createdBefore": "2024-07-02T17:59:59Z", "bloomFilterSize": 963858, "Process": "storagenode"}
2024-07-06T10:13:43+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T12:43:44+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-09T17:55:15+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-09T17:55:15+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-07-09T17:55:15+02:00       INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-06-30T14:28:54Z", "bloomFilterSize": 17000003}
2024-07-12T14:52:10+02:00       INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}

The schedule right now seems to be roughly once per week (if everything goes well: no OOMs, no kills of the sender, no stuck backup restore, and so on). We increased the frequency for SLC to see how it goes. If it works out, I suppose it will be extended to the remaining satellites.
However, my nodes managed to receive more frequent BFs for EU1 too. I do not have any information on why.
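A quick way to see when each satellite last sent a bloom filter is to pick the latest "Retain request" entry per satellite ID out of the node log. This is only a sketch: the heredoc holds shortened sample lines, and against a real node you would feed the actual log source instead (e.g. `docker logs storagenode 2>&1` or the node's log file).

```shell
# Sketch: latest "Prepared to run a Retain request" timestamp per satellite.
# Replace the heredoc with your real log source, e.g.:
#   docker logs storagenode 2>&1 | awk '...'
awk '/Prepared to run a Retain request/ {
  # Pull the satellite ID out of the JSON-ish payload.
  if (match($0, /"Satellite ID": "[^"]+"/)) {
    sat = substr($0, RSTART + 17, RLENGTH - 18)
    last[sat] = $1   # log lines are chronological, so the last hit wins
  }
}
END { for (s in last) print last[s], s }' <<'EOF'
2024-07-11T11:00:22Z    INFO    retain  Prepared to run a Retain request.       {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-18T04:36:34Z    INFO    retain  Prepared to run a Retain request.       {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-19T08:21:49Z    INFO    retain  Prepared to run a Retain request.       {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
EOF
```

A satellite that has not appeared in this output for well over a week would match the behavior described at the top of the thread.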

This could mean that the reason is

or, as in my case for AP1, there is no garbage (somehow).

I see that too, and it seems weird that there are no deletions on that satellite.
Normally that shouldn’t be the case.

Yes, I think that’s the reason too. I’ll keep my fingers crossed that you can find the reason for the crashes.

1 Like

However, the space reported by the AP1 satellite and the used space on the disk are close to each other, with only a small difference (it could have changed while being calculated).
So it seems nothing gets deleted from what my nodes store :person_shrugging:, thus no garbage.
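For the on-disk side of that comparison, a plain `du -s` over the satellite's blobs folder is a lighter alternative to the per-file `find`/`du` pipeline shown earlier (the folder name is the satellite ID encoded in base32, e.g. `qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa` for AP1). The sketch below uses a throwaway directory with dummy pieces in place of a real node path, so the path and file names are stand-ins:

```shell
# Sketch: report a blobs folder's on-disk size, like the AP1 check above.
# A temp directory with dummy pieces stands in for a real path such as
# storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa (AP1).
blobdir=$(mktemp -d)
head -c 1000000 /dev/zero > "$blobdir/piece1.sj1"
head -c 500000  /dev/zero > "$blobdir/piece2.sj1"
# --block-size=1000 keeps the units decimal, matching the GB math above.
du -s --block-size=1000 "$blobdir" | awk '{printf "TOTAL %.6f GB\n", $1/1000000}'
rm -r "$blobdir"
```

Note that `du` reports allocated disk usage, not the sum of logical file sizes, so the number can differ slightly from what the satellite reports even with zero garbage.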

4 more BFs for SLC have arrived since No bloom filters from AP1, US1 and EU1 - #3 by Alexey

2024-07-11T11:00:22Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 765061, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-11T11:18:24Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 48390, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1344260, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "18m1.2935714s", "Retain Status": "enabled"}
2024-07-13T16:05:09Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-07T17:59:59Z", "Filter Size": 763090, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T16:50:21Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 57986, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1414433, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "45m11.2315435s", "Retain Status": "enabled"}
2024-07-15T04:54:44Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-09T17:59:59Z", "Filter Size": 758805, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-15T05:55:56Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 10720, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1553849, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h1m11.6883003s", "Retain Status": "enabled"}
2024-07-17T02:47:11Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-11T17:59:59Z", "Filter Size": 784602, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-17T02:52:35Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 5372, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1542011, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "5m23.2425607s", "Retain Status": "enabled"}
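To see how much these runs moved to trash in total, the "Deleted pieces" counts can be summed from the log. A small awk sketch, fed with shortened versions of the four SLC lines above (against a real node, pipe in the actual log instead):

```shell
# Sketch: sum "Deleted pieces" across retain runs from log lines.
# The heredoc carries trimmed sample lines; use real log output as needed.
awk -F'"Deleted pieces": ' '/Moved pieces to trash/ {
  split($2, a, ",")   # a[1] is the number right after the key
  total += a[1]
} END { print "total deleted:", total }' <<'EOF'
2024-07-11T11:18:24Z    INFO    retain  Moved pieces to trash during retain     {"Deleted pieces": 48390, "Failed to delete": 0}
2024-07-13T16:50:21Z    INFO    retain  Moved pieces to trash during retain     {"Deleted pieces": 57986, "Failed to delete": 0}
2024-07-15T05:55:56Z    INFO    retain  Moved pieces to trash during retain     {"Deleted pieces": 10720, "Failed to delete": 0}
2024-07-17T02:52:35Z    INFO    retain  Moved pieces to trash during retain     {"Deleted pieces": 5372, "Failed to delete": 0}
EOF
# → total deleted: 122468
```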

An EU1 filter has arrived:

2024-07-18T04:36:34Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-14T17:59:59Z", "Filter Size": 99609, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-18T04:51:29Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 23594, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 213336, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "14m55.3490008s", "Retain Status": "enabled"}

also one for SLC:

2024-07-19T08:21:49Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-13T17:59:59Z", "Filter Size": 887793, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-19T08:58:00Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 46486, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1712821, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "36m10.6503071s", "Retain Status": "enabled"}

And the AP1 filter has arrived too.

2024-07-19T11:02:03Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-15T17:59:54Z", "Filter Size": 26833, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-19T11:05:51Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 3461, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 50037, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "3m47.9899146s", "Retain Status": "enabled"}