Has anyone received a bloom filter in the past days from EU1 satellite?
The last one I can remember was from 2024-08-26, and it is already gone from the trash folder.
AP1 sends every 3 days and US1 every 2 days, but EU1 does nothing.
The EU1 trash folder from the 28th is gone. The log shows the last BF from 28 Aug.
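A quick way to check when the last bloom filter arrived is to grep the retain lines from the node log and filter by the EU1 satellite ID (a sketch; adjust the log path to your node, or use docker logs for docker setups):
LOG=/mnt/x/storagenode2/storagenode.log   # adjust to your node's log path
SAT_EU1=12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
grep 'Prepared to run a Retain request' "$LOG" | grep "$SAT_EU1" | tail -n 1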
$ sudo tree -L 2 /mnt/x/storagenode2/storage/trash/
/mnt/x/storagenode2/storage/trash/
├── abforhuxbzyd35blusvrifvdwmfx4hmocsva4vmpp3rgqaaaaaaa
│   └── 2024-04-20
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-26
│   ├── 2024-08-29
│   └── 2024-09-02
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   └── 2024-09-04
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
    └── 2024-09-02
So, it has received a BF on 2024-09-02.
According to the logs:
$ grep retain /mnt/x/storagenode2/storagenode.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
2024-09-02T19:22:55Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-22T17:59:59Z", "Filter Size": 931834, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-02T22:48:34Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 14258, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1806966, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "3h25m38.5395111s", "Retain Status": "enabled"}
Is it possible that your node is processing a BF from another satellite right now? It processes them one by one.
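One way to check whether more bloom filters are still queued is to list the retain cache directory (the cachePath from the log above); the paths below are examples for the default config/retain location:
ls -l /mnt/x/storagenode2/retain/
# or, for a docker node (container name is an example):
docker exec storagenode ls -l /app/config/retain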
Unfortunately, in my case it also looks as if no filters have arrived for my 10 nodes.
Node 3 is still busy with the trash cleanup, but the others are "idle".
/mnt/storj/node001_2021.10/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   ├── 2024-09-05
│   └── 2024-09-06
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-29
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-04
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
20 directories
/mnt/storj/node002_2022.04/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-29
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   └── 2024-09-05
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-29
│   ├── 2024-09-02
│   ├── 2024-09-03
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
20 directories
/mnt/storj/node003_2023.12/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-25
│   ├── 2024-08-28
│   ├── 2024-08-29
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   └── 2024-09-05
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-26
│   ├── 2024-08-27
│   ├── 2024-08-29
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-06
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-26
│   ├── 2024-08-28
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
    ├── 2024-08-26
    └── 2024-08-27
28 directories
/mnt/storj/node004_2024.06/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   └── 2024-09-05
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-09-02
│   ├── 2024-09-04
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-02
│   └── 2024-09-04
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
18 directories
/mnt/storj/node005_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-02
│   ├── 2024-09-03
│   └── 2024-09-05
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-02
│   └── 2024-09-04
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
18 directories
/mnt/storj/node006_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-29
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   ├── 2024-09-05
│   └── 2024-09-06
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-29
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-02
│   └── 2024-09-04
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
21 directories
/mnt/storj/node007_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-29
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   ├── 2024-09-04
│   ├── 2024-09-05
│   └── 2024-09-06
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-29
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-06
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-04
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
22 directories
/mnt/storj/node008_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   ├── 2024-09-04
│   ├── 2024-09-05
│   └── 2024-09-06
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
19 directories
/mnt/storj/node009_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-29
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-02
│   ├── 2024-09-03
│   ├── 2024-09-04
│   ├── 2024-09-05
│   └── 2024-09-06
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-02
│   ├── 2024-09-04
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
21 directories
/mnt/storj/node010_2024.07/storage/trash/
├── pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
│   ├── 2024-08-29
│   ├── 2024-08-30
│   ├── 2024-08-31
│   ├── 2024-09-01
│   ├── 2024-09-02
│   ├── 2024-09-03
│   ├── 2024-09-04
│   └── 2024-09-06
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-29
│   ├── 2024-09-02
│   ├── 2024-09-03
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-02
│   └── 2024-09-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
21 directories
root@storj-chia01:~#
Could you please execute a similar command, and then:
grep retain /mnt/x/storagenode2/storagenode.log | grep -E "Prepar|Move"
(of course, using your own path, or docker logs)
Also:
ls -l /mnt/x/storagenode2/retain/
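If you run several nodes, a small loop over the log files saves repeating the command (a sketch; the log paths are placeholders, and for docker nodes the grep input can come from docker logs <container> instead):
SAT_EU1=12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
for log in /path/to/node*/logs/node.log; do
    echo "== $log"
    grep -E "Prepar|Move" "$log" | grep "$SAT_EU1" | tail -n 2
done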
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node002_2022.04/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node001_2021.10/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node002_2022.04/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node003_2023.12/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node004_2024.06/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node005_2024.07/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node006_2024.07/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node007_2024.07/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node008_2024.07/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node009_2024.07/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node010_2024.07/logs/node.log | grep -E "Prepar|Move" | grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
cubefan@storj-chia01:~$ cat /mnt/storj/ssd/node001_2021.10/logs/node.log | grep -E "Prepar|Move"
2024-09-02T06:12:04+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 11496584, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-02T06:30:00+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 322, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 138222, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "17m55.596418522s", "Retain Status": "enabled"}
2024-09-02T15:41:08+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 11496584, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-02T15:58:40+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 198, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 129606, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "17m31.939550563s", "Retain Status": "enabled"}
2024-09-03T06:36:58+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-28T17:59:59Z", "Filter Size": 7517711, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-09-03T07:49:10+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 286933, "Failed to delete": 2, "Pieces failed to read": 0, "Pieces count": 12980142, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h12m12.00217434s", "Retain Status": "enabled"}
2024-09-03T09:01:57+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 8765593, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-03T10:37:45+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 0, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 116304, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h35m47.8676035s", "Retain Status": "enabled"}
2024-09-03T20:28:54+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 8765593, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-03T22:09:36+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 313, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 106051, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h40m42.266495421s", "Retain Status": "enabled"}
2024-09-03T23:19:16+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-30T17:59:59Z", "Filter Size": 197291, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-03T23:21:32+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 3192, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 342548, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "2m16.125789636s", "Retain Status": "enabled"}
2024-09-04T00:54:13+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 5011391, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-04T02:29:40+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 165, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 103857, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h35m26.508260355s", "Retain Status": "enabled"}
2024-09-04T13:14:53+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 5011391, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-04T14:51:00+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 0, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 98924, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h36m6.344949391s", "Retain Status": "enabled"}
2024-09-05T00:24:37+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-30T17:59:59Z", "Filter Size": 7453231, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-09-05T00:36:35+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 127443, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 12824480, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "11m57.789394043s", "Retain Status": "enabled"}
2024-09-05T01:45:25+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 5011391, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-05T02:03:24+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 0, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 95117, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "17m58.808633116s", "Retain Status": "enabled"}
2024-09-05T11:47:18+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T03:49:29Z", "Filter Size": 3289478, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-05T12:04:55+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 315, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 91687, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "17m37.095582662s", "Retain Status": "enabled"}
2024-09-05T23:34:45+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-09-01T17:59:59Z", "Filter Size": 198738, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-05T23:34:54+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 6309, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 347191, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "8.531663208s", "Retain Status": "enabled"}
2024-09-06T04:48:51+02:00 INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-09-01T01:57:25Z", "Filter Size": 54497, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-06T05:06:55+02:00 INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 140, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 82791, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "18m4.411751676s", "Retain Status": "enabled"}
The others look similar, but there are too many lines to post them all.
Only on node 3 is there still a file pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1725415045536926000.pb
root@92e8b3fc1562:/app/config# cd retain/
root@92e8b3fc1562:/app/config/retain# ls
root@92e8b3fc1562:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node02 bash
root@f4cb9a854a59:/app# cd config/retain/
root@f4cb9a854a59:/app/config/retain# ls
root@f4cb9a854a59:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node03 bash
root@d9e99e216bec:/app# cd config/retain/
root@d9e99e216bec:/app/config/retain# ls
pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1725415045536926000.pb
root@d9e99e216bec:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node04 bash
root@ff04a1c0baa5:/app# cd config/retain/
root@ff04a1c0baa5:/app/config/retain# ls
root@ff04a1c0baa5:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node05 bash
root@68dcf7aed515:/app# cd config/retain/
root@68dcf7aed515:/app/config/retain# ls
root@68dcf7aed515:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node06 bash
root@56dee99dbfd5:/app# cd config/retain/
root@56dee99dbfd5:/app/config/retain# ls
root@56dee99dbfd5:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node07 bash
root@aff5089454b7:/app# cd config/retain/
root@aff5089454b7:/app/config/retain# ls
root@aff5089454b7:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node08 bash
root@d56b6473f535:/app# cd config/retain/
root@d56b6473f535:/app/config/retain# ls
root@d56b6473f535:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node09 bash
root@db455dd28df0:/app# cd config/retain/
root@db455dd28df0:/app/config/retain# ls
root@db455dd28df0:/app/config/retain# exit
exit
root@storj-chia01:~# docker exec -it storj-node10 bash
root@8158210504de:/app# cd config/retain/
root@8158210504de:/app/config/retain# ls
root@8158210504de:/app/config/retain# exit
exit
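Instead of entering each container interactively, the same check can be done in one pass (a sketch, assuming the container names follow the storj-nodeNN pattern shown above):
for i in $(seq -w 1 10); do
    echo "== storj-node$i"
    docker exec "storj-node$i" ls -l /app/config/retain
done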
Interesting, my biggest node is still processing them:
$ ls -l /mnt/x/storagenode2/retain/
total 6872
-rw-r--r-- 1 root root 56321 Sep 6 00:48 pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1725415045536926000.pb
-rw-r--r-- 1 root root 248794 Sep 5 23:49 qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa-1725472799875480000.pb
-rw-r--r-- 1 root root 6727079 Sep 5 12:18 ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa-1725299999997633000.pb
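The numeric suffix in these file names looks like a Unix timestamp in nanoseconds (an assumption based on the magnitude of the values); if so, it can be decoded like this:
# Assumed: the suffix is nanoseconds since the Unix epoch.
f=pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa-1725415045536926000.pb
ts=${f##*-}; ts=${ts%.pb}
date -u -d "@$((ts / 1000000000))"   # -> Wed Sep  4 01:57:25 UTC 2024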
Node 3 is my biggest node and it recently got the update, so the used-space filewalker is still running. Here are the logs from yesterday until now; a bloom filter is still being processed:
2024-09-05T23:51:56+02:00 INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "piecesCount": 77450, "Total Pieces To Trash": 271, "Trashed Pieces": 271, "Pieces Skipped": 0}
2024-09-05T23:52:29+02:00 INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-06T03:48:34+02:00 INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-06T03:48:34+02:00 INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-06T03:48:34+02:00 INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-09-06T03:48:34+02:00 INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "createdBefore": "2024-09-01T17:59:59Z", "bloomFilterSize": 211576}
2024-09-06T03:51:09+02:00 INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "piecesCount": 369002, "Total Pieces To Trash": 6003, "Trashed Pieces": 6003, "Pieces Skipped": 0}
2024-09-06T03:51:09+02:00 INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-09-06T07:34:48+02:00 INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-06T07:34:48+02:00 INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-09-06T07:34:48+02:00 INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-09-06T07:34:48+02:00 INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-09-01T01:57:25Z", "bloomFilterSize": 48364}
2024-09-06T08:02:56+02:00 INFO lazyfilewalker.used-space-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-09-06T08:02:56+02:00 INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "Total Pieces Size": 4242669556248, "Total Pieces Content Size": 4233451711000}
2024-09-06T08:02:56+02:00 INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-06T08:02:56+02:00 INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-06T08:02:56+02:00 INFO lazyfilewalker.used-space-filewalker subprocess started {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-06T08:02:56+02:00 INFO lazyfilewalker.used-space-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
The retain folder is empty.
Storage used by satellites (No Saltlake on the node):
AP1: 178GB
EU1: 1.437TB
US1: 5.335TB
The trash folder:
<Node Location>/storage/trash/
├── qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
│   ├── 2024-08-29
│   ├── 2024-09-01
│   ├── 2024-09-04
│   └── 2024-09-05
├── ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
│   ├── 2024-08-30
│   ├── 2024-09-01
│   ├── 2024-09-03
│   └── 2024-09-05
└── v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
Some stats about the trash folder, using:
du -s --si --apparent-size <Node Location>/storage/trash/<Sat folder>
find <Node Location>/storage/trash/<Sat folder>/ -type f | wc -l
Trash: AP1
12G   /qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/
Files: 43336

Trash: EU1
4,1k  /v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/
Files: 0

Trash: US1
230G  /ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/
Files: 1714086
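The same per-satellite numbers can be collected in one loop (a sketch; replace the trash path with your node's location):
TRASH=/path/to/storage/trash   # placeholder
for sat in "$TRASH"/*/; do
    echo "== $sat"
    du -s --si --apparent-size "$sat"
    echo "Files: $(find "$sat" -type f | wc -l)"
done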
The logs are rotated every 14 days, so they go back to 2024-08-24.
EU1:
node.log.11:2024-08-26T21:19:48Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-22T17:59:59Z", "Filter Size": 1927855, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
node.log.11:2024-08-26T21:28:25Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 104353, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 3395653, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "8m37.647282308s", "Retain Status": "enabled"}
node.log.12:2024-08-24T22:49:47Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-20T17:59:59Z", "Filter Size": 1925576, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
node.log.12:2024-08-24T22:58:00Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 67879, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 3403176, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "8m12.590182705s", "Retain Status": "enabled"}
The other satellites:
AP1:
node.log.1:2024-09-05T21:47:00Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-09-01T17:59:59Z", "Filter Size": 364488, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.1:2024-09-05T21:52:00Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 15571, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 644178, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "4m59.745727594s", "Retain Status": "enabled"}
node.log.10:2024-08-27T19:14:40Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-23T17:59:59Z", "Filter Size": 349403, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.10:2024-08-27T19:23:24Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 12227, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 607608, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "8m44.000575773s", "Retain Status": "enabled"}
node.log.12:2024-08-25T21:48:48Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-21T17:59:59Z", "Filter Size": 348674, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.12:2024-08-25T21:51:22Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 14868, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 614132, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "2m33.573072564s", "Retain Status": "enabled"}
node.log.14:2024-08-23T00:44:55Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-18T17:59:59Z", "Filter Size": 349116, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.14:2024-08-23T00:48:46Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 25006, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 626709, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "3m51.637681317s", "Retain Status": "enabled"}
node.log.2:2024-09-04T02:24:08Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-30T17:59:59Z", "Filter Size": 360564, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.2:2024-09-04T02:25:59Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 7061, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 638909, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "1m50.498149264s", "Retain Status": "enabled"}
node.log.4:2024-09-01T22:01:37Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-28T17:59:59Z", "Filter Size": 353233, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.4:2024-09-01T22:03:39Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 14111, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 638284, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "2m1.737823778s", "Retain Status": "enabled"}
node.log.8:2024-08-29T20:04:55Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-25T17:59:59Z", "Filter Size": 350395, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
node.log.8:2024-08-29T20:07:09Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 6593, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 606622, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "2m13.560425143s", "Retain Status": "enabled"}
US1:
node.log.1:2024-09-05T06:29:45Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-30T17:59:59Z", "Filter Size": 18146793, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.1:2024-09-05T08:37:16Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 379912, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31782952, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "2h7m30.793173042s", "Retain Status": "enabled"}
node.log.11:2024-08-26T15:11:27Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-20T17:59:59Z", "Filter Size": 17945127, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.11:2024-08-26T16:18:46Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 388200, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31643562, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h7m18.806154758s", "Retain Status": "enabled"}
node.log.13:2024-08-24T15:19:12Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-18T17:59:59Z", "Filter Size": 17877570, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.13:2024-08-24T18:42:16Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 475552, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31683826, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "3h23m4.21985355s", "Retain Status": "enabled"}
node.log.3:2024-09-03T01:22:11Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-28T17:59:59Z", "Filter Size": 18182225, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.3:2024-09-03T02:53:50Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 549540, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31793965, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h31m39.00878272s", "Retain Status": "enabled"}
node.log.5:2024-09-01T13:43:51Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-26T17:59:59Z", "Filter Size": 18121249, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.5:2024-09-01T14:45:15Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 387658, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31900949, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h1m24.296874475s", "Retain Status": "enabled"}
node.log.7:2024-08-30T01:35:24Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-24T17:59:59Z", "Filter Size": 18029727, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.7:2024-08-30T02:39:09Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 396976, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31710376, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h3m45.614155439s", "Retain Status": "enabled"}
node.log.9:2024-08-28T13:47:23Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-22T17:59:59Z", "Filter Size": 17951613, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
node.log.9:2024-08-28T14:53:16Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 425246, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 31714791, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h5m52.203743891s", "Retain Status": "enabled"}
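To pull the retain history for one satellite out of all rotated log files at once, something like this works (a sketch; the US1 satellite ID is taken from the lines above and the log path is a placeholder):
SAT_US1=12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S
grep -H retain /path/to/logs/node.log* | grep -E "Prepar|Move" | grep "$SAT_US1"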
I got the EU1 bloom filter today.
During the night all nodes received their bloom filters from EU1 and moved around 1.5TB of data into trash.
Sadly that is more than the ingress on EU1 this month so far; s**t happens.
This is interesting: my nodes are close to Europe, but they hold more than 80% for US1 and very little for Saltlake, AP1, and EU1.
Similar for me; the distribution for my nodes:
Location: Germany
Storage total: 79,412TB
AP1: 2,49%, 1,977TB
EU1: 17,97%, 14,274TB
US1: 79,54%, 63,165TB
For me it looks strange; however, I have one possible explanation. Take a look at a map of how the nodes are distributed: they are highly concentrated in central Europe, and, I guess, many customers want to comply with GDPR.
I also got a filter for all nodes during the night.
My nodes are in Germany and I also have more US data. I think this is simply because there are more US customers, and therefore more US data, than EU customers.
Also received EU1 BF for my nodes today.
                                   Deleted    Failed to        Piece
Timestamp               Node        pieces    delete/read      count      Duration   Satellite
--------------------------------------------------------------------------------------------------------------------
2024-08-22T16:23:06+02 server002 195.750 0 / 0 23.011.336 33m23.79s US1
2024-08-24T20:03:48+02 server002 582.883 0 / 0 22.972.975 24m35.72s US1
2024-08-26T07:03:30+02 server002 213.279 0 / 0 22.464.389 22m29.48s US1
2024-08-28T18:54:35+02 server002 226.308 0 / 0 22.498.922 24m5.13s US1
2024-08-30T11:11:23+02 server002 97.907 0 / 0 22.400.163 18m40.21s US1
2024-09-01T09:03:00+02 server002 212.540 0 / 0 22.458.094 23m8.52s US1
2024-09-03T09:39:31+02 server002 161.988 0 / 0 22.270.149 20m57.30s US1
2024-09-05T01:54:42+02 server002 213.078 0 / 0 22.187.598 19m13.68s US1
2024-09-07T04:52:47+02 server002 211.965 0 / 0 21.947.792 19m39.59s US1
---
2024-08-22T07:09:28+02 server002 18.077 0 / 0 2.643.830 5m35.60s EU1
2024-08-24T23:33:12+02 server002 66.472 0 / 0 2.647.534 8m14.30s EU1
2024-08-27T00:38:04+02 server002 53.229 0 / 0 2.603.227 17m30.34s EU1
--> 2024-09-08T01:45:11+02 server002 173.759 0 / 0 2.664.025 10m59.21s EU1
---
2024-08-22T21:00:09+02 server002 13.943 0 / 0 686.693 2m4.58s AP1
2024-08-26T01:22:30+02 server002 6.081 0 / 0 677.412 1m13.56s AP1
2024-08-28T01:18:15+02 server002 4.256 0 / 0 675.327 1m0.39s AP1
2024-08-30T02:24:45+02 server002 2.988 0 / 0 674.380 1m1.95s AP1
2024-09-02T02:05:35+02 server002 8.701 0 / 0 681.794 1m31.60s AP1
2024-09-04T01:26:38+02 server002 3.154 0 / 0 673.840 1m5.77s AP1
2024-09-06T00:07:29+02 server002 5.985 0 / 0 674.252 52.03s AP1
---
2024-08-22T09:23:44+02 server002 15.210 0 / 0 4.831.602 16m43.66s Saltlake
2024-08-24T10:44:27+02 server002 8.473 0 / 0 4.707.968 11m20.96s Saltlake
2024-08-25T07:14:59+02 server002 3.870.170 0 / 0 4.686.604 40m25.29s Saltlake
2024-08-25T21:42:23+02 server002 120 0 / 0 803.455 45.02s Saltlake
2024-08-26T11:38:09+02 server002 0 0 / 0 788.182 39.89s Saltlake
2024-08-27T09:13:09+02 server002 117 0 / 0 768.763 39.32s Saltlake
2024-08-28T01:05:30+02 server002 0 0 / 0 755.259 38.34s Saltlake
2024-08-28T08:36:40+02 server002 202 0 / 0 749.362 5.51s Saltlake
2024-08-28T22:03:42+02 server002 8 0 / 0 742.446 40.87s Saltlake
2024-08-29T18:41:09+02 server002 2 0 / 0 729.993 37.06s Saltlake
2024-08-30T05:25:39+02 server002 7 0 / 0 708.892 37.92s Saltlake
2024-08-30T11:43:12+02 server002 1.916 0 / 0 698.862 54.97s Saltlake
2024-08-30T19:40:01+02 server002 0 0 / 0 682.612 2.79s Saltlake
2024-08-31T12:38:22+02 server002 289 0 / 0 627.562 38.69s Saltlake
2024-08-31T16:07:00+02 server002 1 0 / 0 616.695 2.12s Saltlake
2024-09-01T00:40:29+02 server002 0 0 / 0 586.248 14.04s Saltlake
2024-09-01T11:22:25+02 server002 78 0 / 0 558.569 36.11s Saltlake
2024-09-02T02:08:56+02 server002 125 0 / 0 533.010 34.87s Saltlake
2024-09-02T05:00:47+02 server002 371 0 / 0 524.050 5.06s Saltlake
2024-09-02T21:12:58+02 server002 378 0 / 0 477.358 5.25s Saltlake
2024-09-03T03:29:42+02 server002 23 0 / 0 457.064 20.40s Saltlake
2024-09-03T10:50:27+02 server002 727 0 / 0 431.988 36.98s Saltlake
2024-09-03T14:45:19+02 server002 10 0 / 0 416.895 1.58s Saltlake
2024-09-04T00:45:50+02 server002 27 0 / 0 387.683 1.49s Saltlake
2024-09-04T12:42:33+02 server002 0 0 / 0 386.880 1.46s Saltlake
2024-09-05T00:16:50+02 server002 0 0 / 0 375.358 1.42s Saltlake
2024-09-05T14:45:04+02 server002 2.166 0 / 0 353.795 36.67s Saltlake
2024-09-06T01:17:07+02 server002 2.162 0 / 0 326.319 10.04s Saltlake
2024-09-06T11:37:24+02 server002 740 0 / 0 300.407 1.75s Saltlake
2024-09-06T20:10:04+02 server002 104 0 / 0 281.605 1.22s Saltlake
2024-09-07T02:35:33+02 server002 743 0 / 0 270.136 2.63s Saltlake
2024-09-07T13:20:59+02 server002 46 0 / 0 249.112 27.76s Saltlake
2024-09-07T23:19:30+02 server002 450 0 / 0 239.341 2.04s Saltlake
2024-09-08T08:37:44+02 server002 507 0 / 0 231.311 20.65s Saltlake
Th3Van.dk