Disk usage discrepancy?

Especially if the filter arrives more often, shouldn't processing be faster, because fewer files fall into the trash each time?

Moving files to trash is not a big deal; walking over all the files is the problematic part.

I think it's a mixture of both; reading by itself is faster than writing. Perhaps a database would also make sense, one that stores the exact metadata and locations of the files. The bloom filter could then be checked against that database, which ideally lives on an SSD, to identify the files that need to be deleted, and they could be deleted or moved in a targeted manner. That would leave only the "move" load on the HDD, while all the reading is limited to the database holding the metadata.
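A minimal sketch of that idea in Python, assuming a hypothetical in-memory metadata store (piece id → path on disk). This is not how the storagenode actually implements retain, just an illustration of checking a bloom filter against metadata instead of walking the HDD:

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: no false negatives, small false-positive rate."""
    def __init__(self, size_bits: int = 1 << 16, hashes: int = 4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: str):
        # Derive several bit positions per key from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# Hypothetical metadata store on the SSD: piece id -> location on the HDD.
metadata = {f"piece-{i}": f"/mnt/hdd/blobs/{i}" for i in range(1000)}

# The satellite's filter covers the pieces that should be kept (here: even ids).
bf = BloomFilter()
for i in range(0, 1000, 2):
    bf.add(f"piece-{i}")

# Walk only the metadata, not the HDD; anything the filter definitely does not
# contain is a trash candidate. Only the moves themselves would touch the HDD.
trash = [path for piece, path in metadata.items() if not bf.might_contain(piece)]
```

Because a bloom filter has no false negatives, a piece the satellite put into the filter can never end up in `trash`; the price is a small false-positive rate, i.e. some garbage pieces survive until a later filter catches them.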

a) From what I tangentially understand from some of the Storjlings' talks, the creation of bloom filters by the satellites is quite a beefy, long-running task and is prone to failure.

b) Retain / garbage collection filters can take a long time on big nodes. I had a fragmented 7 TB node that was already running a used-space filewalker, and the GC filter took, I think, two days.


Maybe some of that work can be shifted to Valdi in the future?
That might be a chance to get this job done cheaper and faster.


My nodes received BFs from the other satellites a few days ago; however, one of the nodes is still processing two previous ones (they were sent three days apart).

2024-07-06T03:15:09Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 6437842, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-06T07:22:00Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 884932, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-07T01:19:56Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 565382, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2107612, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "17h57m55.4957687s", "Retain Status": "enabled"}
2024-07-07T02:44:16Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 798418, "Failed to delete": 1, "Pieces failed to read": 0, "Pieces count": 12015492, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "23h29m6.3105639s", "Retain Status": "enabled"}
2024-07-09T19:15:29Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 6496965, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T04:22:55Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 6491450, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T19:51:58Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 1109366, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2527165, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "15h27m37.8868166s", "Retain Status": "enabled"}

One satellite is still missing, though.

The other node seems to have received 4 BFs from SLC:

2024-07-05T20:51:20Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 950195, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-05T22:33:44Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 150074, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1805996, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "1h42m23.8533004s", "Retain Status": "enabled"}
2024-07-06T07:54:28Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 95294, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T08:29:26Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 56186, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 222763, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "34m58.3329707s", "Retain Status": "enabled"}
2024-07-09T10:06:19Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 766855, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-09T13:40:35Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 439922, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1786751, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "3h34m16.3620676s", "Retain Status": "enabled"}
2024-07-11T11:00:22Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 765061, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-11T11:18:24Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 48390, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1344260, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "18m1.2935714s", "Retain Status": "enabled"}
2024-07-13T16:05:09Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-07T17:59:59Z", "Filter Size": 763090, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T16:50:21Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 57986, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1414433, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "45m11.2315435s", "Retain Status": "enabled"}
2024-07-15T04:54:44Z    INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-09T17:59:59Z", "Filter Size": 758805, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-15T05:55:56Z    INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 10720, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1553849, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h1m11.6883003s", "Retain Status": "enabled"}

This is what the filters for my four largest nodes look like. Saltlake has sent a filter every 2 days for one node; for the others, it seems to have been about 3 weeks since the last one arrived, which is too few filters.

node01
2024-06-25T09:03:23+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-19T17:59:59Z", "Filter Size": 5474380, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-26T01:57:01+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 3482732, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 13750356, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "16h53m37.915131279s", "Retain Status": "enabled"}
2024-06-29T04:30:57+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 249086, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-29T04:40:24+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 33309, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 456551, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "9m26.870990731s", "Retain Status": "enabled"}
2024-06-29T16:04:55+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 1016054, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-29T16:32:01+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 86337, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 1849952, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "27m6.194710911s", "Retain Status": "enabled"}
2024-07-06T07:07:03+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 6050262, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-06T09:21:55+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 963858, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T10:13:44+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 335742, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2018421, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "51m49.17870507s", "Retain Status": "enabled"}
2024-07-06T12:43:45+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 1502707, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 12908980, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "5h36m41.966570016s", "Retain Status": "enabled"}
2024-07-09T17:55:15+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 17000003, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-12T14:52:17+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 9370936, "Failed to delete": 20, "Pieces failed to read": 0, "Pieces count": 50027176, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "68h57m2.508142128s", "Retain Status": "enabled"}

node02
2024-06-25T19:03:47+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-21T17:59:59Z", "Filter Size": 1396553, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-25T19:38:32+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 56427, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2512568, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "34m45.005258274s", "Retain Status": "enabled"}
2024-06-29T06:39:31+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 311072, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-29T07:05:07+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 41791, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 571590, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "25m35.981090948s", "Retain Status": "enabled"}
2024-06-29T16:40:57+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 1487911, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-29T18:26:05+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 111038, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2713176, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "1h45m7.100155299s", "Retain Status": "enabled"}
2024-07-06T01:49:51+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 10934356, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-06T11:35:48+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 1441594, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T13:56:44+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 508086, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 3059243, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "2h20m56.053047967s", "Retain Status": "enabled"}
2024-07-06T17:44:33+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 3544428, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 24299318, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "15h54m41.932209889s", "Retain Status": "enabled"}
2024-07-09T17:42:01+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 17000003, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T09:22:06+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 7999992, "Failed to delete": 28, "Pieces failed to read": 0, "Pieces count": 51966606, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "87h40m4.982930533s", "Retain Status": "enabled"}
2024-07-15T08:00:00+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-09T17:59:59Z", "Filter Size": 17000003, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}

node03
2024-06-25T12:49:55+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-19T17:59:59Z", "Filter Size": 10068078, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-06-25T19:00:31+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-21T17:59:59Z", "Filter Size": 1185115, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-25T20:30:06+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 71369, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2144699, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "1h29m34.781844171s", "Retain Status": "enabled"}
2024-06-26T15:13:09+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 1847980, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 20447643, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "26h23m14.158630118s", "Retain Status": "enabled"}
2024-06-29T04:15:04+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 278525, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-06-29T04:37:19+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 48171, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 523707, "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Duration": "22m14.63316619s", "Retain Status": "enabled"}
2024-06-29T09:03:40+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-25T17:59:59Z", "Filter Size": 1222065, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-06-29T10:41:32+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 97894, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2194642, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "1h37m51.570757077s", "Retain Status": "enabled"}
2024-07-06T00:08:09+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 10362675, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-06T06:55:44+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 1270338, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T08:15:58+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 177032, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2351634, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "1h20m14.658222737s", "Retain Status": "enabled"}
2024-07-06T13:59:28+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 2329577, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 21030935, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "13h51m19.08486052s", "Retain Status": "enabled"}
2024-07-09T20:22:15+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 17000003, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-12T06:49:13+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 7244248, "Failed to delete": 8, "Pieces failed to read": 0, "Pieces count": 68638376, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "58h26m58.173031139s", "Retain Status": "enabled"}

node04
2024-07-05T23:23:41+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-29T17:59:59Z", "Filter Size": 73454, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-05T23:26:38+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 0, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 750209, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "2m57.276738154s", "Retain Status": "enabled"}
2024-07-06T10:59:52+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-02T17:59:59Z", "Filter Size": 37995, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-06T11:00:25+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 4423, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 96296, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "33.834430285s", "Retain Status": "enabled"}
2024-07-09T16:58:22+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 3428175, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-09T17:23:04+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 12790, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 6617005, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "24m42.691743968s", "Retain Status": "enabled"}
2024-07-11T11:37:00+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-06-30T14:28:54Z", "Filter Size": 3428375, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-11T12:02:18+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 1292, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 8134952, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "25m17.404483818s", "Retain Status": "enabled"}
2024-07-13T13:09:01+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-07T17:59:59Z", "Filter Size": 3713869, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T14:39:47+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 549425, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 10159372, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "1h30m46.466894466s", "Retain Status": "enabled"}
2024-07-15T18:17:27+02:00       INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-07-09T17:59:59Z", "Filter Size": 4801421, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-15T18:56:13+02:00       INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 87753, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 10529282, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "38m46.160484464s", "Retain Status": "enabled"}

I believe it also depends on this parameter:

      --retain.concurrency int                                   how many concurrent retain requests can be processed at the same time. (default 5)

So by default it could process 5 in parallel. And likely some of your nodes have it set to 1, I guess.
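As an illustration of what that setting does, here is a minimal Python sketch of a bounded worker pool; the satellite names and the timing are made up, and the real storagenode is written in Go, so this only models the concurrency limit, not the actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor
import time

RETAIN_CONCURRENCY = 5  # mirrors the --retain.concurrency setting

def run_retain(satellite_id: str) -> str:
    # Stand-in for walking the pieces of one satellite and moving garbage to trash.
    time.sleep(0.01)
    return satellite_id

queue = ["sat-A", "sat-B", "sat-C", "sat-D", "sat-E", "sat-F"]

# With max_workers=5, up to five retain requests run at once; with 1, they
# queue up and run strictly one after another.
with ThreadPoolExecutor(max_workers=RETAIN_CONCURRENCY) as pool:
    done = list(pool.map(run_retain, queue))
```

With `RETAIN_CONCURRENCY = 1` the same code degenerates to sequential processing, which is why a backlog of filters builds up when retains take days each.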

If this is the case, then the default has been changed, because I have never adjusted this value. I'll have a look in the config when I get a chance, thanks.


You may also just check the default values by executing this command:

docker exec -it storagenode ./storagenode setup --help | grep retain

or in Windows PowerShell:

& "$env:ProgramFiles\Storj\Storage Node\storagenode.exe" setup --help | sls retain

In v1.108 the default was changed to 1.


Then, while the previous retain is not finished, the next one will not be started.
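If sequential processing turns out to be too slow for a particular setup, the option shown in the `--help` output above can presumably be set explicitly in `config.yaml` (it appears commented out there by default; a restart is required). A sketch:

```yaml
# config.yaml: allow up to 5 retain requests to run in parallel again
retain.concurrency: 5
```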

Hey bud :( I'm still having this issue.

I tried the commands below. After a few days it crashed, so I restarted the storagenode a few days back. It has cleared the trash, but the dashboard is still showing wrong stats. Can you advise anything else, perhaps a command to just run the filewalker so it can actually recheck?


cat "f:\storagenode*.log" -Wait | sls "used-space-filewalker" | sls "started|finished"

garbage collector filewalker (gc-filewalker): collects the garbage (deleted pieces)

cat "f:\storagenode*.log" -Wait | sls "gc-filewalker" | sls "started|finished"

cat "f:\storagenode*.log" -Wait | sls "retain"

cat "f:\storagenode*.log" -Wait | sls "collector"

cat "f:\storagenode*.log" -Wait | sls "piece:trash"

root@storj-chia01:~# docker exec -it ccd04fb461e3 ./storagenode setup --help | grep retain
      --retain.cache-path string                                 path to the cache directory for retain requests. (default "/root/.local/share/storj/storagenode/retain")
      --retain.concurrency int                                   how many concurrent retain requests can be processed at the same time. (default 5)
      --retain.max-time-skew duration                            allows for small differences in the satellite and storagenode clocks (default 72h0m0s)
      --retain.status storj.Status                               allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug) (default enabled)
      --storage2.retain-time-buffer duration                     allows for small differences in the satellite and storagenode clocks (default 48h0m0s)

And here is the excerpt from the config; all options are commented out, so the defaults should apply. In docker the parameter is not changed either.

root@storj-chia01:~# cat /mnt/storj/node001_2021.10/config.yaml | grep retain
# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5
# retain.max-time-skew: 72h0m0s
# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled
# storage2.retain-time-buffer: 48h0m0s

root@storj-chia01:~# cat /mnt/storj/node002_2022.04/config.yaml | grep retain
# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5
# retain.max-time-skew: 72h0m0s
# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled
# storage2.retain-time-buffer: 48h0m0s

root@storj-chia01:~# cat /mnt/storj/node003_2023.12/config.yaml | grep retain
# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5
# retain.max-time-skew: 72h0m0s
# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled
# storage2.retain-time-buffer: 48h0m0s

root@storj-chia01:~# cat /mnt/storj/node004_2024.06/config.yaml | grep retain
# path to the cache directory for retain requests.
# retain.cache-path: config/retain
# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5
# retain.max-time-skew: 72h0m0s
# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled
# storage2.retain-time-buffer: 48h0m0s

root@storj-chia01:~# cat /mnt/storj/node005_2024.07/config.yaml | grep retain
# path to the cache directory for retain requests.
# retain.cache-path: config/retain
# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 5
# retain.max-time-skew: 72h0m0s
# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled
# storage2.retain-time-buffer: 48h0m0s

@Alexey I want to move my databases to NVMe. I only found how to do this on docker, but I'm running Windows. Maybe you've seen a forum post about how to do it?
I could just copy the content from that disk to the NVMe and change the database directory, but which files do I copy and which do I not?


Lol. I just discovered that one of my nodes is still processing a bloom filter from June 30th:

ls /trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/2024-06-30/v5 | wc -l
33837

Please try to remove -Wait from the cat command to see all errors, and add | select -last 10 at the end of each command.
You need a successfully finished used-space-filewalker for each satellite.
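For a quick summary instead of tailing the log, the JSON payload at the end of each retain line (in the format shown in the excerpts above) can be parsed directly. A Python sketch, with two sample lines abbreviated from the excerpts:

```python
import json

# Two "retain finished" lines, abbreviated from the log excerpts above.
lines = [
    '2024-07-07T01:19:56Z\tINFO\tretain\tMoved pieces to trash during retain\t'
    '{"Deleted pieces": 565382, "Pieces count": 2107612, '
    '"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", '
    '"Duration": "17h57m55.4957687s"}',
    '2024-07-07T02:44:16Z\tINFO\tretain\tMoved pieces to trash during retain\t'
    '{"Deleted pieces": 798418, "Pieces count": 12015492, '
    '"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", '
    '"Duration": "23h29m6.3105639s"}',
]

def parse_retain(line: str) -> dict:
    """Extract the structured JSON payload at the end of a retain log line."""
    return json.loads(line[line.index("{"):])

for rec in map(parse_retain, lines):
    sat = rec["Satellite ID"][:12]
    pct = 100 * rec["Deleted pieces"] / rec["Pieces count"]
    print(f"{sat}: moved {rec['Deleted pieces']} of {rec['Pieces count']} pieces "
          f"({pct:.1f}%) in {rec['Duration']}")
```

This only assumes the payload is the JSON object at the end of the line, which matches the excerpts posted above; if the log format differs on your version, adjust accordingly.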