Hey, I want to enable badger but can't find pieces.file-stat-cache: in my yaml file. Do I just add it?
Yes, and don't forget
pieces.enable-lazy-filewalker: false
and restart the node.
You also need either to restart it twice (after the used-space-filewalker finishes its scan for all trusted satellites), or let the other filewalkers fill the cache. After that, almost every filewalker runs several times faster.
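Putting both answers together, the relevant config.yaml entries would look like this (a sketch; the `badger` value and the key names are what the v1.109+ storagenode exposes, but double-check them against your node version's sample config):

```yaml
# Use the badger-backed file stat cache (available from storagenode v1.109).
pieces.file-stat-cache: badger
# Run the filewalkers in-process so they can populate and use the cache.
pieces.enable-lazy-filewalker: false
```

Then restart the node for the settings to take effect.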
How does one get this data?
I have one node on NVMe; it took only 2-3 minutes for 1.5 TB of data.
Check the logs for retain.
Here is a comparison of the same thing, but using ZFS with metadata on SSD:
STORJ12
2024-08-12T06:39:24Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T06:39:25Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T06:39:25Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T06:39:25Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "bloomFilterSize": 1847427, "Process": "storagenode", "createdBefore": "2024-08-06T17:59:59Z"}
2024-08-12T06:40:08Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "piecesCount": 2604091, "trashPiecesCount": 5456, "piecesTrashed": 5456, "piecesSkippedCount": 0}
2024-08-12T06:40:08Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 2.6M in 44s, trashed 5.5k
STORJ20
2024-08-12T09:38:05Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T09:38:05Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T09:38:05Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T09:38:05Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "bloomFilterSize": 1792288, "Process": "storagenode", "createdBefore": "2024-08-06T17:59:59Z"}
2024-08-12T09:38:53Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "trashPiecesCount": 13554, "piecesTrashed": 13554, "piecesSkippedCount": 0, "Process": "storagenode", "piecesCount": 2706347}
2024-08-12T09:38:53Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 2.7M in 48s, trashed 13.5k
STORJ3 # the only one still using ext4, lazy on
2024-08-11T15:52:52Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-11T15:52:52Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-11T15:52:54Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-11T15:52:54Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-04T17:59:59Z", "bloomFilterSize": 156143}
2024-08-11T16:11:58Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "piecesCount": 234338, "trashPiecesCount": 19040, "piecesTrashed": 19040, "piecesSkippedCount": 0}
2024-08-11T16:11:59Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 0.23M in 19m6s, trashing 19k
STORJ7
2024-08-12T06:29:00Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T06:29:00Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T06:29:01Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T06:29:01Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-06T17:59:59Z", "bloomFilterSize": 2048585}
2024-08-12T06:31:48Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "piecesTrashed": 5080, "piecesSkippedCount": 0, "Process": "storagenode", "piecesCount": 2296377, "trashPiecesCount": 5080}
2024-08-12T06:31:48Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 2.3M in 2m48s, trashing 5k
STORJ11
2024-08-12T02:10:21Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T02:10:21Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T02:10:21Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T02:10:21Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-06T17:59:59Z", "bloomFilterSize": 654238}
2024-08-12T02:10:27Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "piecesTrashed": 469, "piecesSkippedCount": 0, "Process": "storagenode", "piecesCount": 374739, "trashPiecesCount": 469}
2024-08-12T02:10:27Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 0.37M in 6s, trashing 469
STORJ16
2024-08-12T02:26:36Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T02:26:36Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T02:26:36Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "createdBefore": "2024-08-06T17:59:59Z", "bloomFilterSize": 251871, "Process": "storagenode"}
2024-08-12T02:26:40Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "piecesCount": 327556, "trashPiecesCount": 226, "piecesTrashed": 226, "piecesSkippedCount": 0}
2024-08-12T02:26:40Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 0.33M in 4s, trashing 226
STORJ17
2024-08-12T10:42:19Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T10:42:19Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T10:42:19Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T10:42:19Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "createdBefore": "2024-08-06T17:59:59Z", "bloomFilterSize": 240654, "Process": "storagenode"}
2024-08-12T10:42:23Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "trashPiecesCount": 364, "piecesTrashed": 364, "piecesSkippedCount": 0, "Process": "storagenode", "piecesCount": 161390}
2024-08-12T10:42:23Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 0.16M in 4s, trashing 364.
STORJ22
2024-08-12T12:06:08Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T12:06:08Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T12:06:08Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T12:06:08Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-06T17:59:59Z", "bloomFilterSize": 2114666}
2024-08-12T12:06:19Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "piecesCount": 752278, "trashPiecesCount": 1833, "piecesTrashed": 1833, "piecesSkippedCount": 0}
2024-08-12T12:06:19Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 0.75M in 11s, trashing 1.8k
STORJ9
2024-08-12T11:44:49Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T11:44:49Z INFO lazyfilewalker.gc-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-12T11:44:49Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-08-12T11:44:49Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "createdBefore": "2024-08-06T17:59:59Z", "bloomFilterSize": 1583738}
2024-08-12T11:45:16Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "piecesCount": 2189290, "trashPiecesCount": 6066, "piecesTrashed": 6066, "piecesSkippedCount": 0}
2024-08-12T11:45:16Z INFO lazyfilewalker.gc-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
# 2.2M in 27s, trashing 6k
So, yeah, badger (on XFS) is a real improvement. But it's still about a factor of 8x behind ZFS with metadata on SSD, although apparently already about 7x quicker than plain ext4 on SMR.
It's better to search for
grep retain /mnt/x/storagenode2/storagenode.log | grep -E "Prepar|Move"
that will also print the filter size and duration.
Well, it seems the badger cache has certain minimum system requirements.
Yesterday I moved my 4 nodes from my old HP Gen6 N40L MicroServer to a newer PC.
The MicroServer has a dual-core AMD Turion 1.5 GHz CPU, 6 GB RAM, and Windows 8.1; the OS and all the databases were on a SATA SSD. I had 4 nodes in this PC with 10/4/4/4 TB drives, one drive per node.
The 4 TB nodes were running OK, but the 10 TB one was always struggling with the filewalkers. I enabled the badger cache as soon as 1.109.2 was out. Unfortunately this node never finished the used-space filewalker in the past 3 weeks, even though the filestat folder holds 4.3 GB of data in 189 different files. The drive was 100% busy all the time, handling the bloom filters and trash deletion, and those completed properly. Except the filewalkers…
Yesterday I moved all 4 nodes to a newer PC (i5-8400 6-core, 16 GB RAM, Win 11, OS and databases on an NVMe SSD).
The same 10TB node, on the same HDD:
2024-08-12T18:19:38+02:00 INFO pieces used-space-filewalker started {Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE}
2024-08-12T21:10:56+02:00 INFO pieces used-space-filewalker completed {Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Lazy File Walker: false, Total Pieces Size: 3785795759616, Total Pieces Content Size: 3773586357248}
2024-08-12T21:10:56+02:00 INFO pieces used-space-filewalker started {Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6}
2024-08-12T21:12:31+02:00 INFO pieces used-space-filewalker completed {Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Lazy File Walker: false, Total Pieces Size: 85692045312, Total Pieces Content Size: 85529192448}
2024-08-12T21:12:31+02:00 INFO pieces used-space-filewalker started {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S}
2024-08-12T21:37:29+02:00 INFO pieces used-space-filewalker completed {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Lazy File Walker: false, Total Pieces Size: 2086117863972, Total Pieces Content Size: 2080435785764}
2024-08-12T21:37:29+02:00 INFO pieces used-space-filewalker started {Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs}
2024-08-13T02:30:21+02:00 INFO pieces used-space-filewalker completed {Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Lazy File Walker: false, Total Pieces Size: 627844193280, Total Pieces Content Size: 627080414720}
It likely also has a different SATA controller. I do not think that the badger cache is CPU-constrained.
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d8aab63f299d storagenode2 16.73% 554.1MiB / 24.81GiB 2.18% 168GB / 40.1GB 0B / 0B 100
3d20fef76e67 storagenode5 4.63% 132.8MiB / 24.81GiB 0.52% 35.7GB / 17.2GB 0B / 0B 60
storagenode2 is currently running the used-space-filewalker and trash cleanup while handling uploads. The CPU has 8 cores (i7, 2019). Badger cache is enabled.
Without badger, lazy off:
2024-08-11T04:17:34Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-17T20:45:39Z INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": false, "Total Pieces Size": 2582215761152, "Total Pieces Content Size": 2576394000640}
With badger cache:
2024-08-17T23:02:50Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-17T23:46:56Z INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": false, "Total Pieces Size": 2599137402368, "Total Pieces Content Size": 2593355542528}
6d 16h → 44m
This node continues to run the used-space filewalker (not all satellites completed yet), retain, and trash cleanup:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d8aab63f299d storagenode2 13.85% 1.01GiB / 24.81GiB 4.07% 17.3GB / 2.42GB 0B / 0B 75
For comparison, the other node, which finished all filewalkers and just handles uploads/downloads (no badger cache, lazy on):
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
3d20fef76e67 storagenode5 0.37% 141.5MiB / 24.81GiB 0.56% 29.4GB / 4.09GB 0B / 0B 79
In a well-cached system (ZFS with a special device, L2ARC, or lvmcache), would you recommend enabling the badger cache, or could it be counterproductive?
I have lvmcache and I'm using it.
It really speeds up the filewalkers.
Likely it would help there too. But you can compare; it's a reversible change.
Here are some results of bloom filter runs with the badger cache:
2024-08-17T13:44:19Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-11T15:49:50Z", "Filter Size": 9290165, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-08-17T14:15:40Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 10099, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 13670290, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "31m20.625080226s", "Retain Status": "enabled"}
2024-08-18T01:39:53Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-12T17:59:59Z", "Filter Size": 1027031, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-08-18T01:49:45Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 105362, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 2380838, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "9m51.685249271s", "Retain Status": "enabled"}
2024-08-18T06:35:07Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-08-13T17:59:59Z", "Filter Size": 174583, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-08-18T06:37:21Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 68974, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 396130, "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Duration": "2m13.884476534s", "Retain Status": "enabled"}
The size of the bloom filter vs. time:
Filter Size   Duration   Deleted pieces
9290165       31m20      10099
1027031       9m51       105362
174583        2m13       68974
P.S. Why is Saltlake so slow and doesn't remove much?
Edit: answering my own question… because it has 13M pieces vs. 2M and 0.4M for the other two.
Because it likely has new pieces every time, I think.
I'll post my results for retain once my node has passed all satellites in the used-space-filewalker with the badger cache.
So far only Saltlake and EU1 have passed it (the node was restarted for a new version just as it started to calculate US1).
Sorry, so if I understand correctly, badger uses more CPU and more RAM than having none?
I suspect so. I can't remember a time in many years when this node used more than 500 MB.
Badger enabled first run on AP1:
2024-08-17T05:11:01Z INFO pieces used-space-filewalker started
2024-08-17T09:59:49Z INFO pieces used-space-filewalker completed
Badger enabled second run on AP1:
2024-08-18T05:35:33Z INFO pieces used-space-filewalker started
2024-08-18T05:52:29Z INFO pieces used-space-filewalker completed
lazy off, badger off
2024-08-11T02:41:59Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 221195, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 11760224, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "25h21m17.6144463s", "Retain Status": "enabled"}
...
2024-08-14T10:47:40Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 36595, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 11410084, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "6h15m37.901733s", "Retain Status": "enabled"}
lazy off, badger enabled
2024-08-19T06:50:30Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 348587, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 11999854, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "22h3m31.7099403s", "Retain Status": "enabled"}
...
2024-08-20T08:06:07Z INFO retain Moved pieces to trash during retain {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 52261, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 11328706, "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Duration": "5h10m20.0016903s", "Retain Status": "enabled"}
Not a significant change: 6h15m37.901733s → 5h10m20.0016903s (SLC), 25h21m17.6144463s → 22h3m31.7099403s (US1).