Not sure if this is related to anything in particular or just bad luck, but I had one node that I was migrating to ext4 the other day. The migration took a few days, with the storagenode process shut down the whole time.
Once it was done, I rebooted the VM to remove the old LV, updated the amd64 binary to 1.95.1, and started the node; shortly afterwards it apparently received a bloom filter.
It ran fine for a while, moving significant amounts of data to trash, but then the lazy storagenode process was killed by the OOM killer at roughly 0.73 TB of trash.
The other storagenode process, the one handling uploads and downloads, is still running fine under a different PID, now with over 23 hours of uptime.
The VM is configured with 4 GB of RAM, so I'm just checking whether someone could take a look at the recent changes to see if there is a memory leak somewhere, or whether this was just a coincidence.
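In case it helps while someone looks at this: storagenode is a Go binary, and Go 1.19+ supports a soft memory limit (the GOMEMLIMIT environment variable, or debug.SetMemoryLimit) that makes the GC collect more aggressively as the heap approaches a cap, instead of letting RSS grow until the kernel OOM killer fires. Whether that would have saved the filewalker here I can't say, but here is a minimal sketch of the mechanism, my own illustration and not storagenode code:

```go
// My own illustration of Go's soft memory limit (Go 1.19+), the same
// knob as the GOMEMLIMIT environment variable -- not storagenode code.
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

var (
	live [][]byte // simulated long-lived state
	sink []byte   // keeps transient allocations from being optimized away
)

func main() {
	// Equivalent to starting the process with GOMEMLIMIT=3GiB.
	debug.SetMemoryLimit(3 << 30)

	// Hold ~2.5 GiB live. With only the default pacer (GOGC=100), the
	// next collection could be deferred until the heap roughly doubles;
	// the soft limit forces earlier, more aggressive collections.
	for i := 0; i < 40; i++ {
		live = append(live, make([]byte, 64<<20))
	}

	// Churn transient garbage; total heap should stay near the cap
	// instead of growing until the kernel's OOM killer steps in.
	for i := 0; i < 64; i++ {
		sink = make([]byte, 64<<20)
	}

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("heap in use: %d MiB\n", ms.HeapInuse>>20)
}
```

If the filewalker's growth is transient garbage, something like GOMEMLIMIT=3GiB in the node's environment might keep it under the VM's limit; if it's a genuine leak (live references), the cap can't help and the process would just thrash before dying anyway.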
Thank you.
Edit: It even looks like the GC completed and only then was the subprocess killed:
2024-01-18T02:52:28Z INFO lazyfilewalker.gc-filewalker starting subprocess {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-01-18T02:52:28Z INFO lazyfilewalker.gc-filewalker subprocess started {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-01-18T02:52:28Z INFO lazyfilewalker.gc-filewalker.subprocess Database started {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode"}
2024-01-18T02:52:28Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker started {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "createdBefore": "2024-01-10T17:59:59Z", "bloomFilterSize": 2097155, "process": "storagenode"}
2024-01-18T07:16:09Z INFO lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "process": "storagenode", "piecesCount": 30105220, "piecesSkippedCount": 0}
2024-01-18T07:16:23Z INFO lazyfilewalker.gc-filewalker subprocess exited with status {"process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "status": -1, "error": "signal: killed"}
2024-01-18T07:16:23Z ERROR pieces lazyfilewalker failed {"process": "storagenode", "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:83\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkSatellitePiecesToTrash:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SatellitePiecesToTrash:555\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:325\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
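For context on what that subprocess does: the satellite's bloom filter encodes the set of pieces to keep, and the gc-filewalker walks every stored piece (30,105,220 of them here) and trashes whatever the filter doesn't claim. Conceptually something like this, as my own Go sketch rather than the actual storj.io/storj code:

```go
// Conceptual sketch of bloom-filter garbage collection -- my own
// illustration, not the storj.io/storj implementation.
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a fixed-size Bloom filter; pieces whose IDs are NOT in the
// filter are trash candidates (false positives mean pieces are kept
// unnecessarily, but nothing live is ever deleted).
type bloom struct {
	bits []byte
	k    int // number of hash probes per ID
}

// hashes derives two hash values from one FNV-1a pass for double hashing.
func (b *bloom) hashes(id []byte) (uint64, uint64) {
	h := fnv.New64a()
	h.Write(id)
	h1 := h.Sum64()
	h.Write(id) // feed the ID again for a second, different value
	return h1, h.Sum64()
}

func (b *bloom) Add(id []byte) {
	h1, h2 := b.hashes(id)
	for i := 0; i < b.k; i++ {
		pos := (h1 + uint64(i)*h2) % uint64(len(b.bits)*8)
		b.bits[pos/8] |= 1 << (pos % 8)
	}
}

func (b *bloom) MayContain(id []byte) bool {
	h1, h2 := b.hashes(id)
	for i := 0; i < b.k; i++ {
		pos := (h1 + uint64(i)*h2) % uint64(len(b.bits)*8)
		if b.bits[pos/8]&(1<<(pos%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	// ~2 MiB of bits, in the same ballpark as the log's bloomFilterSize.
	f := &bloom{bits: make([]byte, 2<<20), k: 7}
	f.Add([]byte("piece-live")) // the satellite adds every piece to keep

	// The gc-filewalker idea: walk all pieces created before the cutoff;
	// anything the filter doesn't claim goes to trash. Only the filter
	// itself needs to stay resident during the walk.
	for _, id := range []string{"piece-live", "piece-deleted"} {
		if !f.MayContain([]byte(id)) {
			fmt.Println("move to trash:", id)
		}
	}
}
```

Note that the filter itself is tiny ("bloomFilterSize": 2097155, about 2 MB), so whatever grew to ~3 GB during the walk presumably wasn't the filter. The kernel's OOM report from the same moment: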
[Thu Jan 18 07:16:22 2024] Tasks state (memory values in pages):
[Thu Jan 18 07:16:22 2024] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[Thu Jan 18 07:16:22 2024] [ 1013] 0 1013 4551 646 49152 0 0 systemd-udevd
[Thu Jan 18 07:16:22 2024] [ 1372] 0 1372 617 25 40960 0 0 acpid
[Thu Jan 18 07:16:22 2024] [ 1977] 0 1977 721 483 49152 0 0 dhcpcd
[Thu Jan 18 07:16:22 2024] [ 2079] 104 2079 458610 2399 212992 0 0 grok_exporter
[Thu Jan 18 07:16:22 2024] [ 2115] 315 2115 837 483 45056 0 0 lldpd
[Thu Jan 18 07:16:22 2024] [ 2117] 315 2117 811 363 45056 0 0 lldpd
[Thu Jan 18 07:16:22 2024] [ 2151] 0 2151 2560 129 57344 0 0 syslog-ng
[Thu Jan 18 07:16:22 2024] [ 2152] 0 2152 76675 762 94208 0 0 syslog-ng
[Thu Jan 18 07:16:22 2024] [ 2217] 0 2217 1876 1844 53248 0 0 ntpd
[Thu Jan 18 07:16:22 2024] [ 2247] 0 2247 3514 1146 61440 0 0 snmpd
[Thu Jan 18 07:16:22 2024] [ 2279] 0 2279 1733 560 53248 0 -1000 sshd
[Thu Jan 18 07:16:22 2024] [ 2307] 1001 2307 44058 4770 110592 0 0 python3
[Thu Jan 18 07:16:22 2024] [ 2343] 0 2343 733 151 45056 0 0 agetty
[Thu Jan 18 07:16:22 2024] [ 2344] 0 2344 733 152 49152 0 0 agetty
[Thu Jan 18 07:16:22 2024] [ 2345] 0 2345 733 141 49152 0 0 agetty
[Thu Jan 18 07:16:22 2024] [ 2346] 0 2346 733 151 40960 0 0 agetty
[Thu Jan 18 07:16:22 2024] [ 2347] 0 2347 733 139 45056 0 0 agetty
[Thu Jan 18 07:16:22 2024] [ 2348] 0 2348 733 152 49152 0 0 agetty
[Thu Jan 18 07:16:22 2024] [ 2412] 1001 2412 434705 78558 950272 0 0 storagenode
[Thu Jan 18 07:16:22 2024] [ 2455] 0 2455 973 470 45056 0 0 crond
[Thu Jan 18 07:16:22 2024] [ 14394] 1001 14394 1137798 823288 6754304 0 0 storagenode
[Thu Jan 18 07:16:22 2024] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=storagenode,pid=14394,uid=1001
[Thu Jan 18 07:16:22 2024] Out of memory: Killed process 14394 (storagenode) total-vm:4551192kB, anon-rss:3293152kB, file-rss:0kB, shmem-rss:0kB, UID:1001 pgtables:6596kB oom_score_adj:0
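For anyone parsing the kernel table: the header says memory values are in pages, and pages are 4 KiB on amd64, so for the killed subprocess:

823288 pages × 4 KiB/page = 3293152 kB ≈ 3.1 GiB

which matches the anon-rss:3293152kB in the final line. The long-running storagenode (pid 2412) was only at 78558 × 4 = 314232 kB ≈ 307 MiB, so it really was the lazy gc-filewalker subprocess that grew until the 4 GB VM ran out.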