1.109.2 crashed my node

My node runs as a Docker container on my Synology. I had no problems until two days ago. Since the update to 1.109.2 the node hangs every 60-70 minutes. After a while it seems to return to normal, until the cycle starts all over again. During the period after a restart the logs look normal; after about 60 minutes the traffic stops and I get some errors in the log. I have already removed the node and reinstalled it. The hard drive has no problems. I'm at a loss.

2024/08/03 12:31:02	stdout	2024-08-03 10:31:02,543 INFO waiting for storagenode, processes-exit-eventlistener to die
2024/08/03 12:30:59	stdout	2024-08-03 10:30:59,540 INFO waiting for storagenode, processes-exit-eventlistener to die
2024/08/03 12:30:59	stdout	2024-08-03T10:30:59Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "ZHNJYH4RQGBZMWB2M2Z77PUC7TE3FRXTMACL7MIVRNXZ35CJIEJQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:47766", "Size": 720896}
2024/08/03 12:30:59	stdout	2024-08-03T10:30:59Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "JKRUY5V64MU6HQSPA37FSVWIXKQFFM6YFXR6WRAQDEHLD6GQZYVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:46868", "Size": 4096}
2024/08/03 12:30:59	stdout	2024-08-03T10:30:59Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "WU5VO4WXIY5HTFF5TQIUQX63OLAD2R44OW5TSNA5LGD6QRC5POOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:48736", "Size": 1441792}
2024/08/03 12:30:59	stdout	2024-08-03T10:30:59Z INFO piecestore upload canceled {"Process": "storagenode", "Piece ID": "PBL5WBDH6DUAA6MFLJ6QEKDYNY2KMLTPDZW5YGKZXXOI5NOIWTKQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "172.17.0.1:47386", "Size": 1441792}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR piecestore:cache error getting current used space:  {"Process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "NOO6XOO7VV3NBU6HQ56OZEE64EQXZHSMYQEGESJNN77XHH4PRI4Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:46352", "Size": 14848}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "74KCDIKVOCGOG54VO667A3CHDJ3ED6DQOY56G4JZ56ZMBPUSDCKQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "172.17.0.1:58716", "Size": 4203520}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:715\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:715\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR retain retain pieces failed {"Process": "storagenode", "cachePath": "config/retain", "error": "retain: filewalker: context canceled", "errorVerbose": "retain: filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:181\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:572\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:379\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:265\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR filewalker failed to get progress from database {"Process": "storagenode"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z ERROR pieces lazyfilewalker failed {"Process": "storagenode", "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkSatellitePiecesToTrash:160\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:565\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:379\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:265\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:56	stdout	2024-08-03T10:30:56Z INFO lazyfilewalker.gc-filewalker subprocess exited with status {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "status": -1, "error": "signal: killed"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:715\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:724\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z ERROR pieces used-space-filewalker failed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:715\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO lazyfilewalker.used-space-filewalker subprocess exited with status {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": -1, "error": "signal: killed"}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO Got a signal from the OS: "terminated" {"Process": "storagenode"}
2024/08/03 12:30:55	stdout	2024-08-03 10:30:55,872 INFO stopped: storagenode-updater (exit status 0)
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO Got a signal from the OS: "terminated" {"Process": "storagenode-updater"}
2024/08/03 12:30:55	stdout	2024-08-03 10:30:55,855 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024/08/03 12:30:55	stdout	2024-08-03 10:30:55,818 WARN received SIGTERM indicating exit request
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "JKRUY5V64MU6HQSPA37FSVWIXKQFFM6YFXR6WRAQDEHLD6GQZYVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:46868", "Available Space": 6169998061520}
2024/08/03 12:30:55	stdout	2024-08-03T10:30:55Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "NOO6XOO7VV3NBU6HQ56OZEE64EQXZHSMYQEGESJNN77XHH4PRI4Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:46352", "Available Space": 6170002265552}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "74KCDIKVOCGOG54VO667A3CHDJ3ED6DQOY56G4JZ56ZMBPUSDCKQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT", "Remote Address": "172.17.0.1:58716", "Available Space": 6170002265552}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "LQ5PQU7XRFO4Z7K2ACLFFMDRWQ6AEODZNTFAAGP2MZYTYH3BYMBQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 430848, "Size": 9984, "Remote Address": "172.17.0.1:46700"}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "LQ5PQU7XRFO4Z7K2ACLFFMDRWQ6AEODZNTFAAGP2MZYTYH3BYMBQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "Offset": 430848, "Size": 9984, "Remote Address": "172.17.0.1:46700"}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "GRPKWO7A6SNYK7HX2SAE6Z7BSQWZGMGAAALWCTHWX43CUFVWSHZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:59020", "Size": 55808}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "GRPKWO7A6SNYK7HX2SAE6Z7BSQWZGMGAAALWCTHWX43CUFVWSHZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:59020", "Available Space": 6170002321872}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "GLC7L2P2CC3QFI5JY2GEMAH6RC4AW3C5YHAC27YXEW5YWRFP5OXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:47196", "Size": 6912}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "N2S4TH2FB3ZRB7P74WDQHM42GHUZGP2ZXJFWMKCSU25HO7IEPIBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:52994", "Size": 58368}
2024/08/03 12:30:54	stdout	2024-08-03T10:30:54Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "GLC7L2

It seems the updater terminated the node. All the errors between these two log lines are a consequence of the termination process.

Please search for FATAL and/or Unrecoverable errors in your logs.
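
A quick way to scan the whole container log for those entries, as a minimal sketch assuming the container is named storagenode (adjust the name to whatever you used on your Synology):

docker logs storagenode 2>&1 | grep -E "FATAL|Unrecoverable"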

I searched the log file created between a restart and the abrupt end and couldn't find a fatal or unrecoverable error. At the beginning I always get the message "WARN collector unable to delete piece".

This warning happens because a piece with a TTL was already moved to the trash by the garbage collector before it expired, so it is more or less normal.
However, it shouldn't crash the node. Something different is happening. The confusing thing about your logs is that they are in reverse order. For example, this is the first message before termination:

and then the last one:

So, it looks like an external request.
Is it possible that it was OOM-killed?
So far it looks like a request from the storagenode-updater, but there are no messages about that in the logs. So, actually the OS requested to stop the container.
Unless you also have messages from the storagenode-updater somewhere earlier.
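
One way to test the OOM theory, as a sketch assuming the container is named storagenode (dmesg usually needs root on DSM):

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' storagenode
sudo dmesg | grep -iE "out of memory|oom"

If the first command prints true, or dmesg shows a storagenode process being killed, the kernel ran out of memory and terminated it. Note that both reflect only the most recent exit of the container.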

This time I stopped the node because the hourly warning messages from my Synology were starting to annoy me.
At that point, the node was no longer responsive and no more entries were written to the log file.

Regarding the storagenode-updater, I searched the log:
2024-08-03 10:30:55,872 INFO stopped: storagenode-updater (exit status 0)
2024-08-03T10:30:55Z INFO Got a signal from the OS: "terminated" {"Process": "storagenode-updater"}
2024-08-03 10:30:55,855 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-08-03T03:33:35Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode-updater"}
2024/08/03 05:33:35 stdout 2024-08-03T03:33:35Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.109.2"}
2024/08/03 05:33:35 stdout 2024-08-03T03:33:35Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode"}
2024/08/03 05:33:35 stdout 2024-08-03T03:33:35Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.109.2"}
2024/08/03 05:33:34 stdout 2024-08-03T03:33:34Z INFO Downloading versions. {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}

What warnings? About the node going offline?

This is usually related to hardware issues. I would suggest checking the disk for errors and fixing them.
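
For example, you could run an extended S.M.A.R.T. test from DSM's Storage Manager, or, if you have SSH access and smartctl is available, something like this (a sketch; the device name here is an assumption and differs between models and DSM versions):

sudo smartctl -a /dev/sata1        # overall health, reallocated/pending sector counts
sudo smartctl -t long /dev/sata1   # start an extended self-test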

The messages I receive from Synology only say that the Docker application has stopped.

I have now started the node again and at the same time checked the status of the hard drive again.
I had started a hard drive defragmentation 3 days ago… Could that cause a problem? Actually, I can't imagine that it would.

It shouldn't. However, from the available logs it looks like the OS stopped the container for whatever reason. If you run the container with --restart unless-stopped, you can change it to --restart always. Still, it's better to figure out why Synology decided to stop the container.
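
To check and change the restart policy without recreating the container, a minimal sketch assuming it is named storagenode:

docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' storagenode   # show the current policy
docker update --restart always storagenode                           # switch to always

docker update works on a running container, so the node does not have to be stopped or recreated for this.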
