Node starts twice

Hi,

Each time my node restarts to update its version, I see it kind of crash and then restart again. It looks like the new node is started while the old one has not been shut down completely.

...
2025-05-28T16:30:56Z	INFO	Got a signal from the OS: "interrupt"	{"Process": "storagenode"}
2025-05-28T16:30:56Z	INFO	piecestore	upload canceled	{"Process": "storagenode", "Piece ID": "VXLSELU3NLDVLWYQRR25K3NEURFJH64H5RMOMVGVXZKB7SJZLKRQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "172.17.0.1:52524", "Size": 589824}
...
some more uploads and downloads
...
2025-05-28T16:30:57Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2025-05-28T16:30:57Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2025-05-28T16:30:57Z	INFO	Operator email	{"Process": "storagenode", "Address": "EMAIL"}
2025-05-28T16:30:57Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "WALLET"}
2025-05-28T16:30:58Z	INFO	server	kernel support for tcp fast open unknown	{"Process": "storagenode"}
2025-05-28T16:30:58Z	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "open_time": "98.477798ms"}
2025-05-28T16:30:58Z	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "open_time": "45.222292ms"}
2025-05-28T16:30:58Z	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "open_time": "74.656321ms"}
2025-05-28T16:30:59Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1v1o2XhkVMq7tzMCLoRVKBHVVW8LxniHfP26Jz6mgy39QjdS41"}
2025-05-28T16:30:59Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1v1o2XhkVMq7tzMCLoRVKBHVVW8LxniHfP26Jz6mgy39QjdS41"}
2025-05-28T16:30:59Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 62}
2025-05-28T16:30:59Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2025-05-28T16:31:00Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
2025-05-28T16:31:00Z	INFO	Node 1v1o2XhkVMq7tzMCLoRVKBHVVW8LxniHfP26Jz6mgy39QjdS41 started	{"Process": "storagenode"}
2025-05-28T16:31:00Z	INFO	Public server started on [::]:28967	{"Process": "storagenode"}
2025-05-28T16:31:00Z	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2025-05-28T16:31:00Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
...
normal operation
...
2025-05-28T16:31:10Z	INFO	lazyfilewalker.used-space-filewalker	subprocess exited with status	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:774\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": false, "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	piecestore:cache	encountered error while computing space used by satellite	{"Process": "storagenode", "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "SatelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:10Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:10Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:10Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:774\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": false, "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	piecestore:cache	encountered error while computing space used by satellite	{"Process": "storagenode", "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "SatelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:10Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:10Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:10Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "context canceled"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:774\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": false, "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	piecestore:cache	encountered error while computing space used by satellite	{"Process": "storagenode", "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "SatelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:10Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:10Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:10Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:774\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": false, "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	ERROR	piecestore:cache	encountered error while computing space used by satellite	{"Process": "storagenode", "error": "filewalker: used_space_per_prefix_db: context canceled", "errorVerbose": "filewalker: used_space_per_prefix_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSpacePerPrefixDB).Get:81\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:96\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:783\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:85\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "SatelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:10Z	ERROR	piecestore:cache	error getting current used space for trash: 	{"Process": "storagenode", "error": "filestore error: failed to walk trash namespace af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e7190800000000: context canceled", "errorVerbose": "filestore error: failed to walk trash namespace af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e7190800000000: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).SpaceUsedForTrash:302\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:104\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2025-05-28T16:31:10Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "KSAVRZ43QRMMOMZR4UJ27B2VBVQHA3FCX6HRWNMNWKVVQYKRI6YQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 256, "Remote Address": "172.17.0.1:36524", "reason": "downloaded size (0 bytes) does not match received message size (256 bytes)"}
2025-05-28T16:31:43Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2025-05-28T16:31:43Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2025-05-28T16:31:43Z	INFO	Operator email	{"Process": "storagenode", "Address": "EMAIL"}
2025-05-28T16:31:43Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "WALLET"}
2025-05-28T16:31:43Z	INFO	server	kernel support for tcp fast open unknown	{"Process": "storagenode"}
2025-05-28T16:31:43Z	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "open_time": "10.511567ms"}
2025-05-28T16:31:43Z	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "open_time": "3.81846ms"}
2025-05-28T16:31:43Z	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "open_time": "3.941962ms"}
2025-05-28T16:31:43Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1v1o2XhkVMq7tzMCLoRVKBHVVW8LxniHfP26Jz6mgy39QjdS41"}
2025-05-28T16:31:43Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1v1o2XhkVMq7tzMCLoRVKBHVVW8LxniHfP26Jz6mgy39QjdS41"}
2025-05-28T16:31:43Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 62}
2025-05-28T16:31:44Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	Node 1v1o2XhkVMq7tzMCLoRVKBHVVW8LxniHfP26Jz6mgy39QjdS41 started	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	Public server started on [::]:28967	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 0}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	piecemigrate:chore	all enqueued for migration; will sleep before next pooling	{"Process": "storagenode", "active": {}, "interval": "10m0s"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "4h48m7.123959025s"}
2025-05-28T16:31:45Z	INFO	bandwidth	Persisting bandwidth usage cache to db	{"Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.used-space-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "dateBefore": "2025-05-21T16:31:45Z"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "89.017846ms"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "dateBefore": "2025-05-21T16:31:45Z", "Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "66.362092ms"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "dateBefore": "2025-05-21T16:31:45Z"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "61.791066ms"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:45Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "OJJWJOT6CQARVL23DKYKOBUU77PJFUGFZTFXMSXNLBRQEHJ5PY7A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 1536, "Remote Address": "172.17.0.1:38034", "reason": "downloaded size (0 bytes) does not match received message size (1536 bytes)"}
2025-05-28T16:31:45Z	INFO	piecestore	download canceled	{"Process": "storagenode", "Piece ID": "U5LBLYQCYMZ72ALU2YSCKECVIZ3GPMGLQJ55XHGHZZIPVQ3VYNDQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 10752, "Remote Address": "172.17.0.1:38082", "reason": "downloaded size (0 bytes) does not match received message size (10752 bytes)"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "dateBefore": "2025-05-21T16:31:45Z"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-05-28T16:31:45Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-05-28T16:31:45Z	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "59.842035ms"}
2025-05-28T16:31:45Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "3MRMNUX4REGY5PNGTVBPFDD44YUN2U5CIOF26J3FXWGGPGNNGYJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8192, "Remote Address": "172.17.0.1:38176"}
...normal operation

This has been going on for many months and doesn’t seem to do any damage. However, I would like to know what is going on and how I can avoid it.

Brgds

Hello @MarcVanWijk,
Welcome back!

Perhaps the node has been killed because of the timeout?

Hi Alexey,

I could not find any disk IO timeout. Also, this issue only occurs during a version update. To me it looks like the running node (old version) is interrupted with a signal (Got a signal from the OS: “interrupt” {“Process”: “storagenode”}), the new version spins up while the old one is still shutting down, and that results in a crash of the new node. Finally, the new node spins up again without any issue.

Environment:
System: Synology DS720+ with 18GB RAM
Disk: Seagate Exos X20 20TB dedicated to the node
Formatting: EXT4 with default parameters, on a drive low-level formatted as 4K native (4Kn)


Hi,

It just happened again. My node was running normally and we got this:

2025-06-18T13:02:29Z    INFO    Got a signal from the OS: "interrupt"   {"Process": "storagenode"}

Immediately after this, I can see that my node continued to upload and download files. This must be the node that just got the interrupt (SIGINT, I suppose):

2025-06-18T13:02:29Z    INFO    piecestore      downloaded
2025-06-18T13:02:29Z    INFO    piecestore      upload canceled
2025-06-18T13:02:29Z    INFO    piecestore      downloaded  
2025-06-18T13:02:29Z    INFO    piecestore      upload canceled
2025-06-18T13:02:29Z    INFO    piecestore      upload canceled
2025-06-18T13:02:29Z    INFO    piecestore      downloaded    
2025-06-18T13:02:29Z    INFO    piecestore      upload canceled
2025-06-18T13:02:29Z    INFO    piecestore      download canceled
2025-06-18T13:02:29Z    INFO    piecestore      downloaded      
2025-06-18T13:02:29Z    INFO    piecestore      downloaded      
2025-06-18T13:02:29Z    INFO    piecestore      downloaded      
2025-06-18T13:02:29Z    INFO    piecestore      upload canceled 

Then I see a new node starting up:

2025-06-18T13:02:31Z    INFO    Configuration loaded    {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2025-06-18T13:02:31Z    INFO    Anonymized tracing enabled      {"Process": "storagenode"}
2025-06-18T13:02:31Z    INFO    Operator email  {"Process": "storagenode", "Address": "email@domain.com"}
2025-06-18T13:02:31Z    INFO    Operator wallet {"Process": "storagenode", "Address": "mywallet"}
...

And once initialization is done (it only takes about 5 seconds), it starts working:

2025-06-18T13:02:34Z    INFO    piecestore      downloaded
2025-06-18T13:02:34Z    INFO    piecestore      downloaded
2025-06-18T13:02:34Z    INFO    piecestore      downloaded
2025-06-18T13:02:35Z    INFO    piecestore      downloaded
2025-06-18T13:02:35Z    INFO    piecestore      downloaded
2025-06-18T13:02:35Z    INFO    piecestore      downloaded
2025-06-18T13:02:35Z    INFO    piecestore      downloaded
...

Up to this point everything looks OK, but then I see this:

2025-06-18T13:02:43Z    INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode"}

And everything starts throwing errors, but none of them are FATAL. Finally, after many errors, I can see a pause of about 44 seconds and then the node starts up again:

2025-06-18T13:03:27Z    INFO    Configuration loaded    {"Process": "storagenode", "Location": "/app/config/config.yaml"}
...

And now everything works smoothly again.

Any idea what is going on?

Brgds

Did you check if it's an Out Of Memory (OOM) issue?


The suspicion is the opposite: that the node, after receiving the signal to quit, takes some time to do so, and the updater does not wait for the process to die before launching the second instance, while the first one is still running.

Does the updater wait for the process to die before relaunching?

(Ideally, the updater should not be messing with processes at all and should use the service manager to restart the storagenode, but I gave up on expecting the updater to do the right thing; I have written my own.)
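
Something like this is what I mean by waiting. This is a rough sketch only, not the actual storagenode-updater code, and the PID and paths are made up:

// Illustrative "stop, wait, then start" restart sequence; NOT the actual
// storagenode-updater code. The PID and binary path below are made up.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// waitForExit polls until the process is gone or the deadline passes.
// Signal 0 delivers nothing but reports whether the PID still exists.
func waitForExit(pid int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := syscall.Kill(pid, 0); err != nil {
			return nil // process has exited
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pid %d still running after %s", pid, timeout)
}

func main() {
	oldPID := 12345 // hypothetical PID of the running storagenode

	// Ask the old node to shut down gracefully.
	if err := syscall.Kill(oldPID, syscall.SIGTERM); err != nil {
		fmt.Fprintln(os.Stderr, "signal failed:", err)
		os.Exit(1)
	}

	// The step that appears to be missing: wait until it is really gone.
	if err := waitForExit(oldPID, 5*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Only now launch the new binary (path and flags are illustrative).
	cmd := exec.Command("/app/storagenode", "run", "--config-dir", "/app/config")
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
	fmt.Println("new storagenode started, pid", cmd.Process.Pid)
}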

No, I haven't had any of those since I upgraded from 2GB to 18GB. Also, disk utilization was below 5% at the time the restart kicked in.

That is what I suspect as well. The node caught the signal and takes a long time to shut down. Meanwhile the new instance is spun up, and the simultaneous execution causes the new instance to crash.

Is there any way to validate this? Perhaps changing the log levels?
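
One rough way I can think of to check it, just as a sketch (it assumes the node runs on Linux and that I can read /proc inside the container), would be to list the storagenode processes while the update is happening:

// Rough check: list processes whose command line contains "storagenode".
// Sketch only; assumes Linux (e.g. run it inside the node container via
// docker exec) and that /proc is readable. The name match is approximate:
// the lazy filewalker subprocesses use the same binary, so eyeball the
// printed command lines.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read /proc:", err)
		os.Exit(1)
	}
	count := 0
	for _, e := range entries {
		// Only all-digit directory names are PIDs.
		if !e.IsDir() || strings.Trim(e.Name(), "0123456789") != "" {
			continue
		}
		raw, err := os.ReadFile("/proc/" + e.Name() + "/cmdline")
		if err != nil {
			continue // process may have exited in the meantime
		}
		// cmdline fields are NUL-separated.
		args := strings.Split(string(raw), "\x00")
		if strings.Contains(args[0], "storagenode") && !strings.Contains(args[0], "updater") {
			count++
			fmt.Printf("pid %s: %s\n", e.Name(), strings.Join(args, " "))
		}
	}
	fmt.Println("matching processes:", count)
}

If two "storagenode run" command lines show up at the same moment during the update, that would confirm the overlap.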

If this turns out to be the case, is there a parameter I can tweak to make the updater wait for X seconds/minutes before starting the new instance?

I would create an issue on GitHub. Any fixed delay would be wrong – the updater should wait for the process to exit.

Looking at the source – they just send the signal:

And move on with life immediately.

It’s clearly a bug, and just happens to work on fast systems.

And the fact that they stop the process by sending it a signal, as opposed to using the service manager, is another issue. The latter approach would synchronously stop the service, eliminating the problem. I do that in my own updater.
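
To illustrate the idea (just a sketch, not that updater's code, and it assumes a hypothetical systemd unit named storagenode.service), a restart that goes through the service manager blocks until the old process has actually stopped before the new one starts:

// Illustration only: delegate the restart to the service manager.
// "systemctl restart" is synchronous by default: it returns only after
// the unit has been stopped and started again, so the old and new
// instances never overlap. The unit name is hypothetical.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "restart", "storagenode.service")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil { // Run blocks until systemctl finishes
		fmt.Fprintln(os.Stderr, "restart failed:", err)
		os.Exit(1)
	}
	fmt.Println("storagenode restarted; the old instance was fully stopped first")
}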
