RPi went down - nodes won't come back

Hey - I've got 6 nodes on a Pi 5 that I can't get back to life.

First node's logs:

2024-07-13T11:09:37Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z	ERROR	pieces	failed to lazywalk space used by satellite	{"Process": "storagenode", "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-13T11:09:37Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:187\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:422\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-07-13T11:09:37Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "context canceled"}
2024-07-13T11:09:37Z	ERROR	pieces	failed to lazywalk space used by satellite	{"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T11:09:37Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2024-07-13T11:09:37Z	ERROR	pieces	failed to lazywalk space used by satellite	{"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-13T11:09:37Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024-07-13T11:09:37Z	ERROR	pieces	failed to lazywalk space used by satellite	{"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-13T11:09:37Z	ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T11:09:38Z	ERROR	failure during run	{"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed) does not match running node's ID (126gxM4eda1jjtL39bBDTdQrbWw76d5rowEFyk2np6DgpbcQjS8)", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed) does not match running node's ID (126gxM4eda1jjtL39bBDTdQrbWw76d5rowEFyk2np6DgpbcQjS8)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:157\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed) does not match running node's ID (126gxM4eda1jjtL39bBDTdQrbWw76d5rowEFyk2np6DgpbcQjS8)
2024-07-13 11:09:38,452 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-07-13 11:09:38,455 INFO stopped: storagenode (exit status 1)
2024-07-13 11:09:38,457 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

Second node:

2024-07-05T21:25:59Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "TT6X74Y5RHKVQFQCUDPPP5PIAPZ3I3N22ZV7U6VWJYHE5EUWUWBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 50944, "Remote Address": "82.165.221.198:51930"}
2024-07-05T21:25:59Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "2KE4OTF4FRBWJRVHKGWG6ATP32SDZWNQZ4PWGPVJUSDEIGUOVMNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:46262", "Available Space": 10477589737890}
2024-07-05T21:25:59Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "2KE4OTF4FRBWJRVHKGWG6ATP32SDZWNQZ4PWGPVJUSDEIGUOVMNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:46262", "Size": 1792}
2024-07-05T21:26:00Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "ALC3EU752ATW2SA6VQO7PSWR7AQYGNODJFJFDWF32P2U4F4MLHJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:58134", "Available Space": 10477589735586}
2024-07-05T21:26:00Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "ALC3EU752ATW2SA6VQO7PSWR7AQYGNODJFJFDWF32P2U4F4MLHJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:58134", "Size": 14592}
2024-07-05T21:26:00Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "ZLQG6PL4JJKS7ZRJAKBA5HD2TUSEB24ONL5FBDFT2TSKRWHJOPFQ", "Satellite ID": "12EayRS2V1kEsWESU9Q^C
2024-07-05T21:26:05Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "UYEZQKFDXTI3GA5KW3DRNOSUO2VC6BX3R56FY4SR7P5EJTVA4VYA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "82.165.221.198:42726", "Size": 761856}
2024-07-05T21:26:07Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "REJU5QSIBRKV3DN4P6353MSJMI5NQDNZJ6BBT4TJJR3OYMFFWBDA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:54154", "Available Space": 10477588179618}
2024-07-05T21:26:07Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "REJU5QSIBRKV3DN4P6353MSJMI5NQDNZJ6BBT4TJJR3OYMFFWBDA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:54154", "Size": 2048}
2024-07-05T21:26:08Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "QGPAAOZUKDBW3DITQJB44P33HZVOS2ZW7NF7PCYKTPATYUCVL2NQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:48518", "Available Space": 10477588177058}
2024-07-05T21:26:08Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "QGPAAOZUKDBW3DITQJB44P33HZVOS2ZW7NF7PCYKTPATYUCVL2NQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:48518", "Size": 512}
2024-07-05T21:26:09Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "OJBCT2PJLEUHODMTR7DS5AOTDXRDP6TFXZPDNEYW4WKC5YUDRIQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45836", "Available Space": 10477588176034}
2024-07-05T21:26:09Z	INFO	piecestore	upload canceled (race lost or node shutdown)	{"Process": "storagenode", "Piece ID": "OJBCT2PJLEUHODMTR7DS5AOTDXRDP6TFXZPDNEYW4WKC5YUDRIQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45836"}
2024-07-05T21:26:09Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "FSLLJPUUNQGGAH5IDJE74MRQGTCI36AW6CQE5FRUNIK7XQXF4XVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45718", "Available Space": 10477587884194}
2024-07-05T21:26:09Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "FSLLJPUUNQGGAH5IDJE74MRQGTCI36AW6CQE5FRUNIK7XQXF4XVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45718", "Size": 50176}
2024-07-05T21:26:10Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "PIZ7LUSPPS4NOZB5T6YUZYN5ZNWSBTUR3VEXHNRCCNCXREUDYE4Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "82.165.221.198:49720", "Available Space": 10477587833506}
2024-07-05T21:26:10Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "NNFAYSMTB3NFFDHFNLJTZSBE7KKCMH3NNYVJLH5SL3NZ4YJDZRFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 144896, "Size": 36352, "Remote Address": "82.165.221.198:56840"}
2024-07-05T21:26:10Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "NNFAYSMTB3NFFDHFNLJTZSBE7KKCMH3NNYVJLH5SL3NZ4YJDZRFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 144896, "Size": 36352, "Remote Address": "82.165.221.198:56840"}
2024-07-05T21:26:11Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "PIZ7LUSPPS4NOZB5T6YUZYN5ZNWSBTUR3VEXHNRCCNCXREUDYE4Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "82.165.221.198:49720", "Size": 98048}
2024-07-05T21:26:11Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "DHRLCQEFASUCVAKE6QQR3INWOKZ66WRKTIVZ4QRFBZSHOUPVELBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:35782", "Available Space": 10477587734946}
2024-07-05T21:26:12Z	INFO	piecestore	upload canceled (race lost or node shutdown)	{"Process": "storagenode", "Piece ID": "DHRLCQEFASUCVAKE6QQR3INWOKZ66WRKTIVZ4QRFBZSHOUPVELBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:35782"}
2024-07-05T21:26:12Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "R6QPAAEBL4IUGEUNXRICOKDR64ONSSQT52FVH3PN5VB2W4WIZAUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 10752, "Remote Address": "82.165.221.198:47552"}
2024-07-05T21:26:12Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "Q6PADD74HOPQVZTH4PL36HVYUSWJWJ723HQD22W7NZBSGZNMAWZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:47878", "Available Space": 10477587364770}

It's like node 1 has a different problem than the other 5 nodes.
The other 5 just keep rebooting and give logs like:

2024-07-04T20:49:26Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "HZYIPRWQIMJGGMWOSNH5MQUDBOLXLADE3WS3UMOU56D35NUUMNUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:47462", "Available Space": 10496631893394}
2024-07-04T20:49:26Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "HZYIPRWQIMJGGMWOSNH5MQUDBOLXLADE3WS3UMOU56D35NUUMNUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:47462", "Size": 1280}
2024-07-04T20:49:27Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "R5CTDZGYJNBAQF3BZ77T4XD4UDYRST4QRBZVFBLJB5GR6R6DC42Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:35394", "Available Space": 10496631891602}
2024-07-04T20:49:27Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "PJXO2WDTXFBXM6J6G2B3AFANH4S5SCHBBLEN6T2GXPO3ZRGEJGVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:53988", "Available Space": 10496631891602}

I have tried to debug it without success - can anyone help or see the issue?

Have you tried to fix :point_up: this?

No - I'm not sure what has happened though?
It's saying I'm using the wrong node's ID? I'm pretty sure I'm not though…

You substituted the wrong identity folder, so it does not recognize the related data.

It's in the same location as before - I noted down the exact run command I used when I set it up, and I have not moved the files?
I'm confused.

Setup command:

docker run --rm -e SETUP="true" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/home/andreashg/Desktop/Storj/Identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/sdf1/StorjData",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest

Run command:
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 3000:28967/tcp \
    -p 3000:28967/udp \
    -p 14002:14002 \
    -e WALLET="temp" \
    -e EMAIL="StorjGuide@lortemail.dk" \
    -e ADDRESS="rightone:3000" \
    -e STORAGE="10.5TB" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/home/andreashg/Desktop/Storj/Identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/sdf1/StorjData",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest

and real path:
/home/andreashg/Desktop/Storj/Identity/storagenode
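
(As a sanity check, it can help to confirm what the container is actually mounting and which identity files it sees. A minimal sketch, assuming the container is still named storagenode - docker inspect works even while the node process keeps crashing, docker exec only if the container stays up:)

# show the bind mounts the container was started with
docker inspect -f '{{json .Mounts}}' storagenode
# if the container is running, list the identity files it sees (identity.cert, identity.key, ca.cert, ...)
docker exec storagenode ls -l /app/identity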

photo:

Please hide your personal info like email, wallet address and DDNS. Personally, since you have 6 nodes, I would have gone with naming them from 1 through 6 :slight_smile:

email is a temp one.

words

I really cannot understand that it's the wrong identity - I'm 99% sure I'm still taking the correct folder - also, a reboot should not cause this?

storagenode2's current logs - it just restarts constantly:

{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Piece ID": "L4CCY5VNMI2UXX67DHCKGM374EFCW2PDCWH76OKECUGUH5EKBIHA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	WARN	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Piece ID": "UNRJ3MPAISWJJYB3EWPDOCZW5S3EMEVZH6UOUWYMTU3II6CS7FCQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	WARN	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Piece ID": "LIJTGP7GG5OEBFYKZ45JWCE3N6UGCGJYCGOEZ4WAE3S44CRDNUGQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "4h41m44.152827199s"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-13T12:08:17Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T12:08:17Z	ERROR	services	unexpected shutdown of a runner	{"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM) does not match running node's ID (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed)", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM) does not match running node's ID (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:157\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	INFO	bandwidth	Persisting bandwidth usage cache to db	{"Process": "storagenode"}
2024-07-13T12:08:17Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "3L3MHRCTJEAPERZPOJUHGR6WVXATSNAIG7I5D3NIAKYQ2BIUXI5A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 11264, "Remote Address": "82.165.230.26:60900"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess exited with status	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2024-07-13T12:08:17Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:187\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:423\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-07-13T12:08:17Z	WARN	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "N5OUXIEJZUNNOM7PHZRC4KEKEVQO3T42QVT4M3SFJVNFZPL4GMVA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	collector	error during collecting pieces: 	{"Process": "storagenode", "error": "pieces error: context canceled", "errorVerbose": "pieces error: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:578\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"Process": "storagenode", "error": "context canceled"}
2024-07-13T12:08:17Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "XALVAOKKEUO54V3ZS4LMZMZOPSGVE4XLGXTIKGTAOYSTY6EET76Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8448, "Remote Address": "82.165.230.26:60898"}
2024-07-13T12:08:17Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "XALVAOKKEUO54V3ZS4LMZMZOPSGVE4XLGXTIKGTAOYSTY6EET76Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 8448, "Remote Address": "82.165.230.26:60898", "error": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:140\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:641\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-13T12:08:17Z	ERROR	version	failed to get process version info	{"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tstorj.io/storj/storagenode/version.(*Chore).checkVersion:115\n\tstorj.io/storj/storagenode/version.(*Chore).RunOnce:71\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T12:08:17Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T12:08:17Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T12:08:17Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-13T12:08:17Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "3L3MHRCTJEAPERZPOJUHGR6WVXATSNAIG7I5D3NIAKYQ2BIUXI5A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 11264, "Remote Address": "82.165.230.26:60900", "error": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:140\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:62\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:641\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/common/experiment.(*Handler).HandleRPC:42\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:167\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:109\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:157\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-07-13T12:08:17Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T12:08:17Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-13T12:08:17Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T12:08:17Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.used-space-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.used-space-filewalker	subprocess exited with status	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "status": -1, "error": "signal: killed"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:712\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-13T12:08:17Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:712\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-13T12:08:17Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:712\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	INFO	pieces	used-space-filewalker started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-13T12:08:17Z	INFO	lazyfilewalker.used-space-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-13T12:08:17Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:712\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	pieces	used-space-filewalker failed	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": false, "error": "filewalker: context canceled", "errorVerbose": "filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:17Z	ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:721\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T12:08:18Z	ERROR	failure during run	{"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM) does not match running node's ID (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed)", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM) does not match running node's ID (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:157\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM) does not match running node's ID (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed)
2024-07-13 12:08:18,679 INFO exited: storagenode (exit status 1; not expected)
2024-07-13 12:08:19,682 INFO spawned: 'storagenode' with pid 52
2024-07-13 12:08:19,683 WARN received SIGQUIT indicating exit request
2024-07-13 12:08:19,684 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-07-13T12:08:19Z	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode-updater"}
2024-07-13 12:08:19,685 INFO stopped: storagenode-updater (exit status 0)
2024-07-13T12:08:19Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-07-13T12:08:19Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-07-13T12:08:19Z	INFO	Operator email	{"Process": "storagenode", "Address": "StorjGuide@lortemail.dk"}
2024-07-13T12:08:19Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xAA5aF9AE962737aa434a9eDf019cb41884686686"}
2024-07-13T12:08:19Z	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2024-07-13T12:08:19Z	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2024-07-13T12:08:20Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed"}
2024-07-13T12:08:20Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed"}
2024-07-13T12:08:20Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 60}

I still see the error about readability - yet I can easily speed test the disks at 150 MB/s, so the disks are completely fine.

Perhaps. But somehow you provided a wrong identity for this data. There is no other way.

That was a mistake. A big one. This command must be executed only once for the entire identity's life. Like - never again.

Now, I'm not sure whether you can figure out the correct mix of identity and data. I would suggest substituting them one by one until you figure out which one is correct.

Okay - what can I do from here to save these nodes?
How can I validate which identity I need to use?
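
(The mismatch errors themselves already contain both IDs: the node ID stored in the data folder and the node ID of the identity that was mounted. A rough way to collect those pairs from every container, assuming the containers still exist and their logs are retained:)

for c in $(docker ps -a --format '{{.Names}}'); do
    echo "== $c =="
    docker logs "$c" 2>&1 | grep -m1 "does not match"
done

The "node ID in file" is the one the data folder belongs to, while the "running node's ID" comes from whichever identity folder was mounted, so pairing identities with data folders is a matter of matching those two lists up.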

Okay - I will give it a try with a single node at a time, trying to get it running.

I do not know. I can only suggest trying each of them in the hope that one will match.
After that, I would suggest moving this identity to that disk, so you do not hit this again.
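
(Following that advice, a throwaway loop along these lines could try each identity folder against one data directory in turn and report whether the IDs match. A sketch only - the container name id-test and the 30-second wait are arbitrary, and the WALLET/EMAIL/ADDRESS values are placeholders to be replaced once the right pairing is found:)

for id in /home/andreashg/Desktop/Storj/Identity/*/; do
    docker run -d --name id-test \
        --user $(id -u):$(id -g) \
        --mount type=bind,source="$id",destination=/app/identity \
        --mount type=bind,source="/mnt/sdf1/StorjData",destination=/app/config \
        -e WALLET="temp" -e EMAIL="temp@example.com" -e ADDRESS="rightone:3000" -e STORAGE="10.5TB" \
        storjlabs/storagenode:latest
    sleep 30
    echo "== $id =="
    docker logs id-test 2>&1 | grep -m1 "does not match" || echo "no ID mismatch logged with this identity"
    docker rm -f id-test
done

The same loop would have to be repeated for each data directory (each disk), swapping the /mnt/sdf1/StorjData path accordingly.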

running this:

docker run -d --restart unless-stopped --stop-timeout 300 -p 3000:28967/tcp -p 3000:28967/udp -p 14002:14002 -e WALLET="0xxx" -e EMAIL="StorjGuide@lortemail.dk" -e ADDRESS="xx:3000" -e STORAGE="10.5TB" --user $(id -u):$(id -g) --mount type=bind,source="/home/andreashg/Desktop/Storj/Identity/storagenode2",destination=/app/identity --mount type=bind,source="/mnt/sdf1/StorjData",destination=/app/config --name storagenode storjlabs/storagenode:latest
3557e54457f9668409c916246e973e722a399253adecb2e6561b62135ebe261b

gives me a node that runs but looks like this:

and:

So it thinks it's a brand new node - even though it has been running for weeks. Is this because of missing database files, so it will look normal again after a filewalker run, or is it starting a brand new node with this identity?
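
(On the missing-databases question: the node's databases normally sit in the storage subfolder of the config mount - assuming the default layout, that would be /mnt/sdf1/StorjData/storage on the host - so a quick look there shows whether they exist and when they were last written:)

ls -lt /mnt/sdf1/StorjData/storage/*.db

If the .db files are present, the dashboard usually only looks empty until the used-space filewalker finishes; if they are missing, the node recreates blank ones and the dashboard history starts from scratch, while the stored pieces themselves remain.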

After a minute or two it now looks like this:

It seems like node 1 is now showing data from a completely different node - which seems weird.

A node now has a version mismatch?

2024-07-13 12:50:59,547 INFO exited: storagenode (exit status 1; not expected)
2024-07-13 12:51:00,548 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-07-13 12:51:00,550 INFO spawned: 'storagenode' with pid 43
2024-07-13 12:51:00,551 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-07-13T12:51:00Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-07-13T12:51:00Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-07-13T12:51:00Z	INFO	Operator email	{"Process": "storagenode", "Address": "StorjGuide@lortemail.dk"}
2024-07-13T12:51:00Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xAA5aF9AE962737aa434a9eDf019cb41884686686"}
2024-07-13T12:51:00Z	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2024-07-13T12:51:00Z	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2024-07-13T12:51:00Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM"}
2024-07-13T12:51:00Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM"}
2024-07-13T12:51:01Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 57}
2024-07-13T12:51:01Z	ERROR	failure during run	{"Process": "storagenode", "error": "Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60\n\tstorj.io/storj/private/migrate.(*Migration).ValidateVersions:141\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CheckVersion:671\n\tmain.cmdRun:103\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267", "errorVerbose": "Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60\n\tstorj.io/storj/private/migrate.(*Migration).ValidateVersions:141\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CheckVersion:671\n\tmain.cmdRun:103\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267\n\tmain.cmdRun:105\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267"}
Error: Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60
	storj.io/storj/private/migrate.(*Migration).ValidateVersions:141
	storj.io/storj/storagenode/storagenodedb.(*DB).CheckVersion:671
	main.cmdRun:103
	main.newRunCmd.func1:33
	storj.io/common/process.cleanup.func1.4:393
	storj.io/common/process.cleanup.func1:411
	github.com/spf13/cobra.(*Command).execute:983
	github.com/spf13/cobra.(*Command).ExecuteC:1115
	github.com/spf13/cobra.(*Command).Execute:1039
	storj.io/common/process.ExecWithCustomOptions:112
	main.main:34
	runtime.main:267
2024-07-13 12:51:01,018 INFO exited: storagenode (exit status 1; not expected)
2024-07-13 12:51:03,021 INFO spawned: 'storagenode' with pid 53
2024-07-13T12:51:03Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-07-13T12:51:03Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-07-13T12:51:03Z	INFO	Operator email	{"Process": "storagenode", "Address": "StorjGuide@lortemail.dk"}
2024-07-13T12:51:03Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xAA5aF9AE962737aa434a9eDf019cb41884686686"}
2024-07-13T12:51:03Z	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2024-07-13T12:51:03Z	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2024-07-13T12:51:03Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM"}
2024-07-13T12:51:03Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1W6Y76cCzeHHqfUw2SNpkWCBrf7NVsBxJSQrtxNHvicctSwPZM"}
2024-07-13T12:51:03Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 57}
2024-07-13T12:51:03Z	ERROR	failure during run	{"Process": "storagenode", "error": "Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60\n\tstorj.io/storj/private/migrate.(*Migration).ValidateVersions:141\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CheckVersion:671\n\tmain.cmdRun:103\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267", "errorVerbose": "Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60\n\tstorj.io/storj/private/migrate.(*Migration).ValidateVersions:141\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).CheckVersion:671\n\tmain.cmdRun:103\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267\n\tmain.cmdRun:105\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267"}
Error: Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60
	storj.io/storj/private/migrate.(*Migration).ValidateVersions:141
	storj.io/storj/storagenode/storagenodedb.(*DB).CheckVersion:671
	main.cmdRun:103
	main.newRunCmd.func1:33
	storj.io/common/process.cleanup.func1.4:393
	storj.io/common/process.cleanup.func1:411
	github.com/spf13/cobra.(*Command).execute:983
	github.com/spf13/cobra.(*Command).ExecuteC:1115
	github.com/spf13/cobra.(*Command).Execute:1039
	storj.io/common/process.ExecWithCustomOptions:112
	main.main:34
	runtime.main:267

Currently 4 out of 6 nodes are running fine.

The last two nodes both have this error:

Error: Error checking version for storagenode database: validate db version mismatch: expected 26, but current version is 60
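
(For those last two, the same pairing question may apply to the databases: each data folder's .db files were last written by whichever node was using that folder, so comparing timestamps across the remaining data directories can hint at which container last touched which folder. The path pattern below is a guess - adjust it to the actual mount points:)

ls -lt /mnt/*/StorjData/storage/*.db | head -n 20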