Hey - I have 6 nodes on a Pi 5 that I can't get back to life.
First node's logs:
2024-07-13T11:09:37Z ERROR contact:service ping satellite failed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z ERROR contact:service ping satellite failed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z ERROR contact:service ping satellite failed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z ERROR contact:service ping satellite failed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-07-13T11:09:37Z ERROR pieces failed to lazywalk space used by satellite {"Process": "storagenode", "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-13T11:09:37Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:187\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:422\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-07-13T11:09:37Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "context canceled"}
2024-07-13T11:09:37Z ERROR pieces failed to lazywalk space used by satellite {"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-13T11:09:37Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2024-07-13T11:09:37Z ERROR pieces failed to lazywalk space used by satellite {"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-13T11:09:37Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024-07-13T11:09:37Z ERROR pieces failed to lazywalk space used by satellite {"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:707\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-13T11:09:37Z ERROR piecestore:cache error getting current used space: {"Process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:716\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-13T11:09:38Z ERROR failure during run {"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed) does not match running node's ID (126gxM4eda1jjtL39bBDTdQrbWw76d5rowEFyk2np6DgpbcQjS8)", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed) does not match running node's ID (126gxM4eda1jjtL39bBDTdQrbWw76d5rowEFyk2np6DgpbcQjS8)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:157\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mtGb19qCNzieJHNyP8zLSBuhrUCB2qstW1Ft55QEeK1KUwZed) does not match running node's ID (126gxM4eda1jjtL39bBDTdQrbWw76d5rowEFyk2np6DgpbcQjS8)
2024-07-13 11:09:38,452 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-07-13 11:09:38,455 INFO stopped: storagenode (exit status 1)
2024-07-13 11:09:38,457 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
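The last error on node 1 is the node ID mismatch, so that node is apparently reading a storage directory that belongs to a different identity. Assuming the standard Docker containers (the container names below are just placeholders for my setup), this is roughly how I'd compare the mounts and check which containers complain about the mismatch - maybe I'm looking at the wrong thing:

# container names are placeholders for my setup
for c in storagenode1 storagenode2 storagenode3 storagenode4 storagenode5 storagenode6; do
  echo "== $c =="
  # which host paths are mounted into this container (identity + storage)
  docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$c"
  # does this container also log the node ID mismatch?
  docker logs "$c" 2>&1 | grep -m1 "does not match running node" || echo "no mismatch logged"
done

If one of the disks got remounted under a different path after a reboot, two nodes could easily end up pointed at each other's storage, which would explain exactly this error.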
Second node:
2024-07-05T21:25:59Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "TT6X74Y5RHKVQFQCUDPPP5PIAPZ3I3N22ZV7U6VWJYHE5EUWUWBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 50944, "Remote Address": "82.165.221.198:51930"}
2024-07-05T21:25:59Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "2KE4OTF4FRBWJRVHKGWG6ATP32SDZWNQZ4PWGPVJUSDEIGUOVMNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:46262", "Available Space": 10477589737890}
2024-07-05T21:25:59Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "2KE4OTF4FRBWJRVHKGWG6ATP32SDZWNQZ4PWGPVJUSDEIGUOVMNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:46262", "Size": 1792}
2024-07-05T21:26:00Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "ALC3EU752ATW2SA6VQO7PSWR7AQYGNODJFJFDWF32P2U4F4MLHJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:58134", "Available Space": 10477589735586}
2024-07-05T21:26:00Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "ALC3EU752ATW2SA6VQO7PSWR7AQYGNODJFJFDWF32P2U4F4MLHJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:58134", "Size": 14592}
2024-07-05T21:26:00Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "ZLQG6PL4JJKS7ZRJAKBA5HD2TUSEB24ONL5FBDFT2TSKRWHJOPFQ", "Satellite ID": "12EayRS2V1kEsWESU9Q^C
2024-07-05T21:26:05Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "UYEZQKFDXTI3GA5KW3DRNOSUO2VC6BX3R56FY4SR7P5EJTVA4VYA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "82.165.221.198:42726", "Size": 761856}
2024-07-05T21:26:07Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "REJU5QSIBRKV3DN4P6353MSJMI5NQDNZJ6BBT4TJJR3OYMFFWBDA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:54154", "Available Space": 10477588179618}
2024-07-05T21:26:07Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "REJU5QSIBRKV3DN4P6353MSJMI5NQDNZJ6BBT4TJJR3OYMFFWBDA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:54154", "Size": 2048}
2024-07-05T21:26:08Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "QGPAAOZUKDBW3DITQJB44P33HZVOS2ZW7NF7PCYKTPATYUCVL2NQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:48518", "Available Space": 10477588177058}
2024-07-05T21:26:08Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "QGPAAOZUKDBW3DITQJB44P33HZVOS2ZW7NF7PCYKTPATYUCVL2NQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:48518", "Size": 512}
2024-07-05T21:26:09Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "OJBCT2PJLEUHODMTR7DS5AOTDXRDP6TFXZPDNEYW4WKC5YUDRIQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45836", "Available Space": 10477588176034}
2024-07-05T21:26:09Z INFO piecestore upload canceled (race lost or node shutdown) {"Process": "storagenode", "Piece ID": "OJBCT2PJLEUHODMTR7DS5AOTDXRDP6TFXZPDNEYW4WKC5YUDRIQQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45836"}
2024-07-05T21:26:09Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "FSLLJPUUNQGGAH5IDJE74MRQGTCI36AW6CQE5FRUNIK7XQXF4XVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45718", "Available Space": 10477587884194}
2024-07-05T21:26:09Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "FSLLJPUUNQGGAH5IDJE74MRQGTCI36AW6CQE5FRUNIK7XQXF4XVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:45718", "Size": 50176}
2024-07-05T21:26:10Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "PIZ7LUSPPS4NOZB5T6YUZYN5ZNWSBTUR3VEXHNRCCNCXREUDYE4Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "82.165.221.198:49720", "Available Space": 10477587833506}
2024-07-05T21:26:10Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "NNFAYSMTB3NFFDHFNLJTZSBE7KKCMH3NNYVJLH5SL3NZ4YJDZRFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 144896, "Size": 36352, "Remote Address": "82.165.221.198:56840"}
2024-07-05T21:26:10Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "NNFAYSMTB3NFFDHFNLJTZSBE7KKCMH3NNYVJLH5SL3NZ4YJDZRFA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 144896, "Size": 36352, "Remote Address": "82.165.221.198:56840"}
2024-07-05T21:26:11Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "PIZ7LUSPPS4NOZB5T6YUZYN5ZNWSBTUR3VEXHNRCCNCXREUDYE4Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "82.165.221.198:49720", "Size": 98048}
2024-07-05T21:26:11Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "DHRLCQEFASUCVAKE6QQR3INWOKZ66WRKTIVZ4QRFBZSHOUPVELBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:35782", "Available Space": 10477587734946}
2024-07-05T21:26:12Z INFO piecestore upload canceled (race lost or node shutdown) {"Process": "storagenode", "Piece ID": "DHRLCQEFASUCVAKE6QQR3INWOKZ66WRKTIVZ4QRFBZSHOUPVELBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:35782"}
2024-07-05T21:26:12Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "R6QPAAEBL4IUGEUNXRICOKDR64ONSSQT52FVH3PN5VB2W4WIZAUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 10752, "Remote Address": "82.165.221.198:47552"}
2024-07-05T21:26:12Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "Q6PADD74HOPQVZTH4PL36HVYUSWJWJ723HQD22W7NZBSGZNMAWZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:47878", "Available Space": 10477587364770}
It's like node 1 has a different problem than the other 5 nodes.
The other 5 just keep rebooting and give logs like the ones below (I've sketched what I'd check next right after them):
2024-07-04T20:49:26Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "HZYIPRWQIMJGGMWOSNH5MQUDBOLXLADE3WS3UMOU56D35NUUMNUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:47462", "Available Space": 10496631893394}
2024-07-04T20:49:26Z INFO piecestore uploaded {"Process": "storagenode", "Piece ID": "HZYIPRWQIMJGGMWOSNH5MQUDBOLXLADE3WS3UMOU56D35NUUMNUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:47462", "Size": 1280}
2024-07-04T20:49:27Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "R5CTDZGYJNBAQF3BZ77T4XD4UDYRST4QRBZVFBLJB5GR6R6DC42Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:35394", "Available Space": 10496631891602}
2024-07-04T20:49:27Z INFO piecestore upload started {"Process": "storagenode", "Piece ID": "PJXO2WDTXFBXM6J6G2B3AFANH4S5SCHBBLEN6T2GXPO3ZRGEJGVQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "82.165.221.198:53988", "Available Space": 10496631891602}
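Since node 1 also logged "lazyfilewalker: signal: killed", my guess is the Pi is running out of memory and the kernel is killing the processes, which might also explain the constant restarts. Again assuming the standard Docker containers (names are placeholders for mine), this is roughly how I'd check for the OOM killer and for restart loops:

# look for OOM-killer activity in the kernel log
dmesg -T | grep -iE "out of memory|oom" | tail -n 20

# restart count, OOM flag and last exit code per container, plus the last errors logged
for c in storagenode1 storagenode2 storagenode3 storagenode4 storagenode5 storagenode6; do
  echo "== $c =="
  docker inspect -f 'restarts={{.RestartCount}} oom_killed={{.State.OOMKilled}} exit={{.State.ExitCode}}' "$c"
  docker logs --tail 200 "$c" 2>&1 | grep -iE "FATAL|ERROR" | tail -n 5
done

If it really is memory pressure, I guess I need to stop all 6 nodes from running their filewalkers at the same time, or add swap, but I'd like to confirm the cause first.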
I have tried to debug it without success - can anyone help or see the issue?