Three days ago, I noticed that my Docker container keeps restarting. I've run this node for 11 months and have had no issues with it until now.
Is there any way I can bring this node back? Here is the log output from one start of the container:
2024-09-24 09:30:10,561 INFO Set uid to user 0 succeeded
2024-09-24 09:30:10,564 INFO RPC interface 'supervisor' initialized
2024-09-24 09:30:10,564 INFO supervisord started with pid 1
2024-09-24 09:30:11,566 INFO spawned: 'processes-exit-eventlistener' with pid 9
2024-09-24 09:30:11,568 INFO spawned: 'storagenode' with pid 10
2024-09-24 09:30:11,568 INFO spawnerr: command at '/app/config/bin/storagenode-updater' is not executable
2024-09-24T09:30:11Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-09-24T09:30:11Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2024-09-24T09:30:11Z INFO Operator email {"Process": "storagenode", "Address": "***"}
2024-09-24T09:30:11Z INFO Operator wallet {"Process": "storagenode", "Address": "***"}
2024-09-24 09:30:12,681 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-09-24 09:30:12,681 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-09-24 09:30:12,681 INFO spawnerr: command at '/app/config/bin/storagenode-updater' is not executable
2024-09-24T09:30:14Z INFO server kernel support for tcp fast open unknown {"Process": "storagenode"}
2024-09-24T09:30:14Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "1XT***"}
2024-09-24T09:30:14Z INFO Event collection enabled {"Process": "storagenode", "instance ID": "1XT***"}
2024-09-24 09:30:14,724 INFO spawnerr: command at '/app/config/bin/storagenode-updater' is not executable
2024-09-24T09:30:16Z INFO db.migration Database Version {"Process": "storagenode", "version": 61}
2024-09-24T09:30:17Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock. {"Process": "storagenode"}
2024-09-24T09:30:17Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock. {"Process": "storagenode"}
2024-09-24 09:30:17,909 INFO spawnerr: command at '/app/config/bin/storagenode-updater' is not executable
2024-09-24 09:30:17,909 INFO gave up: storagenode-updater entered FATAL state, too many start retries too quickly
2024-09-24T09:30:17Z INFO Node 1XT*** started {"Process": "storagenode"}
2024-09-24T09:30:17Z INFO Public server started on [::]:28967 {"Process": "storagenode"}
2024-09-24T09:30:17Z INFO Private server started on 127.0.0.1:7778 {"Process": "storagenode"}
2024-09-24T09:30:17Z INFO trust Scheduling next refresh {"Process": "storagenode", "after": "5h52m48.801279588s"}
2024-09-24T09:30:17Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "C4EOZECC3VP7VAOIE2QEUXFMNYAH2RDXSEMBYXWFEQQQOD6NMDJQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 362240, "Size": 181504, "Remote Address": "172.17.0.1:46124"}
2024-09-24 09:30:17,910 WARN received SIGQUIT indicating exit request
2024-09-24 09:30:17,910 INFO waiting for processes-exit-eventlistener, storagenode to die
2024-09-24T09:30:17Z INFO collector expired pieces collection started {"Process": "storagenode"}
2024-09-24T09:30:17Z INFO pieces:trash emptying trash started {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-24T09:30:17Z INFO retain Prepared to run a Retain request. {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-09-13T17:59:59Z", "Filter Size": 16267347, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-09-24T09:30:17Z INFO bandwidth Persisting bandwidth usage cache to db {"Process": "storagenode"}
2024-09-24T09:30:17Z INFO lazyfilewalker.trash-cleanup-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-09-24T09:30:17Z ERROR lazyfilewalker.trash-cleanup-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024-09-24T09:30:17Z ERROR servers unexpected shutdown of a runner {"Process": "storagenode", "name": "debug", "error": "debug: listener closed", "errorVerbose": "debug: listener closed\n\tstorj.io/drpc/drpcmigrate.init:17\n\truntime.doInit1:7176\n\truntime.doInit:7143\n\truntime.main:253"}
2024-09-24T09:30:17Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: lazyfilewalker: context canceled", "errorVerbose": "pieces error: lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:195\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:436\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-09-24T09:30:17Z ERROR gracefulexit:chore error retrieving satellites. {"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:197\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:59\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:55\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:48\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:17Z ERROR piecestore:cache error during init space usage db: {"Process": "storagenode", "error": "piece space used: context canceled", "errorVerbose": "piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:55\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:60\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:17Z ERROR collector error during expired pieces collection {"Process": "storagenode", "count": 0, "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).getExpiredPaginated:103\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:66\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpiredBatchSkipV0:612\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:17Z ERROR nodestats:cache Get pricing-model/join date failed {"Process": "storagenode", "error": "context canceled"}
2024-09-24T09:30:17Z ERROR version failed to get process version info {"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tstorj.io/storj/storagenode/version.(*Chore).checkVersion:115\n\tstorj.io/storj/storagenode/version.(*Chore).RunOnce:71\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:17Z ERROR collector error during collecting pieces: {"Process": "storagenode", "error": "pieces error: pieceexpirationdb: context canceled", "errorVerbose": "pieces error: pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).getExpiredPaginated:103\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:66\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpiredBatchSkipV0:612\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:17Z ERROR gracefulexit:blobscleaner couldn't receive satellite's GE status {"Process": "storagenode", "error": "context canceled"}
2024-09-24T09:30:17Z ERROR piecestore download failed {"Process": "storagenode", "Piece ID": "C4EOZECC3VP7VAOIE2QEUXFMNYAH2RDXSEMBYXWFEQQQOD6NMDJQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 362240, "Size": 181504, "Remote Address": "172.17.0.1:46124", "error": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:146\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:64\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:666\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-09-24T09:30:17Z INFO lazyfilewalker.gc-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-09-24T09:30:17Z ERROR piecestore download failed {"Process": "storagenode", "Piece ID": "C4EOZECC3VP7VAOIE2QEUXFMNYAH2RDXSEMBYXWFEQQQOD6NMDJQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 724480, "Size": 181504, "Remote Address": "172.17.0.1:46154", "error": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "untrusted: unable to get signee: trust: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).VerifyOrderLimitSignature:146\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:64\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:666\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2024-09-24T09:30:18Z ERROR lazyfilewalker.gc-filewalker failed to start subprocess {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
2024-09-24T09:30:18Z ERROR pieces lazyfilewalker failed {"Process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkSatellitePiecesToTrash:163\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:575\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:380\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:265\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:18Z ERROR filewalker failed to get progress from database {"Process": "storagenode"}
2024-09-24T09:30:18Z ERROR retain retain pieces failed {"Process": "storagenode", "cachePath": "config/retain", "error": "retain: filewalker: context canceled", "errorVerbose": "retain: filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:181\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:582\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:380\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:265\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-09-24T09:30:19Z ERROR failure during run {"Process": "storagenode", "error": "debug: listener closed", "errorVerbose": "debug: listener closed\n\tstorj.io/drpc/drpcmigrate.init:17\n\truntime.doInit1:7176\n\truntime.doInit:7143\n\truntime.main:253"}
Error: debug: listener closed
2024-09-24 09:30:19,536 WARN stopped: storagenode (exit status 1)
2024-09-24 09:30:19,537 WARN stopped: processes-exit-eventlistener (terminated by SIGTERM)
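What stands out to me is that supervisord keeps reporting spawnerr: command at '/app/config/bin/storagenode-updater' is not executable until the updater enters the FATAL state, right before the SIGQUIT that takes the whole node down. Could a lost executable bit on that binary be the whole problem? As a first check I put together this small script (a rough sketch only; the path is the one from the log, i.e. inside the container, so the host-side path under my bind mount may differ):

```python
#!/usr/bin/env python3
"""Check whether storagenode-updater exists and is executable.

Rough diagnostic sketch only: the path below is the in-container path
reported by supervisord; adjust it if running against the host-side
bind mount instead of inside the container.
"""
import os
import stat

UPDATER = "/app/config/bin/storagenode-updater"  # path taken from the spawnerr log line

if not os.path.exists(UPDATER):
    print(f"{UPDATER} is missing entirely")
else:
    st = os.stat(UPDATER)
    executable = bool(st.st_mode & stat.S_IXUSR)
    print(f"size={st.st_size} bytes, mode={stat.filemode(st.st_mode)}, executable={executable}")
    if not executable:
        # Restore the executable bits supervisord is complaining about.
        os.chmod(UPDATER, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
        print("added +x; restart the container and watch the logs again")
```

If the binary turns out to be corrupted or zero bytes rather than just missing the +x bit, I assume it would need to be replaced, but I'm not sure whether the container fetches a fresh copy on start, so I'd rather hear from someone who has hit this before.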