As the title says: I have a storage node, currently running on Docker, and I decided to gracefully shut it down.
Based on the documentation, I saw that I could speed up the graceful exit by increasing graceful-exit.num-concurrent-transfers in the config.yaml file.
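For reference, this is the kind of edit I mean in config.yaml (the value 10 here is only illustrative, not necessarily what I set):

```yaml
# config.yaml (storagenode)
# number of concurrent transfers per graceful exit worker
graceful-exit.num-concurrent-transfers: 10
```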
But after I did that and restarted the storagenode, it cannot start up anymore.
Before the restart, one satellite had finished migrating its data, and there were still 5 satellites in different regions left to migrate.
Here are some logs:
2022-11-08T07:07:42.257Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-11-08T07:07:42.258Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2022-11-08T07:07:42.307Z INFO Operator email {"Process": "storagenode", "Address": "123"}
2022-11-08T07:07:42.307Z INFO Operator wallet {"Process": "storagenode", "Address": "123"}
2022-11-08T07:07:43.603Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "123"}
2022-11-08T07:07:43.750Z INFO db.migration Database Version {"Process": "storagenode", "version": 54}
2022-11-08T07:07:44.315Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock. {"Process": "storagenode"}
2022-11-08T07:07:45.263Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock. {"Process": "storagenode"}
2022-11-08 07:07:45,264 INFO waiting for storagenode, processes-exit-eventlistener to die
2022-11-08T07:07:45.264Z INFO Node 123 started {"Process": "storagenode"}
2022-11-08T07:07:45.264Z INFO Public server started on [::]:28967 {"Process": "storagenode"}
2022-11-08T07:07:45.264Z INFO Private server started on 127.0.0.1:7778 {"Process": "storagenode"}
2022-11-08T07:07:46.470Z INFO bandwidth Performing bandwidth usage rollups {"Process": "storagenode"}
2022-11-08T07:07:46.470Z INFO trust Scheduling next refresh {"Process": "storagenode", "after": "8h14m27.061152059s"}
2022-11-08T07:07:46.471Z ERROR piecestore:cache error getting current used space: {"Process": "storagenode", "error": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32", "errorVerbose": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:663\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.471Z ERROR services unexpected shutdown of a runner {"Process": "storagenode", "name": "piecestore:cache", "error": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32", "errorVerbose": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:663\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.471Z ERROR gracefulexit:chore error retrieving satellites. {"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:152\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:164\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:58\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.471Z ERROR nodestats:cache Get pricing-model/join date failed {"Process": "storagenode", "error": "context canceled"}
2022-11-08T07:07:46.472Z ERROR collector error during collecting pieces: {"Process": "storagenode", "error": "context canceled"}
2022-11-08T07:07:46.473Z ERROR gracefulexit:blobscleaner couldn't receive satellite's GE status {"Process": "storagenode", "error": "context canceled"}
2022-11-08T07:07:46.475Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.475Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.475Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.476Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.476Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.477Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.514Z ERROR bandwidth Could not rollup bandwidth usage {"Process": "storagenode", "error": "sql: transaction has already been committed or rolled back"}
Error: pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32
2022-11-08 07:07:47,237 INFO stopped: storagenode (exit status 1)
2022-11-08 07:07:47,238 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)