Need help: unable to continue graceful exit after updating graceful-exit.num-concurrent-transfers

As the title says: I have a storage node, currently running on Docker, and decided to do a graceful exit.
According to the documentation, I can speed up the graceful exit by increasing graceful-exit.num-concurrent-transfers in the config.yaml file.
But after I did so and restarted the storagenode, it cannot start up again.
Before the restart, one satellite had finished migrating its data, and five satellites in different regions still needed to migrate.
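For reference, this is roughly the change I made in config.yaml (the value shown is only an illustration of an increased setting, not necessarily the exact number I used):

# config.yaml — how many pieces to transfer concurrently during graceful exit
# (increased from the default to speed up the exit; value is illustrative)
graceful-exit.num-concurrent-transfers: 10

After saving the file I restarted the container, roughly with docker restart storagenode (container name as in the standard setup).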

Here are some logs:

2022-11-08T07:07:42.257Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-11-08T07:07:42.258Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2022-11-08T07:07:42.307Z INFO Operator email {"Process": "storagenode", "Address": "123"}
2022-11-08T07:07:42.307Z INFO Operator wallet {"Process": "storagenode", "Address": "123"}
2022-11-08T07:07:43.603Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "123"}
2022-11-08T07:07:43.750Z INFO db.migration Database Version {"Process": "storagenode", "version": 54}
2022-11-08T07:07:44.315Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock. {"Process": "storagenode"}
2022-11-08T07:07:45.263Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock. {"Process": "storagenode"}
2022-11-08 07:07:45,264 INFO waiting for storagenode, processes-exit-eventlistener to die
2022-11-08T07:07:45.264Z INFO Node 123 started {"Process": "storagenode"}
2022-11-08T07:07:45.264Z INFO Public server started on [::]:28967 {"Process": "storagenode"}
2022-11-08T07:07:45.264Z INFO Private server started on 127.0.0.1:7778 {"Process": "storagenode"}
2022-11-08T07:07:46.470Z INFO bandwidth Performing bandwidth usage rollups {"Process": "storagenode"}
2022-11-08T07:07:46.470Z INFO trust Scheduling next refresh {"Process": "storagenode", "after": "8h14m27.061152059s"}
2022-11-08T07:07:46.471Z ERROR piecestore:cache error getting current used space: {"Process": "storagenode", "error": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32", "errorVerbose": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:663\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.471Z ERROR services unexpected shutdown of a runner {"Process": "storagenode", "name": "piecestore:cache", "error": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32", "errorVerbose": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:663\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.471Z ERROR gracefulexit:chore error retrieving satellites. {"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:152\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:164\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:58\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.471Z ERROR nodestats:cache Get pricing-model/join date failed {"Process": "storagenode", "error": "context canceled"}
2022-11-08T07:07:46.472Z ERROR collector error during collecting pieces: {"Process": "storagenode", "error": "context canceled"}
2022-11-08T07:07:46.473Z ERROR gracefulexit:blobscleaner couldn't receive satellite's GE status {"Process": "storagenode", "error": "context canceled"}
2022-11-08T07:07:46.475Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.475Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.475Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.476Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.476Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.477Z ERROR pieces:trash emptying trash failed {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T07:07:46.514Z ERROR bandwidth Could not rollup bandwidth usage {"Process": "storagenode", "error": "sql: transaction has already been committed or rolled back"}
Error: pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32
2022-11-08 07:07:47,237 INFO stopped: storagenode (exit status 1)
2022-11-08 07:07:47,238 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

After I manually removed the trash folder, it still fails with the same error.
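For reference, roughly what I ran (the data path is just how my volume is mounted, adjust for your setup):

docker stop -t 300 storagenode
rm -rf /mnt/storj/storage/trash
docker start storagenode

The logs after restarting: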

2022-11-08T12:08:18.200Z	ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32", "errorVerbose": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:663\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T12:08:18.200Z	ERROR	services	unexpected shutdown of a runner	{"Process": "storagenode", "name": "piecestore:cache", "error": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32", "errorVerbose": "pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:663\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T12:08:18.202Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189"}
2022-11-08T12:08:18.202Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2022-11-08T12:08:18.202Z	ERROR	gracefulexit:blobscleaner	couldn't receive satellite's GE status	{"Process": "storagenode", "error": "context canceled"}
2022-11-08T12:08:18.202Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189"}
2022-11-08T12:08:18.202Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2022-11-08T12:08:18.202Z	ERROR	collector	error during collecting pieces: 	{"Process": "storagenode", "error": "context canceled"}
2022-11-08T12:08:18.202Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"Process": "storagenode", "error": "context canceled"}
2022-11-08T12:08:18.202Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189"}
2022-11-08T12:08:18.202Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2022-11-08T12:08:18.202Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189"}
2022-11-08T12:08:18.202Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2022-11-08T12:08:18.202Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189"}
2022-11-08T12:08:18.202Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2022-11-08T12:08:18.202Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189"}
2022-11-08T12:08:18.202Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2022-11-08T12:08:18.204Z	ERROR	gracefulexit:chore	error retrieving satellites.	{"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:152\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:164\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:58\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-11-08T12:08:18.242Z	ERROR	bandwidth	Could not rollup bandwidth usage	{"Process": "storagenode", "error": "sql: transaction has already been committed or rolled back"}
Error: pieces error: failed to enumerate satellites: node ID: not enough bytes to make a node id; have 3, need 32
2022-11-08 12:08:18,752 INFO stopped: storagenode (exit status 1)
2022-11-08 12:08:18,752 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

This seems to be your issue. The only other time it was mentioned on the forum, the hard disk had errors which needed fixing:
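One thing you could check: the stack trace points at pieces.(*Store).SpaceUsedTotalAndBySatellite, which enumerates the per-satellite folders in the storage directory, and the message says one of those folder names decodes to only 3 bytes instead of a full 32-byte satellite ID. That is only my reading of the trace, but listing the folder names may show a truncated or stray entry, for example:

ls -la /mnt/storj/storage/blobs
ls -la /mnt/storj/storage/trash
# healthy satellite folders have long base32-style names (around 52 characters);
# a very short or garbled entry in either directory would match
# "not enough bytes to make a node id; have 3, need 32"

(/mnt/storj is only an example path; use wherever your data directory actually lives.)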

There is still one other node running fine on the same hard disk.
I suppose it happened because one satellite had already finished migrating its data, so when I restarted, the node ID became incorrect.

I will just remove the node. I think next time we should not update any settings while a graceful exit is in progress.

Please never do so, you are risking the node being disqualified!
And as @Stob suggested, you need to check your disk for errors and fix them; you likely have corrupted data.
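A minimal sketch of such a check, assuming the data sits on an ext4 partition (replace /dev/sdb1 and /mnt/storj with your real device and mount point, and run with root privileges), with the node stopped first:

docker stop -t 300 storagenode
umount /mnt/storj
fsck -f /dev/sdb1
mount /mnt/storj
docker start storagenode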
By the way, running more than one node on the same disk is against the Node Operator Terms & Conditions.