I noticed the following. After these steps:
1. stopped the node
2. moved all the db files to a backup
3. started the node
4. all the db files were recreated empty
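Roughly, steps 1–3 were the usual db-move shuffle. As a sketch (the container name, paths, and db file names below are illustrative assumptions, not my exact setup — the sandbox directories just make the snippet self-contained):

```shell
# Sketch of steps 1-3: stop the node, move the *.db files aside, restart.
# The directories here are a throwaway sandbox standing in for the real
# storage location; substitute your actual paths and container name.
set -eu
STORAGE=$(mktemp -d)   # stands in for <storage-dir>/storage
BACKUP=$(mktemp -d)    # stands in for the backup directory

# placeholder db files, as the node would have created them
touch "$STORAGE/bandwidth.db" "$STORAGE/piece_expiration.db"

# docker stop -t 300 storagenode   # 1. stop the node (name is an assumption)
mv "$STORAGE"/*.db "$BACKUP"/      # 2. move all db files to the backup
# docker start storagenode         # 3. start the node (it recreates empty dbs)

ls "$BACKUP"                       # the moved db files are now only here
```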
I am also getting a cyclic restart of the node, with logs like this:
{"log":"2022-09-19 08:35:46,082 INFO spawned: 'storagenode' with pid 44\n","stream":"stdout","time":"2022-09-19T08:35:46.083275589Z"}
{"log":"2022-09-19 08:35:46,083 WARN received SIGQUIT indicating exit request\n","stream":"stdout","time":"2022-09-19T08:35:46.083601701Z"}
{"log":"2022-09-19 08:35:46,083 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die\n","stream":"stdout","time":"2022-09-19T08:35:46.084076985Z"}
{"log":"2022-09-19T08:35:46.083Z\u0009INFO\u0009Got a signal from the OS: \"terminated\"\u0009{\"Process\": \"storagenode-updater\"}\n","stream":"stdout","time":"2022-09-19T08:35:46.084394936Z"}
{"log":"2022-09-19 08:35:46,086 INFO stopped: storagenode-updater (exit status 0)\n","stream":"stdout","time":"2022-09-19T08:35:46.086883129Z"}
{"log":"2022-09-19T08:35:46.121Z\u0009INFO\u0009Configuration loaded\u0009{\"Process\": \"storagenode\", \"Location\": \"/app/config/config.yaml\"}\n","stream":"stdout","time":"2022-09-19T08:35:46.121491484Z"}
{"log":"2022-09-19T08:35:46.121Z\u0009INFO\u0009Anonymized tracing enabled\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:46.12181302Z"}
{"log":"2022-09-19T08:35:46.122Z\u0009INFO\u0009Operator email\u0009{\"Process\": \"storagenode\", \"Address\": \"root@istas.ru\"}\n","stream":"stdout","time":"2022-09-19T08:35:46.122972897Z"}
{"log":"2022-09-19T08:35:46.122Z\u0009INFO\u0009Operator wallet\u0009{\"Process\": \"storagenode\", \"Address\": \"\"}\n","stream":"stdout","time":"2022-09-19T08:35:46.123236137Z"}
{"log":"2022-09-19T08:35:46.826Z\u0009INFO\u0009Telemetry enabled\u0009{\"Process\": \"storagenode\", \"instance ID\": \"\"}\n","stream":"stdout","time":"2022-09-19T08:35:46.827002188Z"}
{"log":"2022-09-19T08:35:46.855Z\u0009INFO\u0009db.migration\u0009Database Version\u0009{\"Process\": \"storagenode\", \"version\": 54}\n","stream":"stdout","time":"2022-09-19T08:35:46.855366399Z"}
{"log":"2022-09-19T08:35:47.146Z\u0009INFO\u0009preflight:localtime\u0009start checking local system clock with trusted satellites' system clock.\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:47.147192012Z"}
{"log":"2022-09-19T08:35:48.080Z\u0009INFO\u0009preflight:localtime\u0009local system clock is in sync with trusted satellites' system clock.\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.081058755Z"}
{"log":"2022-09-19T08:35:48.081Z\u0009INFO\u0009Node started\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.081284897Z"}
{"log":"2022-09-19T08:35:48.081Z\u0009INFO\u0009Public server started on [::]:28967\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.081297304Z"}
{"log":"2022-09-19T08:35:48.081Z\u0009INFO\u0009Private server started on 127.0.0.1:7778\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.081400997Z"}
{"log":"2022-09-19T08:35:48.466Z\u0009WARN\u0009piecestore:monitor\u0009Disk space is less than requested. Allocated space is\u0009{\"Process\": \"storagenode\", \"bytes\": 301784891392}\n","stream":"stdout","time":"2022-09-19T08:35:48.466587408Z"}
{"log":"2022-09-19T08:35:48.466Z\u0009ERROR\u0009piecestore:monitor\u0009Total disk space is less than required minimum\u0009{\"Process\": \"storagenode\", \"bytes\": 500000000000}\n","stream":"stdout","time":"2022-09-19T08:35:48.466637828Z"}
{"log":"2022-09-19T08:35:48.466Z\u0009INFO\u0009trust\u0009Scheduling next refresh\u0009{\"Process\": \"storagenode\", \"after\": \"4h15m26.802766256s\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.466650185Z"}
{"log":"2022-09-19T08:35:48.466Z\u0009ERROR\u0009services\u0009unexpected shutdown of a runner\u0009{\"Process\": \"storagenode\", \"name\": \"piecestore:monitor\", \"error\": \"piecestore monitor: disk space requirement not met\", \"errorVerbose\": \"piecestore monitor: disk space requirement not met\\n\\tstorj.io/storj/storagenode/monitor.(*Service).Run:125\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\\n\\truntime/pprof.Do:40\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.466837724Z"}
{"log":"2022-09-19T08:35:48.466Z\u0009ERROR\u0009piecestore:cache\u0009error getting current used space: \u0009{\"Process\": \"storagenode\", \"error\": \"context canceled; context canceled; context canceled; context canceled; context canceled; context canceled\", \"errorVerbose\": \"group:\\n--- context canceled\\n--- context canceled\\n--- context canceled\\n--- context canceled\\n--- context canceled\\n--- context canceled\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467022088Z"}
{"log":"2022-09-19T08:35:48.467Z\u0009ERROR\u0009pieces:trash\u0009emptying trash failed\u0009{\"Process\": \"storagenode\", \"error\": \"pieces error: filestore error: context canceled\", \"errorVerbose\": \"pieces error: filestore error: context canceled\\n\\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\\n\\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\\n\\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\\n\\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/common/sync2.(*Cycle).Start.func1:77\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467254609Z"}
{"log":"2022-09-19T08:35:48.467Z\u0009ERROR\u0009pieces:trash\u0009emptying trash failed\u0009{\"Process\": \"storagenode\", \"error\": \"pieces error: filestore error: context canceled\", \"errorVerbose\": \"pieces error: filestore error: context canceled\\n\\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\\n\\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\\n\\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\\n\\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/common/sync2.(*Cycle).Start.func1:77\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467366724Z"}
{"log":"2022-09-19T08:35:48.467Z\u0009ERROR\u0009pieces:trash\u0009emptying trash failed\u0009{\"Process\": \"storagenode\", \"error\": \"pieces error: filestore error: context canceled\", \"errorVerbose\": \"pieces error: filestore error: context canceled\\n\\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\\n\\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\\n\\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\\n\\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/common/sync2.(*Cycle).Start.func1:77\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467540985Z"}
{"log":"2022-09-19T08:35:48.467Z\u0009ERROR\u0009pieces:trash\u0009emptying trash failed\u0009{\"Process\": \"storagenode\", \"error\": \"pieces error: filestore error: context canceled\", \"errorVerbose\": \"pieces error: filestore error: context canceled\\n\\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\\n\\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\\n\\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\\n\\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/common/sync2.(*Cycle).Start.func1:77\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467665767Z"}
{"log":"2022-09-19T08:35:48.467Z\u0009ERROR\u0009pieces:trash\u0009emptying trash failed\u0009{\"Process\": \"storagenode\", \"error\": \"pieces error: filestore error: context canceled\", \"errorVerbose\": \"pieces error: filestore error: context canceled\\n\\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\\n\\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\\n\\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\\n\\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/common/sync2.(*Cycle).Start.func1:77\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467791531Z"}
{"log":"2022-09-19T08:35:48.467Z\u0009ERROR\u0009pieces:trash\u0009emptying trash failed\u0009{\"Process\": \"storagenode\", \"error\": \"pieces error: filestore error: context canceled\", \"errorVerbose\": \"pieces error: filestore error: context canceled\\n\\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\\n\\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\\n\\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\\n\\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/common/sync2.(*Cycle).Start.func1:77\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.467988483Z"}
{"log":"2022-09-19T08:35:48.468Z\u0009ERROR\u0009gracefulexit:chore\u0009error retrieving satellites.\u0009{\"Process\": \"storagenode\", \"error\": \"satellitesdb: context canceled\", \"errorVerbose\": \"satellitesdb: context canceled\\n\\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:149\\n\\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:58\\n\\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\\n\\truntime/pprof.Do:40\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.468253448Z"}
{"log":"2022-09-19T08:35:48.468Z\u0009ERROR\u0009nodestats:cache\u0009Get pricing-model/join date failed\u0009{\"Process\": \"storagenode\", \"error\": \"context canceled\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.468396616Z"}
{"log":"2022-09-19T08:35:48.468Z\u0009INFO\u0009bandwidth\u0009Performing bandwidth usage rollups\u0009{\"Process\": \"storagenode\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.468408643Z"}
{"log":"2022-09-19T08:35:48.468Z\u0009ERROR\u0009bandwidth\u0009Could not rollup bandwidth usage\u0009{\"Process\": \"storagenode\", \"error\": \"bandwidthdb: context canceled\", \"errorVerbose\": \"bandwidthdb: context canceled\\n\\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:301\\n\\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\\n\\truntime/pprof.Do:40\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.468478819Z"}
{"log":"2022-09-19T08:35:48.468Z\u0009ERROR\u0009collector\u0009error during collecting pieces: \u0009{\"Process\": \"storagenode\", \"error\": \"pieceexpirationdb: context canceled\", \"errorVerbose\": \"pieceexpirationdb: context canceled\\n\\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\\n\\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:522\\n\\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\\n\\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\\n\\tstorj.io/common/sync2.(*Cycle).Run:99\\n\\tstorj.io/storj/storagenode/collector.(*Service).Run:53\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\\n\\truntime/pprof.Do:40\\n\\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\\n\\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.468500209Z"}
{"log":"2022-09-19T08:35:48.468Z\u0009ERROR\u0009gracefulexit:blobscleaner\u0009couldn't receive satellite's GE status\u0009{\"Process\": \"storagenode\", \"error\": \"context canceled\"}\n","stream":"stdout","time":"2022-09-19T08:35:48.468697151Z"}
{"log":"Error: piecestore monitor: disk space requirement not met\n","stream":"stdout","time":"2022-09-19T08:35:48.707299007Z"}
{"log":"2022-09-19 08:35:48,710 INFO stopped: storagenode (exit status 1)\n","stream":"stdout","time":"2022-09-19T08:35:48.710511321Z"}
{"log":"2022-09-19 08:35:48,711 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)\n","stream":"stdout","time":"2022-09-19T08:35:48.711233137Z"}
Is this normal? I don't quite understand what is wrong in this situation. (I removed the node ID and wallet data from the log.) Could it be that after this a correct empty disk usage db file is not being created?