Docker Storj Node Repair DB

Problem with my Storj node: I repaired my databases (reputations.db and piece_spaced_used.db) following this guide: https://support.storj.io/hc/en-us/articles/360029309111-How-to-fix-a-database-disk-image-is-malformed-
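For reference, the repair in the linked article boils down to dumping the malformed database to SQL and rebuilding a fresh file from that dump. This is a simplified sketch of that approach (the article has more careful steps; stop the node first and always work on a copy — paths here are examples):

```shell
# Simplified dump-and-reload repair sketch for a malformed SQLite database.
# Always stop the storagenode container first and keep the original file.
repair_db() {
  db=$1; fixed=$2
  cp "$db" "$db.bak"                # work on a copy, keep the original
  sqlite3 "$db.bak" ".dump" > "$fixed.sql"
  rm -f "$fixed"
  sqlite3 "$fixed" < "$fixed.sql"   # rebuild a fresh db from the SQL dump
  sqlite3 "$fixed" "PRAGMA integrity_check;"   # should print: ok
}

# Example invocation (paths are assumptions, not from the article):
# repair_db /storagenode3/storage/piece_spaced_used.db /tmp/piece_spaced_used.fixed.db
```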

NOTE: It's not about disk space: I have 1 TB, with 850 MB for Storj.

But I still get errors, even though I checked all my DBs and all of them report "ok":

 sqlite3 /storagenode3/storage/piece_spaced_used.db "PRAGMA integrity_check;"
ok
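To make sure no database is missed, the same check can be looped over every .db file in the storage directory. A small sketch (the directory path is an example from this post):

```shell
# Run PRAGMA integrity_check on every SQLite database in a directory
# and print one "path: result" line per file. Healthy dbs print "ok".
check_dbs() {
  for db in "$1"/*.db; do
    printf '%s: %s\n' "$db" "$(sqlite3 "$db" 'PRAGMA integrity_check;')"
  done
}

# Example: check_dbs /storagenode3/storage
```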

2022-09-06 10:08:46,376 INFO spawned: 'storagenode' with pid 12
2022-09-06 10:08:46,379 INFO spawned: 'storagenode-updater' with pid 13
2022-09-06T10:08:46.392Z        INFO    Configuration loaded    {"Process": "storagenode-updater", "Location": "/app/config/config.yaml"}
2022-09-06T10:08:46.392Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "operator.wallet-features"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "console.address"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "server.address"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "storage.allocated-bandwidth"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "storage.allocated-disk-space"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "operator.email"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "operator.wallet"}
2022-09-06T10:08:46.393Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "server.private-address"}
2022-09-06T10:08:46.394Z        INFO    Invalid configuration file key  {"Process": "storagenode-updater", "Key": "contact.external-address"}
2022-09-06T10:08:46.394Z        INFO    Anonymized tracing enabled      {"Process": "storagenode-updater"}
2022-09-06T10:08:46.403Z        INFO    Running on version      {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.62.3"}
2022-09-06T10:08:46.403Z        INFO    Downloading versions.   {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2022-09-06T10:08:46.410Z        INFO    Configuration loaded    {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-09-06T10:08:46.410Z        INFO    Anonymized tracing enabled      {"Process": "storagenode"}
2022-09-06T10:08:46.418Z        INFO    Operator email  {"Process": "storagenode", "Address": "robbyqhd@gmail.com"}
2022-09-06T10:08:46.418Z        INFO    Operator wallet {"Process": "storagenode", "Address": "0x7106dee8176fa63efbdf5c5946ab1452f90c06c7"}
2022-09-06T10:08:46.860Z        INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.62.3"}
2022-09-06T10:08:46.860Z        INFO    Version is up to date   {"Process": "storagenode-updater", "Service": "storagenode"}
2022-09-06T10:08:46.879Z        INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.62.3"}
2022-09-06T10:08:46.879Z        INFO    Version is up to date   {"Process": "storagenode-updater", "Service": "storagenode-updater"}
2022-09-06T10:08:47.040Z        INFO    Telemetry enabled       {"Process": "storagenode", "instance ID": "12d7ET9MCZhLbzh8rty5riwL8wS5Lhw8Ria4qFYqkZ4UenpoLK9"}
2022-09-06T10:08:47.058Z        INFO    db.migration    Database Version        {"Process": "storagenode", "version": 54}
2022-09-06 10:08:48,058 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-09-06 10:08:48,058 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-09-06 10:08:48,059 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-09-06T10:08:48.408Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.        {"Process": "storagenode"}
2022-09-06T10:08:49.188Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.    {"Process": "storagenode"}
2022-09-06T10:08:49.188Z        INFO    Node 12d7ET9MCZhLbzh8rty5riwL8wS5Lhw8Ria4qFYqkZ4UenpoLK9 started        {"Process": "storagenode"}
2022-09-06T10:08:49.188Z        INFO    Public server started on [::]:28967     {"Process": "storagenode"}
2022-09-06T10:08:49.188Z        INFO    Private server started on 127.0.0.1:7778        {"Process": "storagenode"}
2022-09-06T10:08:49.188Z        INFO    failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.     {"Process": "storagenode"}
2022-09-06T10:08:49.508Z        INFO    trust   Scheduling next refresh {"Process": "storagenode", "after": "8h41m53.880329547s"}
2022-09-06T10:08:49.509Z        WARN    piecestore:monitor      Disk space is less than requested. Allocated space is   {"Process": "storagenode", "bytes": 428050849792}
2022-09-06T10:08:49.509Z        ERROR   piecestore:monitor      Total disk space is less than required minimum  {"Process": "storagenode", "bytes": 500000000000}
2022-09-06T10:08:49.509Z        ERROR   services        unexpected shutdown of a runner {"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: disk space requirement not met", "errorVerbose": "piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:125\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.509Z        ERROR   nodestats:cache Get pricing-model/join date failed      {"Process": "storagenode", "error": "context canceled"}
2022-09-06T10:08:49.510Z        ERROR   gracefulexit:chore      error retrieving satellites.    {"Process": "storagenode", "error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:149\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:58\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   collector       error during collecting pieces:         {"Process": "storagenode", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:521\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        INFO    bandwidth       Performing bandwidth usage rollups      {"Process": "storagenode"}
2022-09-06T10:08:49.510Z        ERROR   bandwidth       Could not rollup bandwidth usage        {"Process": "storagenode", "error": "bandwidthdb: context canceled", "errorVerbose": "bandwidthdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:301\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   pieces:trash    emptying trash failed   {"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-09-06T10:08:49.510Z        ERROR   gracefulexit:blobscleaner       couldn't receive satellite's GE status  {"Process": "storagenode", "error": "context canceled"}
2022-09-06T10:08:49.511Z        ERROR   piecestore:cache        error getting current used space:       {"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
Error: piecestore monitor: disk space requirement not met
2022-09-06 10:08:52,992 INFO exited: storagenode (exit status 1; not expected)
2022-09-06 10:08:53,996 INFO spawned: 'storagenode' with pid 47
2022-09-06 10:08:53,996 WARN received SIGQUIT indicating exit request
2022-09-06 10:08:53,997 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2022-09-06T10:08:53.997Z        INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2022-09-06 10:08:54,000 INFO stopped: storagenode-updater (exit status 0)
2022-09-06 10:08:54,000 INFO stopped: storagenode (terminated by SIGTERM)
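The actual failure is in the piecestore:monitor lines above: the node sees 428050849792 bytes allocated, below the 500000000000-byte default minimum. Converting both values to GB (SI units) makes the mismatch obvious:

```shell
# Values copied from the two piecestore:monitor log lines above;
# integer division by 10^9 converts bytes to GB (SI).
allocated=428050849792
required=500000000000
echo "allocated: $((allocated / 1000000000)) GB"   # prints: allocated: 428 GB
echo "required:  $((required / 1000000000)) GB"    # prints: required:  500 GB
```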

And my dashboard shows this…
Satellite                          Suspension  Audit  Online
us2.storj.io:7777                  0 %         0 %    0 %
saltlake.tardigrade.io:7777        0 %         0 %    0 %
ap1.storj.io:7777                  0 %         0 %    0 %
us1.storj.io:7777                  0 %         0 %    0 %
eu1.storj.io:7777                  0 %         0 %    0 %
europe-north-1.tardigrade.io:7777  0 %         0 %    0 %

Your node failed to start because your piece_spaced_used.db still has issues. You need to start the node so that the filewalker can update its statistics and fill this database with correct used-space values.
Please temporarily add the option --storage2.monitor.minimum-disk-space=400GB to your docker run command, after the image name.
Once the used space has been recognized, you can stop and remove the container and run it again without this temporary option.
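Put together, the suggestion might look like this sketch. Everything except the last line is a placeholder from a typical storagenode setup (mount paths, identity directory, container name are assumptions); only the final flag is the temporary addition, and it must come after the image name so it is passed to the storagenode binary:

```shell
# Hypothetical docker run with the temporary flag appended after the image
# name. Reuse your existing command and only add the last line.
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp \
    --mount type=bind,source=/storagenode3/identity,destination=/app/identity \
    --mount type=bind,source=/storagenode3,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest \
    --storage2.monitor.minimum-disk-space=400GB
```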


Thanks, Alexey! The node works. Once everything was OK again, I stopped and removed the container and dropped the temporary --storage2.monitor.minimum-disk-space option.