Error: Error during preflight check for storagenode databases: preflight: database "notifications": expected schema does not match actual:

Hello, I had a power cut and some of my databases became corrupt. I successfully managed to repair bandwidth.db, but following the same process for notifications.db, secret.db, heldamount.db and pricing.db has not worked. These are the errors I see in the log:

  1. Error: Error during preflight check for storagenode databases: preflight: database "notifications": expected schema does not match actual: &dbschema.Schema{

  2. Error: Error during preflight check for storagenode databases: preflight: database "heldamount": expected schema does not match actual: &dbschema.Schema{

  3. Error: Error during preflight check for storagenode databases: preflight: database "secret": expected schema does not match actual: &dbschema.Schema{

  4. Error: Error during preflight check for storagenode databases: preflight: database "pricing": expected schema does not match actual: &dbschema.Schema{
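For context, the preflight check compares each database's actual schema, as recorded in its sqlite_master table, against the schema the node expects. You can inspect the "actual" side of that comparison yourself with a small sketch like this (illustrative code, not part of the node; the file path is whatever your database is named):

```python
import sqlite3

def actual_schema(db_path):
    """List the CREATE statements SQLite currently holds for a database file."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL ORDER BY name"
    ).fetchall()
    con.close()
    return [sql for (sql,) in rows]
```

Comparing that output against the same database from a healthy node shows exactly which tables or indexes are missing or malformed.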

Any help would be appreciated.
Thanks

You can recreate all corrupted databases with this guide: How to fix database: file is not a database error – Storj, but you would lose the historic data.
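For reference, the recovery in that guide boils down to dumping every statement SQLite can still read out of the damaged file and replaying it into a brand-new database. A minimal Python sketch of the same idea (illustrative only; the guide itself uses the sqlite3 command-line tool, and the paths here are hypothetical):

```python
import sqlite3

def rebuild_database(damaged_path, fresh_path):
    """Replay everything SQLite can still read from a damaged database
    into a brand-new file, dropping whatever is unrecoverable."""
    src = sqlite3.connect(damaged_path)
    dst = sqlite3.connect(fresh_path)
    # iterdump() yields the CREATE/INSERT statements for all readable content
    dump_sql = "\n".join(src.iterdump())
    dst.executescript(dump_sql)
    src.close()
    dst.close()
```

Run it only while the node is stopped, and keep a copy of the damaged file until the node passes preflight again.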

Hey Alexey, thank you for the article. I followed it through and think I'm in a better place than before, but I'm still getting errors. Here is what the log is saying now… Does it mean I need to start fresh? :frowning:

2021-09-29T07:32:02.223Z INFO Configuration loaded {"Location": "/app/config/config.yaml"}
2021-09-29T07:32:02.229Z INFO Operator email {"Address": "**************"}
2021-09-29T07:32:02.230Z INFO Operator wallet {"Address": "*************"}
2021-09-29T07:32:09.032Z INFO Telemetry enabled {"instance ID": "19KAa7Ss84z9jhq2hcX8LGKYc8MUQomNFmiCBQBG1FvmMa9zxS"}
2021-09-29T07:32:09.113Z INFO db.migration Database Version {"version": 53}
2021-09-29T07:32:37.121Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2021-09-29T07:32:38.016Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2021-09-29T07:32:38.017Z INFO bandwidth Performing bandwidth usage rollups

2021-09-29T07:32:38.017Z WARN piecestore:monitor Disk space is less than requested. Allocated space is {"bytes": 193092812800}
2021-09-29T07:32:38.017Z ERROR piecestore:monitor Total disk space is less than required minimum {"bytes": 500000000000}

2021-09-29T07:32:38.017Z INFO Node 19KAa7Ss84z9jhq2hcX8LGKYc8MUQomNFmiCBQBG1FvmMa9zxS started
2021-09-29T07:32:38.017Z INFO Public server started on [::]:28967
2021-09-29T07:32:38.017Z INFO Private server started on 127.0.0.1:7778

2021-09-29T07:32:38.017Z ERROR services unexpected shutdown of a runner {"name": "piecestore:monitor", "error": "piecestore monitor: disk space requirement not met", "errorVerbose": "piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:123\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.018Z ERROR nodestats:cache Get pricing-model/join date failed {"error": "context canceled"}

2021-09-29T07:32:38.019Z INFO trust Scheduling next refresh {"after": "5h21m47.431829094s"}

2021-09-29T07:32:38.019Z ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.019Z ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.020Z ERROR gracefulexit:chore error retrieving satellites. {"error": "satellitesdb: context canceled", "errorVerbose": "satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:149\n\tstorj.io/storj/storagenode/gracefulexit.(*service).ListPendingExits:89\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run.func1:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:50\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.020Z ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.020Z ERROR gracefulexit:blobscleaner couldn't receive satellite's GE status {"error": "context canceled"}

2021-09-29T07:32:38.020Z ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.020Z ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.021Z ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.021Z ERROR collector error during collecting pieces: {"error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:521\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2021-09-29T07:32:38.021Z ERROR piecestore:cache error getting current used space: {"error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}

2021-09-29T07:32:38.456Z ERROR bandwidth Could not rollup bandwidth usage {"error": "sql: transaction has already been committed or rolled back"}

2021-09-29T07:32:55.520Z INFO Got a signal from the OS: "terminated"

Error: piecestore monitor: disk space requirement not met

If the worst has happened and I need to start again, is it truly from the beginning, or can I pick up somewhat where I left off?
Thanks in advance

No, you need to temporarily disable free-space monitoring until the filewalker updates the databases with the actual usage.
You need to set storage2.monitor.minimum-disk-space: 0B in the config.yaml, or pass it as an option in your docker run, i.e.

docker run -d ... storjlabs/storagenode:latest --storage2.monitor.minimum-disk-space=0B

The node will start and update the databases with the current usage; your dashboard will show wrong information for a while, until the usage is recovered. If you deleted bandwidth.db, your previous bandwidth usage stats are lost. However, this does not affect the information on the satellites.
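To see why the override gets the node past the error above: the monitor simply compares the reported total disk space against a configured minimum (500 GB by default), so setting the minimum to 0 makes the comparison always pass while the filewalker recalculates real usage. A simplified sketch of that gate (not the actual Storj code; names are illustrative):

```python
DEFAULT_MINIMUM_BYTES = 500_000_000_000  # the 500 GB floor from the log above

def preflight_disk_check(total_bytes, minimum_bytes=DEFAULT_MINIMUM_BYTES):
    """Mimic the monitor's gate: fail when total space is below the minimum.
    Setting minimum_bytes to 0 (the suggested workaround) always passes."""
    if total_bytes < minimum_bytes:
        raise RuntimeError("piecestore monitor: disk space requirement not met")
```

With the 193 GB reported in the log, the default check fails exactly as shown; with the minimum set to 0 it passes. Remember to remove the override once the dashboard reflects reality again.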

Thank you Alexey, your support has been fantastic.

Unfortunately life got in the way so I’ve only just been able to carry out your recommended fixes. This has got my node back online but now my online status is below 80% for all regions. Is this something I need to worry about? I read one of your articles suggesting it can only be down for a few hours each month.

You only get suspended if you drop below 60% at the moment. It’s still best to keep downtime low because you will miss out on egress and lose data to repair while offline. But there shouldn’t be any permanent consequences. Just keep it online from now on and you’ll be fine.