Error: Error during preflight check for storagenode databases: preflight: database "notifications": expected schema does not match actual:

Hello, I had a power cut and some of the databases became corrupt. I successfully managed to repair bandwidth.db, but doing the same process for notifications.db, secret.db, heldamount.db and pricing.db has resulted in errors. These are the errors I see in the log:

  1. Error: Error during preflight check for storagenode databases: preflight: database “notifications”: expected schema does not match actual: &dbschema.Schema{

  2. Error: Error during preflight check for storagenode databases: preflight: database “heldamount”: expected schema does not match actual: &dbschema.Schema{

  3. Error: Error during preflight check for storagenode databases: preflight: database “secret”: expected schema does not match actual: &dbschema.Schema{

  4. Error: Error during preflight check for storagenode databases: preflight: database “pricing”: expected schema does not match actual: &dbschema.Schema{

Any help would be appreciated.
Thanks

You can recreate all corrupted databases with this guide: How to fix database: file is not a database error – Storj, but you would lose the historic data.
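For reference, the recovery steps from that guide look roughly like this (a sketch from memory, assuming the node's data directory is /mnt/storj/storage, sqlite3 is installed on the host, and the container is named storagenode; adjust paths and names to your setup):

# stop and remove the container first so nothing writes to the databases
docker stop -t 300 storagenode
docker rm storagenode

cd /mnt/storj/storage   # hypothetical path -- use your own data location

# check which databases are damaged
sqlite3 notifications.db "PRAGMA integrity_check;"

# dump whatever can still be read out of a damaged database
sqlite3 notifications.db <<'EOF'
.mode insert
.output /tmp/dump_all.sql
.dump
.exit
EOF

# strip unfinished transactions from the dump
grep -v -e TRANSACTION -e ROLLBACK -e COMMIT /tmp/dump_all.sql > /tmp/dump_clean.sql

# move the damaged file aside and load the dump into a fresh database
mv notifications.db notifications.db.bak
sqlite3 notifications.db ".read /tmp/dump_clean.sql"

Repeat for each damaged database, then start the node again with your usual docker run command.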

Hey Alexey, thank you for the article. I followed it through and think I'm in a better place than before, but I'm still getting errors. Here is what the log is saying now… Does it mean I need to start fresh? :frowning:

2021-09-29T07:32:02.223Z INFO Configuration loaded {“Location”: “/app/config/config.yaml”}
2021-09-29T07:32:02.229Z INFO Operator email {“Address”: “**************”}
2021-09-29T07:32:02.230Z INFO Operator wallet {“Address”: “*************”}
2021-09-29T07:32:09.032Z INFO Telemetry enabled {“instance ID”: “19KAa7Ss84z9jhq2hcX8LGKYc8MUQomNFmiCBQBG1FvmMa9zxS”}
2021-09-29T07:32:09.113Z INFO db.migration Database Version {“version”: 53}
2021-09-29T07:32:37.121Z INFO preflight:localtime start checking local system clock with trusted satellites’ system clock.
2021-09-29T07:32:38.016Z INFO preflight:localtime local system clock is in sync with trusted satellites’ system clock.
2021-09-29T07:32:38.017Z INFO bandwidth Performing bandwidth usage rollups

2021-09-29T07:32:38.017Z WARN piecestore:monitor Disk space is less than requested. Allocated space is {“bytes”: 193092812800}
2021-09-29T07:32:38.017Z ERROR piecestore:monitor Total disk space is less than required minimum {“bytes”: 500000000000}

2021-09-29T07:32:38.017Z INFO Node 19KAa7Ss84z9jhq2hcX8LGKYc8MUQomNFmiCBQBG1FvmMa9zxS started
2021-09-29T07:32:38.017Z INFO Public server started on [::]:28967
2021-09-29T07:32:38.017Z INFO Private server started on 127.0.0.1:7778

2021-09-29T07:32:38.017Z ERROR services unexpected shutdown of a runner {“name”: “piecestore:monitor”, “error”: “piecestore monitor: disk space requirement not met”, “errorVerbose”: “piecestore monitor: disk space requirement not met\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:123\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.018Z ERROR nodestats:cache Get pricing-model/join date failed {“error”: “context canceled”}

2021-09-29T07:32:38.019Z INFO trust Scheduling next refresh {“after”: “5h21m47.431829094s”}

2021-09-29T07:32:38.019Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.019Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.020Z ERROR gracefulexit:chore error retrieving satellites. {“error”: “satellitesdb: context canceled”, “errorVerbose”: “satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:149\n\tstorj.io/storj/storagenode/gracefulexit.(*service).ListPendingExits:89\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run.func1:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:50\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.020Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.020Z ERROR gracefulexit:blobscleaner couldn’t receive satellite’s GE status {“error”: “context canceled”}

2021-09-29T07:32:38.020Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.020Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.021Z ERROR pieces:trash emptying trash failed {“error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:154\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:310\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:367\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.021Z ERROR collector error during collecting pieces: {“error”: “pieceexpirationdb: context canceled”, “errorVerbose”: “pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:521\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

2021-09-29T07:32:38.021Z ERROR piecestore:cache error getting current used space: {“error”: “context canceled; context canceled; context canceled; context canceled; context canceled; context canceled”, “errorVerbose”: “group:\n— context canceled\n— context canceled\n— context canceled\n— context canceled\n— context canceled\n— context canceled”}

2021-09-29T07:32:38.456Z ERROR bandwidth Could not rollup bandwidth usage {“error”: “sql: transaction has already been committed or rolled back”}

2021-09-29T07:32:55.520Z INFO Got a signal from the OS: “terminated”

Error: piecestore monitor: disk space requirement not met

If the worst has happened and I need to start again, is it truly from the beginning, or can I pick up somewhat from where I left off?
Thanks in advance

No, you need to temporarily disable free space monitoring until the filewalker updates the databases with actual usage.
You need to set storage2.monitor.minimum-disk-space: 0B in the config.yaml or pass it as an option in your docker run, e.g.

docker run -d ... storjlabs/storagenode:latest --storage2.monitor.minimum-disk-space=0B

The node will start and will update the databases with the current usage; your dashboard will show wrong information for a while, until it recovers the usage. If you deleted bandwidth.db, then your previous bandwidth usage stats will be lost. However, it will not affect the information on the satellites.
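If you use the config file instead of the command-line option, the change would look roughly like this (a sketch; the config.yaml location is assumed to be the one shown in your log, /app/config/config.yaml, i.e. the mounted config directory):

# add or change this line in config.yaml:
storage2.monitor.minimum-disk-space: 0B

# then restart the container
docker restart storagenode

# once the used-space filewalker has finished and the dashboard shows sane numbers again,
# remove the override (the default minimum is 500000000000 bytes, as the error above shows)
# and restart once more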

Thank you Alexey, your support has been fantastic.

Unfortunately life got in the way so I’ve only just been able to carry out your recommended fixes. This has got my node back online but now my online status is below 80% for all regions. Is this something I need to worry about? I read one of your articles suggesting it can only be down for a few hours each month.

You only get suspended if you drop below 60% at the moment. It’s still best to keep downtime low because you will miss out on egress and lose data to repair while offline. But there shouldn’t be any permanent consequences. Just keep it online from now on and you’ll be fine.

Hi Alexey, so I was getting this error

docker logs storagenode 2>&1 | grep -i "error" | tail
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{

You suggested that I need to recreate my database. After step 5, I get this:

pi@pi-sam:/mnt/mydisk_seagate/storage/storage $ find -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';'
./bandwidth.dbok
./heldamount.dbok
./info.dbok
./notifications.dbok
./orders.dbok
./pieceinfo.dbok
./piece_expiration.dbok
./pricing.dbok
./reputation.dbok
./satellites.dbok
./secret.dbok
./storage_usage.dbok
./used_serial.dbok
./piece_spaced_used.dbok

In step 6, you suggest that if there are no errors then we can start the node again, which I did. But I still get the same preflight check error.

Your bandwidth.db database was partially rolled back and thus is not consistent, even though it is reported as healthy, so there are two alternatives:

  1. Drop or create the reported columns, or drop/insert the row with the database schema version, depending on the error, which you did not post in full (see the sketch below)
  2. Re-create this database: https://support.storj.io/hc/en-us/articles/4403032417044-How-to-fix-database-file-is-not-a-database-error

In both cases you will likely lose the bandwidth stats; in the first case they could be saved (not guaranteed), but it is longer and more complicated.
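For the first alternative, the general idea is to compare the actual schema against the expected one from the full preflight error and patch the differences by hand. A minimal sketch, assuming the data directory from your output above and that the node is stopped; the actual statements depend entirely on the schema diff the preflight check prints, so the ones below are only hypothetical examples:

docker stop -t 300 storagenode

cd /mnt/mydisk_seagate/storage/storage

# inspect the actual tables and schema, then compare with the expected schema from the error
sqlite3 bandwidth.db ".tables"
sqlite3 bandwidth.db ".schema"

# hypothetical fixes -- substitute the exact table/column names from your schema diff:
# sqlite3 bandwidth.db "ALTER TABLE bandwidth_usage ADD COLUMN some_missing_column TEXT;"
# sqlite3 bandwidth.db "DROP TABLE some_leftover_table;"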

I am still facing this issue:

pi@pi-sam:~ $ docker logs storagenode 2>&1 | grep -i "error" | tail
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
2023-04-13T18:37:41.192Z	ERROR	Error updating service.	{"Process": "storagenode-updater", "Service": "storagenode", "error": "context canceled", "errorVerbose": "context canceled\n\tmain.downloadBinary:61\n\tmain.update:39\n\tmain.loopFunc:27\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tmain.cmdRun:136\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tmain.main:20\n\truntime.main:250"}
2023-04-13T18:37:41.230Z	ERROR	Error updating service.	{"Process": "storagenode-updater", "Service": "storagenode-updater", "error": "Get \"https://github.com/storj/storj/releases/download/v1.76.2/storagenode-updater_linux_arm.zip\": context canceled", "errorVerbose": "Get \"https://github.com/storj/storj/releases/download/v1.76.2/storagenode-updater_linux_arm.zip\": context canceled\n\tmain.downloadBinary:58\n\tmain.update:39\n\tmain.loopFunc:32\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tmain.cmdRun:136\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tmain.main:20\n\truntime.main:250"}
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
2023-04-13T18:37:51.392Z	ERROR	Error updating service.	{"Process": "storagenode-updater", "Service": "storagenode-updater", "error": "context canceled", "errorVerbose": "context canceled\n\tmain.downloadBinary:58\n\tmain.update:39\n\tmain.loopFunc:32\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tmain.cmdRun:136\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tmain.main:20\n\truntime.main:250"}
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{

For the FATAL ERROR check:

pi@pi-sam:~ $ docker logs storagenode 2>&1 | grep -i "FATAL ERROR" | tail
pi@pi-sam:~ $ docker logs --tail 20 storagenode
2023-04-13T18:43:18.368Z	INFO	Invalid configuration file value for key	{"Process": "storagenode-updater", "Key": "log.development"}
2023-04-13T18:43:18.368Z	INFO	Invalid configuration file value for key	{"Process": "storagenode-updater", "Key": "log.level"}
2023-04-13T18:43:18.370Z	INFO	Anonymized tracing enabled	{"Process": "storagenode-updater"}
2023-04-13T18:43:18.407Z	INFO	Running on version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.76.2"}
2023-04-13T18:43:18.408Z	INFO	Downloading versions.	{"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2023-04-13T18:43:18.490Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2023-04-13T18:43:18.491Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2023-04-13T18:43:18.513Z	INFO	Operator email	{"Process": "storagenode", "Address": "MYEMAIL@gmail.com"}
2023-04-13T18:43:18.514Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "MYADDRESS"}
2023-04-13T18:43:19.364Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.76.2"}
2023-04-13T18:43:19.364Z	INFO	Version is up to date	{"Process": "storagenode-updater", "Service": "storagenode"}
2023-04-13 18:43:19,365 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-13 18:43:19,366 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-13 18:43:19,366 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-13T18:43:19.403Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.76.2"}
2023-04-13T18:43:19.403Z	INFO	Version is up to date	{"Process": "storagenode-updater", "Service": "storagenode-updater"}
2023-04-13T18:43:19.468Z	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2023-04-13T18:43:19.468Z	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2023-04-13T18:43:20.604Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1jPshe5yQkrm2pZSapcS6M4tsHMv16hCmXdMsXWrbeqjah319T"}
2023-04-13T18:43:20.605Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1jPshe5yQkrm2pZSapcS6M4tsHMv16hCmXdMsXWrbeqjah319T"}

Now it's bandwidth.db. You need to re-create it too.
I suppose you could have other corrupted databases as well.
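To see which databases are actually damaged before re-creating anything, a more readable per-file check could look roughly like this (a sketch; run it with the node stopped, from the storage directory shown in your earlier output). Note that an "ok" result only means the file is not corrupted at the SQLite level; as seen above, it does not guarantee that the schema matches what the node expects:

docker stop -t 300 storagenode

cd /mnt/mydisk_seagate/storage/storage

# print one result per database instead of the run-together output above
for db in *.db; do
    printf '%s: ' "$db"
    sqlite3 "$db" "PRAGMA integrity_check;"
done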