Error during preflight check for storagenode databases

Dear Storj team,
since today my node won't come up anymore.

storagenode_1    | 2020-02-08T13:12:28.852Z     INFO    version running on version v0.31.12
storagenode_1    | 2020-02-08T13:12:29.705Z     INFO    db.migration    Database Version        {"version": 31}
storagenode_1    | Error: Error during preflight check for storagenode databases: storage node preflight database error: used_serial: expected schema does not match actual:   &dbschema.Schema{
storagenode_1    |      Tables: []*dbschema.Table{
storagenode_1    | +            &{
storagenode_1    | +                    Name: "test_table",
storagenode_1    | +                    Columns: []*dbschema.Column{
storagenode_1    | +                            &{Name: "id", Type: "int"},
storagenode_1    | +                            &{Name: "name", Type: "varchar(30)", IsNullable: true},
storagenode_1    | +                    },
storagenode_1    | +                    PrimaryKey: []string{"id"},
storagenode_1    | +            },
storagenode_1    |              &{Name: "used_serial_", Columns: []*dbschema.Column{&{Name: "expiration", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}}},
storagenode_1    |      },
storagenode_1    |      Indexes: []*dbschema.Index{&{Name: "idx_used_serial_", Table: "used_serial_", Columns: []string{"expiration"}}, &{Name: "pk_used_serial_", Table: "used_serial_", Columns: []string{"satellite_id", "serial_number"}}},
storagenode_1    |   }
storagenode_1    |
storagenode_1    |      storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
storagenode_1    |      main.cmdRun:196
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1.2:299
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1:317
storagenode_1    |      github.com/spf13/cobra.(*Command).execute:826
storagenode_1    |      github.com/spf13/cobra.(*Command).ExecuteC:914
storagenode_1    |      github.com/spf13/cobra.(*Command).Execute:864
storagenode_1    |      storj.io/storj/pkg/process.ExecWithCustomConfig:79
storagenode_1    |      storj.io/storj/pkg/process.Exec:61
storagenode_1    |      main.main:326
storagenode_1    |      runtime.main:203
storagenode_1 exited with code 1

I also tried the steps described in Bandwidth Error After Upgrade to 31.9, but then I got a message that the indexes already exist, so that doesn't seem to be the problem.

Any ideas how I can fix it?

best regards
Michael

Please make a backup of used_serial.db and remove it (the simple way to do both is to just rename it to used_serial.db.bak), then restart the storagenode.
Then check your logs. If all is fine, let it run.
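For anyone who prefers scripting it, the backup-and-remove step can be sketched with Python's standard library. The storage path here is a temp-dir stand-in (an assumption); point it at your node's real storage directory:

```python
import tempfile
from pathlib import Path

# Temp-dir stand-in for the node's storage directory (an assumption --
# substitute your real storage path).
storage = Path(tempfile.mkdtemp())
db = storage / "used_serial.db"
db.write_bytes(b"")  # stand-in for the real database file

# Renaming backs the file up and removes the original in one step;
# the storagenode recreates used_serial.db on the next start.
backup = db.with_name("used_serial.db.bak")
db.rename(backup)

print(backup.exists(), db.exists())  # True False
```

Stop the node before touching the file, and keep the .bak around until the node runs cleanly again.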

Thanks for your answer, Alexey.
I moved used_serial.db to used_serial.db.bak and restarted. Now I get the same error as in the other post:

Error: Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual:   &dbschema.Schema{
        Tables: []*dbschema.Table{
                &{Name: "order_archive_", Columns: []*dbschema.Column{&{Name: "archived_at", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "status", Type: "INTEGER"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
+               &{
+                       Name: "test_table",
+                       Columns: []*dbschema.Column{
+                               &{Name: "id", Type: "int"},
+                               &{Name: "name", Type: "varchar(30)", IsNullable: true},
+                       },
+                       PrimaryKey: []string{"id"},
+               },
                &{Name: "unsent_order", Columns: []*dbschema.Column{&{Name: "order_limit_expiration", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
        },
        Indexes: []*dbschema.Index{&{Name: "idx_order_archived_at", Table: "order_archive_", Columns: []string{"archived_at"}}, &{Name: "idx_orders", Table: "unsent_order", Columns: []string{"satellite_id", "serial_number"}}},
  }

        storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
        main.cmdRun:196
        storj.io/storj/pkg/process.cleanup.func1.2:299
        storj.io/storj/pkg/process.cleanup.func1:317
        github.com/spf13/cobra.(*Command).execute:826
        github.com/spf13/cobra.(*Command).ExecuteC:914
        github.com/spf13/cobra.(*Command).Execute:864
        storj.io/storj/pkg/process.ExecWithCustomConfig:79
        storj.io/storj/pkg/process.Exec:61
        main.main:326
        runtime.main:203

I also moved orders.db to orders.db.bak and restarted. Now the error is:

Error: Error creating tables for master database on storagenode: migrate: no such table: main.order_archive_
        storj.io/storj/private/migrate.(*Migration).Run:182
        storj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:291
        main.cmdRun:186
        storj.io/storj/pkg/process.cleanup.func1.2:299
        storj.io/storj/pkg/process.cleanup.func1:317
        github.com/spf13/cobra.(*Command).execute:826
        github.com/spf13/cobra.(*Command).ExecuteC:914
        github.com/spf13/cobra.(*Command).Execute:864
        storj.io/storj/pkg/process.ExecWithCustomConfig:79
        storj.io/storj/pkg/process.Exec:61
        main.main:326
        runtime.main:203

best regards
Michael

For orders.db, removing the database doesn't work.
Please try this: Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual

OK, I did the following:

stop storagenode

rm bandwidth.db
rm storage_usage.db
rm used_serial.db
rm orders.db

start the storagenode and let it create all the tables

storagenode_1    | Error: Error creating tables for master database on storagenode: migrate: no such table: main.order_archive_
storagenode_1    |      storj.io/storj/private/migrate.(*Migration).Run:182
storagenode_1    |      storj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:291
storagenode_1    |      main.cmdRun:186
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1.2:299
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1:317
storagenode_1    |      github.com/spf13/cobra.(*Command).execute:826
storagenode_1    |      github.com/spf13/cobra.(*Command).ExecuteC:914
storagenode_1    |      github.com/spf13/cobra.(*Command).Execute:864
storagenode_1    |      storj.io/storj/pkg/process.ExecWithCustomConfig:79
storagenode_1    |      storj.io/storj/pkg/process.Exec:61
storagenode_1    |      main.main:326
storagenode_1    |      runtime.main:203
storagenode_1 exited with code 1

stop the storagenode

copy orders.db.bak back to orders.db

Execute the sqlite commands:

sqlite3 orders.db
drop index idx_orders;
CREATE INDEX idx_orders ON unsent_order(satellite_id, serial_number);
.exit
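If the sqlite3 CLI isn't installed, the same drop-and-recreate can be done with Python's built-in sqlite3 module. This is a self-contained sketch against an in-memory stand-in with a minimal unsent_order table; in practice, point connect() at the real orders.db file:

```python
import sqlite3

# In-memory stand-in with a minimal unsent_order table; in practice,
# connect to the real orders.db file instead.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE unsent_order (satellite_id BLOB, serial_number BLOB)")
con.execute("CREATE INDEX idx_orders ON unsent_order(satellite_id)")  # malformed index

# The actual fix: drop the malformed index and recreate it on the
# expected column pair.
con.execute("DROP INDEX idx_orders")
con.execute("CREATE INDEX idx_orders ON unsent_order(satellite_id, serial_number)")
con.commit()

# Verify the index now covers both columns.
cols = [row[2] for row in con.execute("PRAGMA index_info(idx_orders)")]
print(cols)  # ['satellite_id', 'serial_number']
```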

start the storagenode, and now I again get:

storagenode_1    | 2020-02-11T09:01:12.562Z     INFO    version running on version v0.31.12
storagenode_1    | 2020-02-11T09:01:13.408Z     INFO    db.migration    Database Version        {"version": 31}
storagenode_1    | Error: Error during preflight check for storagenode databases: storage node preflight database error: used_serial: expected schema does not match actual:   &dbschema.Schema{
storagenode_1    | -    Tables: []*dbschema.Table{
storagenode_1    | -            &{
storagenode_1    | -                    Name: "used_serial_",
storagenode_1    | -                    Columns: []*dbschema.Column{
storagenode_1    | -                            &{Name: "expiration", Type: "TIMESTAMP"},
storagenode_1    | -                            &{Name: "satellite_id", Type: "BLOB"},
storagenode_1    | -                            &{Name: "serial_number", Type: "BLOB"},
storagenode_1    | -                    },
storagenode_1    | -            },
storagenode_1    | -    },
storagenode_1    | +    Tables: nil,
storagenode_1    | -    Indexes: []*dbschema.Index{
storagenode_1    | -            &{Name: "idx_used_serial_", Table: "used_serial_", Columns: []string{"expiration"}},
storagenode_1    | -            &{
storagenode_1    | -                    Name:    "pk_used_serial_",
storagenode_1    | -                    Table:   "used_serial_",
storagenode_1    | -                    Columns: []string{"satellite_id", "serial_number"},
storagenode_1    | -            },
storagenode_1    | -    },
storagenode_1    | +    Indexes: nil,
storagenode_1    |   }
storagenode_1    |
storagenode_1    |      storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
storagenode_1    |      main.cmdRun:196
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1.2:299
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1:317
storagenode_1    |      github.com/spf13/cobra.(*Command).execute:826
storagenode_1    |      github.com/spf13/cobra.(*Command).ExecuteC:914
storagenode_1    |      github.com/spf13/cobra.(*Command).Execute:864
storagenode_1    |      storj.io/storj/pkg/process.ExecWithCustomConfig:79
storagenode_1    |      storj.io/storj/pkg/process.Exec:61
storagenode_1    |      main.main:326
storagenode_1    |      runtime.main:203
storagenode_1 exited with code 1

How can the expected schema not match when the database is recreated on first start if it's not present?

So I backed up all databases, removed them, and did the same procedure as before: stop, restore orders.db, execute the SQL commands, start the storagenode.

The result is:

Error: Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual:   &dbschema.Schema{
        Tables: []*dbschema.Table{
                &{Name: "order_archive_", Columns: []*dbschema.Column{&{Name: "archived_at", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "status", Type: "INTEGER"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
+               &{
+                       Name: "test_table",
+                       Columns: []*dbschema.Column{
+                               &{Name: "id", Type: "int"},
+                               &{Name: "name", Type: "varchar(30)", IsNullable: true},
+                       },
+                       PrimaryKey: []string{"id"},
+               },
                &{Name: "unsent_order", Columns: []*dbschema.Column{&{Name: "order_limit_expiration", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
        },
        Indexes: []*dbschema.Index{&{Name: "idx_order_archived_at", Table: "order_archive_", Columns: []string{"archived_at"}}, &{Name: "idx_orders", Table: "unsent_order", Columns: []string{"satellite_id", "serial_number"}}},
  }

        storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
        main.cmdRun:196
        storj.io/storj/pkg/process.cleanup.func1.2:299
        storj.io/storj/pkg/process.cleanup.func1:317
        github.com/spf13/cobra.(*Command).execute:826
        github.com/spf13/cobra.(*Command).ExecuteC:914
        github.com/spf13/cobra.(*Command).Execute:864
        storj.io/storj/pkg/process.ExecWithCustomConfig:79
        storj.io/storj/pkg/process.Exec:61
        main.main:326
        runtime.main:203

This is driving me crazy :confused:

Please don’t remove any databases unless instructed to. Not all of them can be removed without causing significant issues. The code Alexey linked to was specifically for the orders.db and will only work for that. Can you explain why you also removed 2 others? I see no errors suggesting they had issues.

Hiho.

I tried it this way because all of them throw

Error: Error during preflight check for storagenode databases: storage node preflight database error: XXX: expected schema does not match actual:

best regards
Michael

I'd really appreciate it if you would follow the suggestion and not mess with the other databases until we get the first one fixed.
Please fix orders.db first, as suggested there:

For used_serial.db we will do something similar:

  1. Please, restore the used_serial.db from backup
  2. Execute:
sqlite3 used_serial.db
drop index pk_used_serial_;
CREATE INDEX pk_used_serial_ ON used_serial_(satellite_id, serial_number);
.exit
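The same fix, sketched with Python's built-in sqlite3 module against an in-memory stand-in for used_serial.db (connect to the real file in practice):

```python
import sqlite3

# Minimal stand-in for used_serial.db; connect to the real file in practice.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE used_serial_ "
    "(expiration TIMESTAMP, satellite_id BLOB, serial_number BLOB)")
con.execute("CREATE INDEX pk_used_serial_ ON used_serial_(satellite_id)")  # malformed

# Drop the malformed index and recreate it on the expected columns.
con.execute("DROP INDEX pk_used_serial_")
con.execute(
    "CREATE INDEX pk_used_serial_ ON used_serial_(satellite_id, serial_number)")
con.commit()

# Verify the table's index is back in place.
names = [row[1] for row in con.execute("PRAGMA index_list(used_serial_)")]
print(names)  # ['pk_used_serial_']
```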

Stopped the node
Restored everything from backup
Executed both SQL fixes
Started the node

Results in

storagenode_1    | Error: Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual:   &dbschema.Schema{
storagenode_1    |      Tables: []*dbschema.Table{
storagenode_1    |              &{Name: "order_archive_", Columns: []*dbschema.Column{&{Name: "archived_at", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "status", Type: "INTEGER"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
storagenode_1    | +            &{
storagenode_1    | +                    Name: "test_table",
storagenode_1    | +                    Columns: []*dbschema.Column{
storagenode_1    | +                            &{Name: "id", Type: "int"},
storagenode_1    | +                            &{Name: "name", Type: "varchar(30)", IsNullable: true},
storagenode_1    | +                    },
storagenode_1    | +                    PrimaryKey: []string{"id"},
storagenode_1    | +            },
storagenode_1    |              &{Name: "unsent_order", Columns: []*dbschema.Column{&{Name: "order_limit_expiration", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
storagenode_1    |      },
storagenode_1    |      Indexes: []*dbschema.Index{&{Name: "idx_order_archived_at", Table: "order_archive_", Columns: []string{"archived_at"}}, &{Name: "idx_orders", Table: "unsent_order", Columns: []string{"satellite_id", "serial_number"}}},
storagenode_1    |   }
storagenode_1    |
storagenode_1    |      storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
storagenode_1    |      main.cmdRun:196
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1.2:299
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1:317
storagenode_1    |      github.com/spf13/cobra.(*Command).execute:826
storagenode_1    |      github.com/spf13/cobra.(*Command).ExecuteC:914
storagenode_1    |      github.com/spf13/cobra.(*Command).Execute:864
storagenode_1    |      storj.io/storj/pkg/process.ExecWithCustomConfig:79
storagenode_1    |      storj.io/storj/pkg/process.Exec:61
storagenode_1    |      main.main:326
storagenode_1    |      runtime.main:203
storagenode_1 exited with code 1

best regards
Michael

Please stop the storagenode, do not restore from backup, and execute these commands:

sqlite3 orders.db
drop table test_table;
.exit
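Equivalently, the stray table can be dropped with Python's built-in sqlite3 module. A sketch against an in-memory stand-in that contains the stray test_table; connect to the real orders.db in practice:

```python
import sqlite3

# In-memory stand-in containing the stray test_table; connect to the
# real orders.db in practice.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_table (id int PRIMARY KEY, name varchar(30))")

# The fix: drop the leftover table so the schema matches what the
# preflight check expects. IF EXISTS makes the statement safe to re-run.
con.execute("DROP TABLE IF EXISTS test_table")
con.commit()

# Verify no tables remain in this stand-in database.
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # []
```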

I executed the commands from your last post.

On start, used_serial.db again fails the preflight check.
The previous commands for used_serial.db result in the same error:

storagenode_1    | 2020-02-15T20:48:44.124Z     INFO    db.migration    Database Version        {"version": 31}
storagenode_1    | Error: Error during preflight check for storagenode databases: storage node preflight database error: used_serial: expected schema does not match actual:   &dbschema.Schema{
storagenode_1    |      Tables: []*dbschema.Table{
storagenode_1    | +            &{
storagenode_1    | +                    Name: "test_table",
storagenode_1    | +                    Columns: []*dbschema.Column{
storagenode_1    | +                            &{Name: "id", Type: "int"},
storagenode_1    | +                            &{Name: "name", Type: "varchar(30)", IsNullable: true},
storagenode_1    | +                    },
storagenode_1    | +                    PrimaryKey: []string{"id"},
storagenode_1    | +            },
storagenode_1    |              &{Name: "used_serial_", Columns: []*dbschema.Column{&{Name: "expiration", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}}},
storagenode_1    |      },
storagenode_1    |      Indexes: []*dbschema.Index{&{Name: "idx_used_serial_", Table: "used_serial_", Columns: []string{"expiration"}}, &{Name: "pk_used_serial_", Table: "used_serial_", Columns: []string{"satellite_id", "serial_number"}}},
storagenode_1    |   }
storagenode_1    |
storagenode_1    |      storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
storagenode_1    |      main.cmdRun:196
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1.2:299
storagenode_1    |      storj.io/storj/pkg/process.cleanup.func1:317
storagenode_1    |      github.com/spf13/cobra.(*Command).execute:826
storagenode_1    |      github.com/spf13/cobra.(*Command).ExecuteC:914
storagenode_1    |      github.com/spf13/cobra.(*Command).Execute:864
storagenode_1    |      storj.io/storj/pkg/process.ExecWithCustomConfig:79
storagenode_1    |      storj.io/storj/pkg/process.Exec:61
storagenode_1    |      main.main:326
storagenode_1    |      runtime.main:203
storagenode_1 exited with code 1

best regards
Michael

Please use the same command for used_serial.db.

I ran out of space, stopped docker, did a little cleanup on the disk, started docker, and got this preflight error.

Error: Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual:   &dbschema.Schema{
        Tables: []*dbschema.Table{
                &{Name: "bandwidth_usage", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "created_at", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}},
                &{Name: "bandwidth_usage_rollups", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "interval_start", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}, PrimaryKey: []string{"action", "interval_start", "satellite_id"}},

@Alexey help, please

Please stop your storagenode. Install sqlite3 or use the docker container.
Make a backup of bandwidth.db (for example, just copy it to bandwidth.db.bak).

Then use this command:


but replace orders.db with bandwidth.db.

Then try to start the storagenode.

You should always allow for 10% overhead of free space on your drive to prevent this in the future.


Thanks, Alexey.

The node finally started back up again. There are now some errors in the logs about the piecestore cache, but they are getting fewer.

Hopefully that fixed everything and my node won't be taken offline or something, because it has been down for days now (1S5aWN72kH35pTbxJP65McV3yi7bJPmnuAESxRUNBK4xSjYJAq)

Thank you very much and best regards
Michael


I don't have a file named bandwidth.db.
When I run .tables on orders.db, there is no table named test_table.
When I cp orders.db bandwidth.db, I get this:
Error: Error creating tables for master database on storagenode: migrate: no such tables: main.order_archive_

You should not copy orders.db to bandwidth.db; they are not compatible.
You should have a bandwidth.db in your storage folder, otherwise you would not have such an error in the first place.

I understood it as: do cp orders.db bandwidth.db.
Even if I had one, now it is orders.db.
So is everything lost?

I meant replacing the part of the command containing orders.db with bandwidth.db, not copying/moving the file.
You have lost the bandwidth usage stats for your node, nothing else.

Just restore it from the backup you made before (keep the backup for a while) and execute this:

sqlite3 bandwidth.db
drop table test_table;
.exit