Bandwidth Error After Upgrade to 31.9

Ran into a problem this morning after my storage node upgraded to 31.9:

Unrecoverable error {"error": "Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual:

Any idea on how to get this fixed?


Getting the same error on my node as well.
The node keeps restarting.

It looks like there was a problem with one of the bandwidth database migrations on your storage nodes.

I don’t think there was any bandwidth database migration in the last release, but I will ask for confirmation.

What has changed in this version is that the database schema check is now enabled by default, and that is what is causing your nodes to fail and exit.

In the meantime, you can disable that check by adding `preflight.database-check: false` to your node configuration file.
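For example, on a config.yaml-based setup the workaround could be applied like this (the path below is only an illustration; adjust it to your own node, and remember to remove the line again once the schema is fixed):

```shell
# Append the temporary workaround to the node's config file, then restart the node.
# /path/to/storagenode/config.yaml is an example path, not the real one.
echo 'preflight.database-check: false' >> /path/to/storagenode/config.yaml
```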

This database check verifies that the databases have the expected schema, which means that all the migrations ran correctly (unless there is a bug somewhere).

Please disable the check and let us know if your nodes run again.

On the other hand, please follow these instructions so we can see what the schema of your bandwidth database looks like:

$ sqlite3 {storage_dir}/storage/bandwidth.db
SQLite version 3.16.2 2017-01-06 16:32:41
Enter ".help" for usage hints.
sqlite> .output bandwidth-schema.sql
sqlite> .schema
sqlite> .quit

You’ll get a new file, bandwidth-schema.sql, in the directory where you executed the sqlite3 command.
Open the file and paste its contents in a reply to this thread.
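If you prefer a one-shot command, the same dump can be produced non-interactively (a sketch, assuming sqlite3 is on your PATH; `{storage_dir}` is the same placeholder as above):

```shell
# Dump the schema of bandwidth.db to a file without an interactive session.
sqlite3 "{storage_dir}/storage/bandwidth.db" .schema > bandwidth-schema.sql
```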


@S0litiare @mitchellds one more thing.

Please, before disabling the check, post the complete error message that you got; it turns out it should contain the relevant information to figure out how to fix your nodes.

Thanks in advance.

I’m getting the errors in 2 databases, "orders.db" and "used_serial.db".
Here are the schemas for the 2 affected DBs on my node:

used_serial.db:

CREATE TABLE used_serial_ (
						satellite_id  BLOB NOT NULL,
						serial_number BLOB NOT NULL,
						expiration    TIMESTAMP NOT NULL
					);
CREATE INDEX idx_used_serial_ ON used_serial_(expiration);

orders.db:

CREATE TABLE unsent_order (
						satellite_id  BLOB NOT NULL,
						serial_number BLOB NOT NULL,

						order_limit_serialized BLOB      NOT NULL, -- serialized pb.OrderLimit
						order_serialized       BLOB      NOT NULL, -- serialized pb.Order
						order_limit_expiration TIMESTAMP NOT NULL, -- when is the deadline for sending it

						uplink_cert_id INTEGER NOT NULL,

						FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
					);
CREATE TABLE order_archive_ (
						satellite_id  BLOB NOT NULL,
						serial_number BLOB NOT NULL,

						order_limit_serialized BLOB NOT NULL,
						order_serialized       BLOB NOT NULL,

						uplink_cert_id INTEGER NOT NULL,

						status      INTEGER   NOT NULL,
						archived_at TIMESTAMP NOT NULL,

						FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
					);
CREATE TABLE versions (version int, commited_at text);
CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);

Changed `preflight.database-check:` to `false` and the node is running for now.

My node is running fine, and these are the schemas of those 2 DBs:

used_serial.db

CREATE TABLE used_serial_ (
                                                satellite_id  BLOB NOT NULL,
                                                serial_number BLOB NOT NULL,
                                                expiration    TIMESTAMP NOT NULL
                                        );
CREATE UNIQUE INDEX pk_used_serial_ ON used_serial_(satellite_id, serial_number);
CREATE INDEX idx_used_serial_ ON used_serial_(expiration);

orders.db

CREATE TABLE unsent_order (
                                                satellite_id  BLOB NOT NULL,
                                                serial_number BLOB NOT NULL,

                                                order_limit_serialized BLOB      NOT NULL, -- serialized pb.OrderLimit
                                                order_serialized       BLOB      NOT NULL, -- serialized pb.Order
                                                order_limit_expiration TIMESTAMP NOT NULL, -- when is the deadline for sending it

                                                uplink_cert_id INTEGER NOT NULL,

                                                FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
                                        );
CREATE TABLE order_archive_ (
                                                satellite_id  BLOB NOT NULL,
                                                serial_number BLOB NOT NULL,

                                                order_limit_serialized BLOB NOT NULL,
                                                order_serialized       BLOB NOT NULL,

                                                uplink_cert_id INTEGER NOT NULL,

                                                status      INTEGER   NOT NULL,
                                                archived_at TIMESTAMP NOT NULL,

                                                FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
                                        );
CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);
CREATE TABLE versions (version int, commited_at text);
CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);

Basically, your DBs are each missing a UNIQUE index.

We’ll come back to you with a solution; we are discussing this.


Please don’t do that. We have introduced the check for a reason. There is no point in keeping the node running in a potential bad state.

An easy way to solve the issue would be to stop the storage node, delete the file (only if it is orders.db or used_serial.db!), and restart the storage node. The storage node will create a new orders.db file and you should pass the preflight check just fine. Deleting the file will lose a few unsubmitted orders or used serials.


In order to see the exact difference that is triggering the error, so you can debug your schema, paste the error message into a file and run the following to get more legible output:
$ cat err.txt | sed 's/\\n/\n/g; s/\\t/\t/g' >> output.txt
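As a self-contained sanity check of that sed invocation (assuming GNU sed, which accepts `\n` and `\t` in the replacement), you can feed it a sample escaped string directly:

```shell
# Turn the literal \n and \t sequences from the pasted log into real
# newlines and tabs, so the schema diff becomes readable.
printf 'Indexes:\\n\\t&{Name: "idx_orders"}' | sed 's/\\n/\n/g; s/\\t/\t/g'
```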

I’ve stuck the error in pastebin if it’s still mangled by the forum:
https://pastebin.com/dSyBBNZ4

error message:

Unrecoverable error {"error": "Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual: &dbschema.Schema{
	Tables: []*dbschema.Table{&{Name: "order_archive_", Columns: []*dbschema.Column{&{Name: "archived_at", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "status", Type: "INTEGER"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}}, &{Name: "unsent_order", Columns: []*dbschema.Column{&{Name: "order_limit_expiration", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}}},
	Indexes: []*dbschema.Index{
		&{Name: "idx_order_archived_at", Table: "order_archive_", Columns: []string{"archived_at"}},
-		&{
-			Name:    "idx_orders",
-			Table:   "unsent_order",
-			Columns: []string{"satellite_id", "serial_number"},
-		},
	},
}

	storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
	main.cmdRun:196
	storj.io/storj/pkg/process.cleanup.func1.2:299
	storj.io/storj/pkg/process.cleanup.func1:317
	github.com/spf13/cobra.(*Command).execute:826
	github.com/spf13/cobra.(*Command).ExecuteC:914
	github.com/spf13/cobra.(*Command).Execute:864
	storj.io/storj/pkg/process.ExecWithCustomConfig:79
	storj.io/storj/pkg/process.Exec:61
	main.main:326
	runtime.main:203", "errorVerbose": "Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual:   &dbschema.Schema{
  	Tables: []*dbschema.Table{&{Name: \"order_archive_\", Columns: []*dbschema.Column{&{Name: \"archived_at\", Type: \"TIMESTAMP\"}, &{Name: \"order_limit_serialized\", Type: \"BLOB\"}, &{Name: \"order_serialized\", Type: \"BLOB\"}, &{Name: \"satellite_id\", Type: \"BLOB\"}, &{Name: \"serial_number\", Type: \"BLOB\"}, &{Name: \"status\", Type: \"INTEGER\"}, &{Name: \"uplink_cert_id\", Type: \"INTEGER\", Reference: &dbschema.Reference{Table: \"certificate\", Column: \"cert_id\"}}}}, &{Name: \"unsent_order\", Columns: []*dbschema.Column{&{Name: \"order_limit_expiration\", Type: \"TIMESTAMP\"}, &{Name: \"order_limit_serialized\", Type: \"BLOB\"}, &{Name: \"order_serialized\", Type: \"BLOB\"}, &{Name: \"satellite_id\", Type: \"BLOB\"}, &{Name: \"serial_number\", Type: \"BLOB\"}, &{Name: \"uplink_cert_id\", Type: \"INTEGER\", Reference: &dbschema.Reference{Table: \"certificate\", Column: \"cert_id\"}}}}},
  	Indexes: []*dbschema.Index{
  		&{Name: \"idx_order_archived_at\", Table: \"order_archive_\", Columns: []string{\"archived_at\"}},
- 		&{
- 			Name:    \"idx_orders\",
- 			Table:   \"unsent_order\",
- 			Columns: []string{\"satellite_id\", \"serial_number\"},
- 		},
  	},
  }

	storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
	main.cmdRun:196
	storj.io/storj/pkg/process.cleanup.func1.2:299
	storj.io/storj/pkg/process.cleanup.func1:317
	github.com/spf13/cobra.(*Command).execute:826
	github.com/spf13/cobra.(*Command).ExecuteC:914
	github.com/spf13/cobra.(*Command).Execute:864
	storj.io/storj/pkg/process.ExecWithCustomConfig:79
	storj.io/storj/pkg/process.Exec:61
	main.main:326
	runtime.main:203
	main.cmdRun:198
	storj.io/storj/pkg/process.cleanup.func1.2:299
	storj.io/storj/pkg/process.cleanup.func1:317
	github.com/spf13/cobra.(*Command).execute:826
	github.com/spf13/cobra.(*Command).ExecuteC:914
	github.com/spf13/cobra.(*Command).Execute:864
	storj.io/storj/pkg/process.ExecWithCustomConfig:79
	storj.io/storj/pkg/process.Exec:61
	main.main:326
	runtime.main:203"}

Could you try this?

$ sqlite3 /storage/node/config/dir/storage/orders.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);

OK, looks like it worked for the orders db; no errors are being logged for it now.

I’m just getting the error for “used_serial.db” now.

Could you show the diff for that one too? I don’t see one for used serial db, but I can give you another sql command to run to fix that issue.

To clarify, I mean could you give me the formatted error message after piping it through sed as described above?

Sorry,

https://pastebin.com/SaBUTdWS

Unrecoverable error {"error": "Error during preflight check for storagenode databases: storage node preflight database error: used_serial: expected schema does not match actual: &dbschema.Schema{
	Tables: []*dbschema.Table{&{Name: "used_serial_", Columns: []*dbschema.Column{&{Name: "expiration", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}}}},
	Indexes: []*dbschema.Index{
		&{Name: "idx_used_serial_", Table: "used_serial_", Columns: []string{"expiration"}},

Thank you. You can try

$ sqlite3 /storage/node/config/dir/storage/used_serial.db
SQLite version 3.22.0 2018-01-22 18:45:57
Enter ".help" for usage hints.
sqlite> CREATE UNIQUE INDEX pk_used_serial_ ON used_serial_(satellite_id, serial_number);
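For anyone hitting both errors, the two index fixes above can also be applied non-interactively with the node stopped. This is a sketch: the paths are the same examples as above, and `IF NOT EXISTS` is my addition so the commands are safe to re-run:

```shell
# Recreate the missing UNIQUE indexes in one shot (node must be stopped).
sqlite3 /storage/node/config/dir/storage/orders.db \
  'CREATE UNIQUE INDEX IF NOT EXISTS idx_orders ON unsent_order(satellite_id, serial_number);'
sqlite3 /storage/node/config/dir/storage/used_serial.db \
  'CREATE UNIQUE INDEX IF NOT EXISTS pk_used_serial_ ON used_serial_(satellite_id, serial_number);'
```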

FYI, you can make the pasted data look pretty by using backticks:

```
paste here
```


Thanks.

Node is up and running.
NO errors being reported.
Things look good now.



Well, I went through the process described at https://support.storj.io/hc/en-us/articles/360029309111-How-to-fix-a-database-disk-image-is-malformed- to repair my bandwidth database file. The resulting database file had a size greater than zero. Tried to start the storagenode back up and I still get the following error:

Unrecoverable error {"error": "Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual:

In the past, I think Alexy sent me a blank bandwidth database file to use as a replacement. Killed the statistics but the storagenode worked. Is that still a possibility?

Could you follow the process described in this comment and show us the output so we can determine whether it is something that could be easily fixed with a sql query?

Thanks for the reply. I grabbed Alexy’s replacement bandwidth database file and used that one. Killed all of my storagenode statistics but my node is now back up and running.

Here is the full error:

2020-01-30T15:20:19.140Z DEBUG Unrecoverable error {"error": "Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual: &dbschema.Schema{\n \tTables: []*dbschema.Table{\n \t\t&{Name: "bandwidth_usage", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "created_at", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}},\n \t\t&{Name: "bandwidth_usage_rollups", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "interval_start", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}, PrimaryKey: []string{"action", "interval_start", "satellite_id"}},\n+ \t\t&{\n+ \t\t\tName: "unsent_order",\n+ \t\t\tColumns: []*dbschema.Column{\n+ \t\t\t\t&{Name: "order_limit_expiration", Type: "TIMESTAMP"},\n+ \t\t\t\t&{Name: "order_limit_serialized", Type: "BLOB"},\n+ \t\t\t\t&{Name: "order_serialized", Type: "BLOB"},\n+ \t\t\t\t&{Name: "satellite_id", Type: "BLOB"},\n+ \t\t\t\t&{Name: "serial_number", Type: "BLOB"},\n+ \t\t\t\t&{\n+ \t\t\t\t\tName: "uplink_cert_id",\n+ \t\t\t\t\tType: "INTEGER",\n+ \t\t\t\t\tReference: &dbschema.Reference{Table: "certificate", Column: "cert_id"},\n+ \t\t\t\t},\n+ \t\t\t},\n+ \t\t},\n \t},\n \tIndexes: []*dbschema.Index{&{Name: "idx_bandwidth_usage_created", Table: "bandwidth_usage", Columns: []string{"created_at"}}, &{Name: "idx_bandwidth_usage_satellite", Table: "bandwidth_usage", Columns: []string{"satellite_id"}}},\n 
}\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317\n\tmain.cmdRun:196\n\tstorj.io/storj/pkg/process.cleanup.func1.2:299\n\tstorj.io/storj/pkg/process.cleanup.func1:317\n\tgithub.com/spf13/cobra.(*Command).execute:826\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:914\n\tgithub.com/spf13/cobra.(*Command).Execute:864\n\tstorj.io/storj/pkg/process.ExecWithCustomConfig:79\n\tstorj.io/storj/pkg/process.Exec:61\n\tmain.main:326\n\truntime.main:203", "errorVerbose": "Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual: &dbschema.Schema{\n \tTables: []*dbschema.Table{\n \t\t&{Name: "bandwidth_usage", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "created_at", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}},\n \t\t&{Name: "bandwidth_usage_rollups", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "interval_start", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}, PrimaryKey: []string{"action", "interval_start", "satellite_id"}},\n+ \t\t&{\n+ \t\t\tName: "unsent_order",\n+ \t\t\tColumns: []*dbschema.Column{\n+ \t\t\t\t&{Name: "order_limit_expiration", Type: "TIMESTAMP"},\n+ \t\t\t\t&{Name: "order_limit_serialized", Type: "BLOB"},\n+ \t\t\t\t&{Name: "order_serialized", Type: "BLOB"},\n+ \t\t\t\t&{Name: "satellite_id", Type: "BLOB"},\n+ \t\t\t\t&{Name: "serial_number", Type: "BLOB"},\n+ \t\t\t\t&{\n+ \t\t\t\t\tName: "uplink_cert_id",\n+ \t\t\t\t\tType: "INTEGER",\n+ \t\t\t\t\tReference: &dbschema.Reference{Table: "certificate", Column: "cert_id"},\n+ \t\t\t\t},\n+ \t\t\t},\n+ \t\t},\n \t},\n \tIndexes: []*dbschema.Index{&{Name: "idx_bandwidth_usage_created", Table: "bandwidth_usage", Columns: []string{"created_at"}}, &{Name: "idx_bandwidth_usage_satellite", Table: "bandwidth_usage", Columns: []string{"satellite_id"}}},\n 
}\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317\n\tmain.cmdRun:196\n\tstorj.io/storj/pkg/process.cleanup.func1.2:299\n\tstorj.io/storj/pkg/process.cleanup.func1:317\n\tgithub.com/spf13/cobra.(*Command).execute:826\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:914\n\tgithub.com/spf13/cobra.(*Command).Execute:864\n\tstorj.io/storj/pkg/process.ExecWithCustomConfig:79\n\tstorj.io/storj/pkg/process.Exec:61\n\tmain.main:326\n\truntime.main:203\n\tmain.cmdRun:198\n\tstorj.io/storj/pkg/process.cleanup.func1.2:299\n\tstorj.io/storj/pkg/process.cleanup.func1:317\n\tgithub.com/spf13/cobra.(*Command).execute:826\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:914\n\tgithub.com/spf13/cobra.(*Command).Execute:864\n\tstorj.io/storj/pkg/process.ExecWithCustomConfig:79\n\tstorj.io/storj/pkg/process.Exec:61\n\tmain.main:326\n\truntime.main:203"}
Error: Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual: &dbschema.Schema{
 	Tables: []*dbschema.Table{
 		&{Name: "bandwidth_usage", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "created_at", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}},
 		&{Name: "bandwidth_usage_rollups", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "interval_start", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}, PrimaryKey: []string{"action", "interval_start", "satellite_id"}},
+ 		&{
+ 			Name: "unsent_order",
+ 			Columns: []*dbschema.Column{
+ 				&{Name: "order_limit_expiration", Type: "TIMESTAMP"},
+ 				&{Name: "order_limit_serialized", Type: "BLOB"},
+ 				&{Name: "order_serialized", Type: "BLOB"},
+ 				&{Name: "satellite_id", Type: "BLOB"},
+ 				&{Name: "serial_number", Type: "BLOB"},
+ 				&{
+ 					Name:      "uplink_cert_id",
+ 					Type:      "INTEGER",
+ 					Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"},
+ 				},
+ 			},
+ 		},
 	},
 	Indexes: []*dbschema.Index{&{Name: "idx_bandwidth_usage_created", Table: "bandwidth_usage", Columns: []string{"created_at"}}, &{Name: "idx_bandwidth_usage_satellite", Table: "bandwidth_usage", Columns: []string{"satellite_id"}}},
 }

      storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:317
      main.cmdRun:196
      storj.io/storj/pkg/process.cleanup.func1.2:299
      storj.io/storj/pkg/process.cleanup.func1:317
      github.com/spf13/cobra.(*Command).execute:826
      github.com/spf13/cobra.(*Command).ExecuteC:914
      github.com/spf13/cobra.(*Command).Execute:864
      storj.io/storj/pkg/process.ExecWithCustomConfig:79
      storj.io/storj/pkg/process.Exec:61
      main.main:326
      runtime.main:203