You shouldn’t start from scratch. If you have a backup of orders.db, you can restore it and fix it the same way as described there:
No luck. The SQL command returns: no such table: main.unsent_order
You do not have a backup?
Ok, apply this script:
CREATE TABLE unsent_order (
satellite_id BLOB NOT NULL,
serial_number BLOB NOT NULL,
order_limit_serialized BLOB NOT NULL, -- serialized pb.OrderLimit
order_serialized BLOB NOT NULL, -- serialized pb.Order
order_limit_expiration TIMESTAMP NOT NULL, -- when is the deadline for sending it
uplink_cert_id INTEGER NOT NULL,
FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
);
CREATE TABLE order_archive_ (
satellite_id BLOB NOT NULL,
serial_number BLOB NOT NULL,
order_limit_serialized BLOB NOT NULL,
order_serialized BLOB NOT NULL,
uplink_cert_id INTEGER NOT NULL,
status INTEGER NOT NULL,
archived_at TIMESTAMP NOT NULL,
FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
);
CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);
CREATE TABLE versions (version int, commited_at text);
CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);
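If you prefer to script the repair instead of pasting statements into the sqlite3 shell, here is a minimal Python sketch using the standard-library sqlite3 module. The db_path value and the IF NOT EXISTS clauses are my own additions (not part of the original script); IF NOT EXISTS makes the script safe to re-run, which matches the "ignore errors if the object exists" advice below.

```python
import sqlite3

# Hypothetical path to the node's orders database; adjust to your setup.
db_path = "orders.db"

# Same schema as the script above, with IF NOT EXISTS so re-running it
# silently skips any objects that are already present.
schema = """
CREATE TABLE IF NOT EXISTS unsent_order (
    satellite_id BLOB NOT NULL,
    serial_number BLOB NOT NULL,
    order_limit_serialized BLOB NOT NULL,   -- serialized pb.OrderLimit
    order_serialized BLOB NOT NULL,         -- serialized pb.Order
    order_limit_expiration TIMESTAMP NOT NULL,
    uplink_cert_id INTEGER NOT NULL,
    FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
);
CREATE TABLE IF NOT EXISTS order_archive_ (
    satellite_id BLOB NOT NULL,
    serial_number BLOB NOT NULL,
    order_limit_serialized BLOB NOT NULL,
    order_serialized BLOB NOT NULL,
    uplink_cert_id INTEGER NOT NULL,
    status INTEGER NOT NULL,
    archived_at TIMESTAMP NOT NULL,
    FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
);
CREATE UNIQUE INDEX IF NOT EXISTS idx_orders
    ON unsent_order(satellite_id, serial_number);
CREATE TABLE IF NOT EXISTS versions (version int, commited_at text);
CREATE INDEX IF NOT EXISTS idx_order_archived_at
    ON order_archive_(archived_at);
"""

con = sqlite3.connect(db_path)
con.executescript(schema)
con.commit()

# List the tables so you can confirm the missing pieces were created.
names = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(names)
con.close()
```

Note that SQLite does not enforce the foreign key to certificate by default, so the script succeeds even if that table is created by a different migration.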
You can ignore errors if the object already exists; the script will create only the missing parts.
Because so many things were tried on the DB, I lost track of all the versions. I found an old version, was able to run the commands above successfully, and it appears I’m back up.
I’m having similar problems with my node. The integrity check on all databases seems to be fine, but it still fails.
2020-02-23T14:31:03.221Z INFO version running on version v0.33.4
2020-02-23T14:31:03.233Z INFO db.migration Database Version {"version": 31}
Error: Error during preflight check for storagenode databases: storage node preflight database error: orders: expected schema does not match actual: &dbschema.Schema{
Tables: []*dbschema.Table{
&{Name: "order_archive_", Columns: []*dbschema.Column{&{Name: "archived_at", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "status", Type: "INTEGER"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
+ &{
+ Name: "test_table",
+ Columns: []*dbschema.Column{
+ &{Name: "id", Type: "int"},
+ &{Name: "name", Type: "varchar(30)", IsNullable: true},
+ },
+ PrimaryKey: []string{"id"},
+ },
&{Name: "unsent_order", Columns: []*dbschema.Column{&{Name: "order_limit_expiration", Type: "TIMESTAMP"}, &{Name: "order_limit_serialized", Type: "BLOB"}, &{Name: "order_serialized", Type: "BLOB"}, &{Name: "satellite_id", Type: "BLOB"}, &{Name: "serial_number", Type: "BLOB"}, &{Name: "uplink_cert_id", Type: "INTEGER", Reference: &dbschema.Reference{Table: "certificate", Column: "cert_id"}}}},
},
Indexes: []*dbschema.Index{&{Name: "idx_order_archived_at", Table: "order_archive_", Columns: []string{"archived_at"}}, &{Name: "idx_orders", Table: "unsent_order", Columns: []string{"satellite_id", "serial_number"}}},
}
storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:301
main.cmdRun:198
storj.io/storj/pkg/process.cleanup.func1.2:307
storj.io/storj/pkg/process.cleanup.func1:325
github.com/spf13/cobra.(*Command).execute:826
github.com/spf13/cobra.(*Command).ExecuteC:914
github.com/spf13/cobra.(*Command).Execute:864
storj.io/storj/pkg/process.ExecWithCustomConfig:84
storj.io/storj/pkg/process.ExecCustomDebug:66
main.main:328
runtime.main:203
Stop your storagenode, make a backup of orders.db, then execute the following (you can use the native sqlite3 binary or the docker version of sqlite3):
sqlite3 orders.db
drop table test_table;
.exit
Then start the node.
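The same fix can be scripted if you cannot or do not want to use the sqlite3 shell. This is a sketch only, using Python's standard-library sqlite3 module; the function name, paths, and backup filename are my own choices, and DROP TABLE IF EXISTS is used so the script is harmless to re-run:

```python
import shutil
import sqlite3

def drop_test_table(db_path: str, backup_path: str) -> None:
    """Back up the database file, then drop the leftover test_table."""
    # Back up first -- the storagenode must be stopped at this point.
    shutil.copy2(db_path, backup_path)
    con = sqlite3.connect(db_path)
    # test_table is the stray table the preflight schema check complains
    # about; IF EXISTS keeps this safe to run again after it is gone.
    con.execute("DROP TABLE IF EXISTS test_table")
    con.commit()
    con.close()

# Example (paths are assumptions; point them at your node's storage dir):
# drop_test_table("orders.db", "orders.db.bak")
```

After running it, start the node and check the log: the preflight check should now pass.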
This fixed the problem, thank you.