Orders Database - File is Not in Database Error

@deathlessdd I will seriously consider that. Job number one is to get the bad boy back up and running. I’m over-invested at this point and I would hate to lose all of my escrowed funding.

Once I get him back up and stable, hopefully, I will consider a deliberate move over to Windows.

Does anybody know what used_serial.db is used for? Would a properly formatted but blank copy of used_serial.db work to get him running again?

It’s used for storing used serials, which prevents uplinks from using the same serial for multiple transfers. Yes, as far as I know, you can use an empty db with the right empty tables in it and it should work just fine.
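For reference, a blank db with the “right empty tables” can be produced from any known-good copy by replaying its schema (and nothing else) into a fresh file. A minimal sketch using Python’s built-in sqlite3 module — the file names are placeholders, and the table layout shown is illustrative, not the exact storagenode schema:

```python
import sqlite3

# Stand-in for a healthy used_serial.db copied from another node.
# (Illustrative table layout, not the exact storagenode schema.)
good = sqlite3.connect("used_serial_good.db")
good.execute("CREATE TABLE used_serial_ (satellite_id BLOB, serial_number BLOB, expiration TIMESTAMP)")
good.execute("INSERT INTO used_serial_ VALUES (x'01', x'02', '2020-01-01')")
good.commit()

# Replay only the schema (CREATE statements), not the rows, into a new file.
blank = sqlite3.connect("used_serial_blank.db")
for (ddl,) in good.execute("SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"):
    blank.execute(ddl)
blank.commit()

# The blank copy has the same tables but zero rows.
tables = [t for (t,) in blank.execute("SELECT name FROM sqlite_master WHERE type='table'")]
rows = blank.execute("SELECT count(*) FROM used_serial_").fetchone()[0]
print(tables, rows)  # → ['used_serial_'] 0
```

The sqlite3 CLI equivalent would be dumping with `.schema` and reading that into a new file.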

@BrightSilence I found one of your earlier posts where you posted a link to a blank used_serial.db file.

I grabbed it. Thank you, in advance.

Once my orders.db gets rebuilt, which at the current pace, should take three days (I know, right?), I will use the blank file you posted. :slight_smile:

Six hours got me 11% complete on the rebuild so far.

You could copy the db onto an SSD; that will make it faster. Or, if you have enough RAM, create a ramdisk big enough to hold it.

That was created a long time ago and may no longer be in the correct format. Just to be sure, use this one.

https://alpha.transfer.sh/MWHrp/used_serial.db
Note to anyone running into this post later on: Created for nodes running v0.33.4, may not work on other/newer versions.

@BrightSilence Thank you VERY much! I grabbed it. I will let you know how things work out!

I totally get how to copy the db onto an SSD. That’s not a problem. But how do you split the databases from the actual stored files? They are typically on the same drive in the same folder structure.

For example, c:\storagenode\storage <- everything is in there

You just copy the database file that you are currently repairing onto a faster drive. When you are done repairing, you copy it back.

But you can’t do it inside docker… you would need to run your tools externally.
Oh, I just realized that point 7 in the wiki actually suggests using tmpfs to speed things up, but you’re on Windows and that probably won’t work.
However, installing sqlite on Windows would work, and you can then repair your db anywhere you copy it.

Everything finished up this morning and I ran integrity checks on all of the databases as follows:

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\pieceinfo.db,destination=/pieceinfo.db sstc/sqlite3 sqlite3 /pieceinfo.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\info.db,destination=/info.db sstc/sqlite3 sqlite3 /info.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\piece_spaced_used.db,destination=/piece_spaced_used.db sstc/sqlite3 sqlite3 /piece_spaced_used.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\reputation.db,destination=/reputation.db sstc/sqlite3 sqlite3 /reputation.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\storage_usage.db,destination=/storage_usage.db sstc/sqlite3 sqlite3 /storage_usage.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\piece_expiration.db,destination=/piece_expiration.db sstc/sqlite3 sqlite3 /piece_expiration.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\orders.db,destination=/orders.db sstc/sqlite3 sqlite3 /orders.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\satellites.db,destination=/satellites.db sstc/sqlite3 sqlite3 /satellites.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\notifications.db,destination=/notifications.db sstc/sqlite3 sqlite3 /notifications.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\bandwidth.db,destination=/bandwidth.db sstc/sqlite3 sqlite3 /bandwidth.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\used_serial.db,destination=/used_serial.db sstc/sqlite3 sqlite3 /used_serial.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\info.rebuild.db,destination=/info.rebuild.db sstc/sqlite3 sqlite3 /info.rebuild.db "PRAGMA integrity_check;"
ok

C:\Users\darren.mitchell\Desktop>docker run --rm -it --mount type=bind,source=d:\data\storage\piecestore.db,destination=/piecestore.db sstc/sqlite3 sqlite3 /piecestore.db "PRAGMA integrity_check;"
ok
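Rather than one docker invocation per file, the same checks can be scripted in a loop. A sketch using Python’s built-in sqlite3 module, assuming all the *.db files sit in one folder (the paths here are placeholders — point the glob at your storage folder):

```python
import glob
import sqlite3

# Placeholder: create one small database so the loop has something to check.
con = sqlite3.connect("demo.db")
con.execute("CREATE TABLE t (id INTEGER)")
con.commit()
con.close()

# Run PRAGMA integrity_check against every .db file in the folder.
for path in sorted(glob.glob("*.db")):
    with sqlite3.connect(path) as db:
        result = db.execute("PRAGMA integrity_check;").fetchone()[0]
    print(path, result)
```

A healthy file prints `ok`; a damaged one prints a list of the problems found.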

…but, I found this madness from the rebuild of the orders.db this morning:

PS C:\Windows\system32> docker run --rm -it --mount type=bind,source=d:\data\storage,destination=/storage sstc/sqlite3 sh
/data # sqlite3 /storage/orders.db
SQLite version 3.28.0 2019-04-16 19:49:53
Enter ".help" for usage hints.
sqlite> .mode insert
sqlite> .output /storage/dump_all.sql
sqlite> .dump
sqlite> .exit
/data # cat /storage/dump_all.sql | grep -v TRANSACTION | grep -v ROLLBACK >/storage/dump_all_notrans.sql
/data # rm /storage/orders.db
/data # sqlite3 /storage/orders.db ".read /storage/dump_all_notrans.sql"
Error: near line 110519: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110520: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110521: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110522: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110523: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110524: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110525: NOT NULL constraint failed: order_archive_.archived_at
Error: near line 110526: NOT NULL constraint failed: order_archive_.archived_at
/data # exit
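Those NOT NULL errors mean the dump contained archive rows whose archived_at value was NULL (presumably mangled by the corruption); the sqlite3 shell skips each offending INSERT and keeps going, so the rebuilt file simply loses those eight rows. A small demonstration with Python’s built-in sqlite3 module — the table and column names mimic the log, not the exact storagenode schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Illustrative table mimicking the names in the log, not the real schema.
con.execute("CREATE TABLE order_archive_ (serial BLOB, archived_at TIMESTAMP NOT NULL)")

rows = [(b"\x01", "2020-02-24"), (b"\x02", None), (b"\x03", "2020-02-24")]
kept = 0
for row in rows:
    try:
        con.execute("INSERT INTO order_archive_ VALUES (?, ?)", row)
        kept += 1
    except sqlite3.IntegrityError as exc:
        # Mirrors "NOT NULL constraint failed: order_archive_.archived_at"
        print("skipped:", exc)

print("rows kept:", kept)  # → rows kept: 2
```

So the rebuild itself still succeeds; only the rows that fail the constraint are dropped.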

I tried to start the storagenode. It gets into an endless cycle of attempting to restart itself.

Conclusion: the original problem has not changed.

But, I did find a new error in the docker logs:

2020-02-24T07:57:55.757Z DEBUG Unrecoverable error
Error: Error during preflight check for storagenode databases: storage node preflight database error: bandwidth: expected schema does not match actual: &dbschema.Schema{
 	Tables: []*dbschema.Table{
 		&{Name: "bandwidth_usage", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "created_at", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}},
 		&{Name: "bandwidth_usage_rollups", Columns: []*dbschema.Column{&{Name: "action", Type: "INTEGER"}, &{Name: "amount", Type: "BIGINT"}, &{Name: "interval_start", Type: "TIMESTAMP"}, &{Name: "satellite_id", Type: "BLOB"}}, PrimaryKey: []string{"action", "interval_start", "satellite_id"}},
+		&{
+			Name: "test_table",
+			Columns: []*dbschema.Column{
+				&{Name: "id", Type: "int"},
+				&{Name: "name", Type: "varchar(30)", IsNullable: true},
+			},
+			PrimaryKey: []string{"id"},
+		},
 	},
 	Indexes: []*dbschema.Index{&{Name: "idx_bandwidth_usage_created", Table: "bandwidth_usage", Columns: []string{"created_at"}}, &{Name: "idx_bandwidth_usage_satellite", Table: "bandwidth_usage", Columns: []string{"satellite_id"}}},
 }

	storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:301
	main.cmdRun:198
	storj.io/storj/pkg/process.cleanup.func1.2:307
	storj.io/storj/pkg/process.cleanup.func1:325
	github.com/spf13/cobra.(*Command).execute:826
	github.com/spf13/cobra.(*Command).ExecuteC:914
	github.com/spf13/cobra.(*Command).Execute:864
	storj.io/storj/pkg/process.ExecWithCustomConfig:84
	storj.io/storj/pkg/process.ExecCustomDebug:66
	main.main:328
	runtime.main:203

So, now I am going through the same process of rebuilding the bandwidth.db file.

I almost gave up on V3 last year in April since docker drove me absolutely insane…

IF I am lucky enough to get this storagenode back up and running, I will begin the process of figuring out how to migrate over to the Windows GUI version.

Thanks for the tip!

So, the bandwidth.db database repair is now complete. I got the following result:

PS C:\Windows\system32> docker run --rm -it --mount type=bind,source=d:\data\storage,destination=/storage sstc/sqlite3 sh
/data # sqlite3 /storage/bandwidth.db
SQLite version 3.28.0 2019-04-16 19:49:53
Enter ".help" for usage hints.
sqlite> .mode insert
sqlite> .output /storage/dump_all.sql
sqlite> .dump
sqlite> .exit
/data # cat /storage/dump_all.sql | grep -v TRANSACTION | grep -v ROLLBACK >/storage/dump_all_notrans.sql
/data # rm /storage/bandwidth.db
/data # sqlite3 /storage/bandwidth.db ".read /storage/dump_all_notrans.sql"
Error: near line 8621: cannot commit - no transaction is active
/data #
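That last error is likely a side effect of the filtering step: `grep -v TRANSACTION` strips the dump’s opening `BEGIN TRANSACTION;` but leaves the trailing `COMMIT;`, and replaying a lone COMMIT fails with exactly “cannot commit - no transaction is active”. It should be harmless — without the BEGIN, every INSERT before it autocommits — but filtering `COMMIT` out as well (e.g. adding `| grep -v COMMIT` to the pipeline) avoids the message. The same idea demonstrated with Python’s built-in sqlite3 module, whose `iterdump()` produces the same transaction wrappers as `.dump`:

```python
import sqlite3

# Stand-in for the dumped database: iterdump() emits the same
# BEGIN TRANSACTION / COMMIT wrappers as the CLI's .dump command.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE bandwidth_usage (amount BIGINT)")
src.execute("INSERT INTO bandwidth_usage VALUES (42)")
src.commit()

# Filter ALL transaction-control statements, COMMIT included, so the
# replay does not end with an unmatched COMMIT.
keep = [line for line in src.iterdump()
        if not line.startswith(("BEGIN TRANSACTION", "COMMIT", "ROLLBACK"))]

dst = sqlite3.connect(":memory:")
dst.executescript("\n".join(keep))
print(dst.execute("SELECT amount FROM bandwidth_usage").fetchone()[0])  # → 42
```

Either way, the data before the COMMIT line was already written, which is why the rebuilt file can still pass its integrity check.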

OK, so I replaced the bandwidth.db file with a blank one provided by Alexey.

Started the storagenode.

BAM!

It works now! Knock on wood.

Now, watch what happens. It will probably crash on me again. :slight_smile:


Please, disable the write cache for your drive.

@Alexey Thanks for the tip. I did check…all of the caching is turned off.