FATAL Unrecoverable error: Error during preflight check for storagenode databases: "bandwidth"

Hi, I have been at this all day. For some reason, after a reboot Storj stopped working, and I tracked it down as far as the bandwidth db.

I'm using the Win 10 GUI.

I have completed a fix as per the instructions in "How to fix a database disk image is malformed".
All DBs now show as OK when tested per that guide.
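For anyone who lands here first: if I remember that guide correctly, the "tested as OK" part boils down to SQLite's built-in integrity check, run against each .db file while the node is stopped. A minimal sketch (the path is just an example, use your own data location):

REM run with the node stopped; repeat for each database file
sqlite3 "D:\your-data-location\bandwidth.db" "PRAGMA integrity_check;"

It should print a single "ok" for a healthy file.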

This is the current failure I am getting:

2020-09-27T20:36:41.808+1000 INFO db.migration Database Version {“version”: 45}
2020-09-27T20:36:42.603+1000 FATAL Unrecoverable error {“error”: “Error during preflight check for storagenode databases: preflight: database “bandwidth”: expected schema does not match actual: &dbschema.Schema{\n \tTables: *dbschema.Table{&{Name: “bandwidth_usage”, Columns: *dbschema.Column{&{Name: “action”, Type: “INTEGER”}, &{Name: “amount”, Type: “BIGINT”}, &{Name: “created_at”, Type: “TIMESTAMP”}, &{Name: “satellite_id”, Type: “BLOB”}}}, &{Name: “bandwidth_usage_rollups”, Columns: *dbschema.Column{&{Name: “action”, Type: “INTEGER”}, &{Name: “amount”, Type: “BIGINT”}, &{Name: “interval_start”, Type: “TIMESTAMP”}, &{Name: “satellite_id”, Type: “BLOB”}}, PrimaryKey: string{“action”, “interval_start”, “satellite_id”}}},\n- \tIndexes: *dbschema.Index{\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: \"\">,\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: \"\">,\n- \t},\n+ \tIndexes: nil,\n }\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:437\n\tmain.cmdRun:194\n\tstorj.io/private/process.cleanup.func1.4:353\n\tstorj.io/private/process.cleanup.func1:371\n\tgithub.com/spf13/cobra.(*Command).execute:840\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:945\n\tgithub.com/spf13/cobra.(*Command).Execute:885\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:66\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”, “errorVerbose”: “Error during preflight check for storagenode databases: preflight: database “bandwidth”: expected schema does not match actual: &dbschema.Schema{\n \tTables: *dbschema.Table{&{Name: “bandwidth_usage”, Columns: *dbschema.Column{&{Name: “action”, Type: “INTEGER”}, &{Name: “amount”, Type: “BIGINT”}, &{Name: “created_at”, Type: “TIMESTAMP”}, &{Name: “satellite_id”, Type: “BLOB”}}}, &{Name: “bandwidth_usage_rollups”, Columns: *dbschema.Column{&{Name: “action”, Type: “INTEGER”}, &{Name: “amount”, Type: “BIGINT”}, &{Name: “interval_start”, Type: “TIMESTAMP”}, &{Name: “satellite_id”, Type: “BLOB”}}, PrimaryKey: string{“action”, “interval_start”, “satellite_id”}}},\n- \tIndexes: *dbschema.Index{\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: \"\">,\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: \"\">,\n- \t},\n+ \tIndexes: nil,\n }\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:437\n\tmain.cmdRun:194\n\tstorj.io/private/process.cleanup.func1.4:353\n\tstorj.io/private/process.cleanup.func1:371\n\tgithub.com/spf13/cobra.(*Command).execute:840\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:945\n\tgithub.com/spf13/cobra.(*Command).Execute:885\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:66\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\n\tmain.cmdRun:196\n\tstorj.io/private/process.cleanup.func1.4:353\n\tstorj.io/private/process.cleanup.func1:371\n\tgithub.com/spf13/cobra.(*Command).execute:840\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:945\n\tgithub.com/spf13/cobra.(*Command).Execute:885\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:66\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

Any help would be appreciated

Thank you

Is there a way to fix this? I'm happy to dump any bandwidth earnings if a simple delete of the bandwidth db will fix this.

Thank you

I really wish these errors would just point out the exact mismatch rather than dumping a fairly hard-to-read schema diff.
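For what it's worth, the giveaway in that dump is the - / + part at the end: the expected schema includes the two bandwidth_usage indexes, while the actual file has Indexes: nil. If you want to see for yourself what a db file really contains, the sqlite3 shell can list it (example path, run with the node stopped):

REM show the tables and the indexes on bandwidth_usage that the file actually has
sqlite3 /your/path/to/bandwidth.db ".tables"
sqlite3 /your/path/to/bandwidth.db ".indexes bandwidth_usage"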

Alright, let's start over with a clean one. This will remove bandwidth stats but shouldn't impact payout (bandwidth orders for payout are stored and sent from elsewhere).

  1. Stop the node

  2. Rename the bandwidth.db file to something like bandwidth.db-bak

  3. Run sqlite3 with your path:

sqlite3 /your/path/to/bandwidth.db

  4. Create the new empty tables:

CREATE TABLE bandwidth_usage (
    satellite_id  BLOB      NOT NULL,
    action        INTEGER   NOT NULL,
    amount        BIGINT    NOT NULL,
    created_at    TIMESTAMP NOT NULL
);
CREATE TABLE bandwidth_usage_rollups (
    interval_start  TIMESTAMP NOT NULL,
    satellite_id    BLOB      NOT NULL,
    action          INTEGER   NOT NULL,
    amount          BIGINT    NOT NULL,
    PRIMARY KEY ( interval_start, satellite_id, action )
);
CREATE INDEX idx_bandwidth_usage_satellite ON bandwidth_usage(satellite_id);
CREATE INDEX idx_bandwidth_usage_created   ON bandwidth_usage(created_at);
  5. Exit sqlite:
.quit
  6. Start the node.
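Optional check between steps 5 and 6: the same sqlite3 shell can dump the schema of the new file, which should show both CREATE TABLE statements and both idx_bandwidth_usage_* indexes from step 4:

REM dump the schema of the freshly created database
sqlite3 /your/path/to/bandwidth.db ".schema"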

Sir, thank you very much.

Worked like a charm

cheers
Donkey

Hello

I get the same error

2021-06-18T11:41:59.208+0300 FATAL Unrecoverable error {“error”: “Error during preflight check for storagenode databases: preflight: database “bandwidth”: expected schema does not match actual: &dbschema.Schema{\n- \tTables: *dbschema.Table{\n- \t\t(\n- \t\t\ts”""\n- \t\t\tName: bandwidth_usage\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: created_at\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t\t(\n- \t\t\ts"""\n- \t\t\tName: bandwidth_usage_rollups\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: interval_start\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t},\n+ \tTables: nil,\n- \tIndexes: *dbschema.Index{\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: \"\">,\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: \"\">,\n- \t},\n+ \tIndexes: nil,\n }\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:405\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:352\n\tmain.cmdRun:208\n\tstorj.io/private/process.cleanup.func1.4:363\n\tstorj.io/private/process.cleanup.func1:381\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:64\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57", “errorVerbose”: “Error during preflight check for storagenode databases: preflight: database “bandwidth”: expected schema does not match actual: &dbschema.Schema{\n- \tTables: *dbschema.Table{\n- \t\t(\n- \t\t\ts”""\n- \t\t\tName: bandwidth_usage\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: created_at\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t\t(\n- \t\t\ts"""\n- \t\t\tName: bandwidth_usage_rollups\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: interval_start\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t},\n+ \tTables: nil,\n- \tIndexes: *dbschema.Index{\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: \"\">,\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: \"\">,\n- \t},\n+ \tIndexes: nil,\n 
}\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:405\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:352\n\tmain.cmdRun:208\n\tstorj.io/private/process.cleanup.func1.4:363\n\tstorj.io/private/process.cleanup.func1:381\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:64\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\n\tmain.cmdRun:210\n\tstorj.io/private/process.cleanup.func1.4:363\n\tstorj.io/private/process.cleanup.func1:381\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:64\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

However, when I try your solution I get:

F:\Storj>sqlite3 F:\Storj\bandwidth.db
‘sqlite3’ is not recognized as an internal or external command,
operable program or batch file.

Please help!

I managed to get it going by downloading sqlite from this site:

https://www.sqlite.org/download.html
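For anyone else hitting the "not recognized" error: the sqlite-tools zip from that page contains sqlite3.exe, and you don't have to add it to PATH - you can call it with an explicit path. A hypothetical example, assuming it was extracted to C:\sqlite:

REM adjust both paths to wherever sqlite3.exe and your database actually live
C:\sqlite\sqlite3.exe F:\Storj\bandwidth.db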

However, I think I lost some money. Before all this I had over $2 here, but now I have $1.30:
[node dashboard screenshot]

You haven't lost the earnings, just the local stats on your node. You get paid based on satellite stats, which will still be intact. So nothing to worry about.


Hello

I get the same error too:

2023-02-03T21:05:03.431+0200 INFO db.migration Database Version {“version”: 54}
2023-02-03T21:05:03.460+0200 FATAL Unrecoverable error {“error”: “Error during preflight check for storagenode databases: preflight: database "piece_expiration": failed create test_table: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:426\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:360\n\tmain.cmdRun:241\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “errorVerbose”: “Error during preflight check for storagenode databases: preflight: database "piece_expiration": failed create test_table: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:426\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:360\n\tmain.cmdRun:241\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n\tmain.cmdRun:243\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}

I ran a check with chkdsk d: /f and chkdsk d: /f /b, but it did not help :frowning:


PS: don't worry, I'm Ukrainian )
UPD: This article helped me; I got the node started using the substitution method: https://support.storj.io/hc/en-us/articles/4403032417044-How-to-fix-database-file-is-not-a-database-error

This is usually related to filesystem corruption.
I would recommend running chkdsk one more time while the storagenode is stopped, because the latest Windows cannot fix all errors in a single attempt, so it may find and fix more errors.
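On a Windows GUI node that would look roughly like this from an elevated command prompt (assuming the default service name storagenode and the data on D:):

REM stop the node, check and fix the volume, then start the node again
net stop storagenode
chkdsk D: /f
net start storagenode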

I'm glad that you found the guide useful for recovering the database!


Thank you for answering, that's very nice! I ran chkdsk d: /f five times and chkdsk d: /f /b once (it fixed/restored something), but it did not help. So I had to create a folder called D, install a new node there, take the newly created files from that folder, and copy them to the root of drive D. Then I started the node and off it went )))


I want to add more information. The next day I had to reboot the server, and after turning it on I got the error again: failed create test_table: disk I/O error. I ran chkdsk d: /f once, something got fixed, and the node started.

Still, the problem remained. :frowning:
To preserve the node, which files can be deleted? I'm ready for anything.

2023-02-05T23:12:14.297+0200 FATAL Unrecoverable error {“error”: “Error during preflight check for storagenode databases: preflight: database "satellites": failed create test_table: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:426\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:360\n\tmain.cmdRun:241\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “errorVerbose”: “Error during preflight check for storagenode databases: preflight: database "satellites": failed create test_table: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:426\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:360\n\tmain.cmdRun:241\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n\tmain.cmdRun:243\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}

There is still something wrong with your HDD. You should run chkdsk with the fix option again.

Did you swap to a different HDD or just move the folder on the same physical disk drive?


I tried to create a fresh setup, a new node, in a folder on the same disk, so I could take the new DBs from there. When I run it, I get the following error:

2023-02-05T23:47:33.638+0200 FATAL Unrecoverable error {“error”: “Error starting master database on storagenode: database: bandwidth opening file "D:\\bandwidth.db" failed: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:331\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:308\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:283\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:250\n\tmain.cmdRun:193\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “errorVerbose”: “Error starting master database on storagenode: database: bandwidth opening file "D:\\bandwidth.db" failed: disk I/O error\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:331\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:308\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:283\n\tstorj.io/storj/storagenode/storagenodedb.OpenExisting:250\n\tmain.cmdRun:193\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n\tmain.cmdRun:195\n\tstorj.io/private/process.cleanup.func1.4:377\n\tstorj.io/private/process.cleanup.func1:395\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:92\n\tstorj.io/private/process.ExecWithCustomConfig:74\n\tstorj.io/private/process.Exec:64\n\tmain.(*service).Execute.func1:61\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}

PS: A day ago I did install a new node there, took the newly created files from the D folder, and copied them to the root of drive D; the next day the error happened again.

This error only occurs with a physical hardware issue. Maybe the drive, maybe a cable, as shown by the fact that you replaced the files and still get the same error. You should try a new physical hard drive, or run a thorough surface/sector scan of the entire D: drive.
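If you prefer a built-in tool for that, chkdsk's /r switch does a surface scan: it locates bad sectors and tries to recover readable data (it includes the /f fixes and can take many hours on a large drive; run it with the node stopped):

REM full surface scan of D: - expect this to run for a long time
chkdsk D: /r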


What is the procedure to check this? I ran chkdsk d: /f /b but there were no errors on the drive itself. Maybe there are some other test commands?
Thank you!

Use CrystalDiskInfo to check for SMART errors on the physical disk drive - S.M.A.R.T. Information - Crystal Dew World [en]
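As a quick built-in sanity check alongside CrystalDiskInfo, Windows can also report the drive's overall SMART verdict (much less detailed, and wmic is deprecated, but it is still present on Windows 10):

REM "OK" means no predicted failure; "Pred Fail" means SMART expects the drive to fail
wmic diskdrive get model,status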

Then run a tool for a surface scan or surface test. I don't have one to recommend, but a Google search showed these free options -


If the disk shows everything is OK after the test, which files can be deleted without losing the node itself?

Not a single file should be deleted.
How is this disk connected? Does it have enough power?

Please also check permissions: the owner should be SYSTEM (or СИСТЕМА, if you use a localized version), and this user must have full privileges for all data inside the data location. If it does not, you can apply them recursively.
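A minimal sketch of applying that recursively from an elevated command prompt, assuming the data lives in a folder like D:\your-data-location (hypothetical path, adjust it):

REM grant the SYSTEM account full control, inherited by all subfolders and files
REM *S-1-5-18 is the well-known SID for SYSTEM, so it also works on localized Windows
icacls "D:\your-data-location" /grant "*S-1-5-18:(OI)(CI)F" /T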

Victoria found a problem on the disk. What do you think I should do: transfer the data, or try to repair it with Victoria?