Database disk is malformed

Hi. My node was updated 6 hours ago and the Docker container never starts. The log now says:

Error: Error creating tables for master database on storagenode: migrate: creating version table failed: migrate: database disk image is malformed
storj.io/storj/private/migrate.(*Migration).Run:150
storj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:285
main.cmdRun:188
storj.io/storj/pkg/process.cleanup.func1.2:312
storj.io/storj/pkg/process.cleanup.func1:330
github.com/spf13/cobra.(*Command).execute:826
github.com/spf13/cobra.(*Command).ExecuteC:914
github.com/spf13/cobra.(*Command).Execute:864
storj.io/storj/pkg/process.ExecWithCustomConfig:84
storj.io/storj/pkg/process.ExecCustomDebug:66
main.main:328
runtime.main:203

Using Unraid v6.7.2.
Now it’s going to take me a few days to load a new database!
What is going on?

Anyone know how to use tmpfs?

I have upgraded Unraid to its latest version now. I really need help using tmpfs so that I can speed things up.

This is described in the Docker documentation: https://docs.docker.com/storage/tmpfs/
In short:

docker run -it --rm --mount type=bind,source=/mnt/user/Storj/storagenode/storage,destination=/data --mount type=tmpfs,destination=/ramdisk sstc/sqlite3 sh

Following the instructions, you dump the data to the /ramdisk folder instead of /data, and load it back from there too.
However, you need free RAM of roughly twice the database size to hold both the dump and the rebuilt database.
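As a rough sketch of the dump-and-rebuild via the ramdisk (this is not the official guide; `recover_db` is a hypothetical helper, and the /data and /ramdisk paths are assumptions based on the container command above):

```shell
# Sketch: rebuild one SQLite database via a fast scratch directory
# (e.g. the /ramdisk tmpfs mount). recover_db is a hypothetical helper,
# not part of any official tooling.
recover_db() {
    db=$1       # path to the damaged database, e.g. /data/bandwidth.db
    scratch=$2  # fast scratch directory, e.g. /ramdisk
    # Dump everything SQLite can still read into the scratch directory
    sqlite3 "$db" .dump > "$scratch/dump_all.sql"
    # Strip transaction statements so a partial dump still loads
    grep -v -e '^BEGIN TRANSACTION' -e '^COMMIT' -e '^ROLLBACK' \
        "$scratch/dump_all.sql" > "$scratch/dump_all_notrans.sql"
    # Rebuild a fresh database on the scratch dir, then move it back
    rebuilt=$scratch/$(basename "$db")
    rm -f "$rebuilt"
    sqlite3 "$rebuilt" ".read $scratch/dump_all_notrans.sql"
    mv "$rebuilt" "$db"
}

# Inside the container from the command above you would run, e.g.:
# recover_db /data/bandwidth.db /ramdisk
```

Keeping both the dump and the rebuilt database on the tmpfs mount is what gives the speedup, which is also why you need twice the database size in free RAM.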

Ah ok. I have now tried using tmpfs and it was not faster :\

Did you place both the database and the dump on /ramdisk?

I'm not that good with commands. Do I need to change the commands in the instructions?

If so, can you show me how?

I ran the instructions on the small files and this was the outcome:

/data # sqlite3 /storage/notifications.db ".read /storage/dump_all_notrans.sql"
Error: near line 14: cannot commit - no transaction is active

/data # sqlite3 /storage/orders.db ".read /storage/dump_all_notrans.sql"
Error: near line 9: cannot commit - no transaction is active

/data # sqlite3 /storage/piece_expiration.db ".read /storage/dump_all_notrans.sql"
Error: near line 152: cannot commit - no transaction is active

/data # sqlite3 /storage/piece_spaced_used.db ".read /storage/dump_all_notrans.sql"
Error: near line 19: cannot commit - no transaction is active

/data # sqlite3 /storage/used_serial.db ".read /storage/dump_all_notrans.sql"
Error: near line 108216: cannot commit - no transaction is active

You should execute this procedure for each malformed database. Replace bandwidth.db from the example with each affected database and go through all the steps.
Please do not try to load the data from one database into any of the others; you will break everything. Each database must have its own /storage/dump_all_notrans.sql.
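To keep each dump tied to its own database, something like the following loop can help (a hedged sketch, not official tooling; the database names come from this thread):

```shell
# Rebuild each listed database from its OWN stripped dump; the dump file
# is named after the database and deleted afterwards, so it can never be
# loaded into the wrong one. (Hypothetical helper, not official tooling.)
rebuild_each() {
    for db in "$@"; do
        sqlite3 "$db" .dump \
            | grep -v -e '^BEGIN TRANSACTION' -e '^COMMIT' -e '^ROLLBACK' \
            > "$db.notrans.sql"
        rm "$db"                               # drop the malformed file
        sqlite3 "$db" ".read $db.notrans.sql"  # rebuild from its own dump
        rm "$db.notrans.sql"                   # never reuse this dump
    done
}

# e.g. inside the container:
# rebuild_each /storage/bandwidth.db /storage/orders.db
```

Deleting the dump before moving on to the next database is what prevents accidentally loading one database's data into another.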

You can ignore those errors.

If you loaded the right file to the right database, then try to start the storagenode.

I don't think I quite understand that.
I have executed the instructions for each malformed database. When I was done with one database, I deleted dump_all_notrans.sql and continued to the next one. I tried to start the node afterwards, but it just stops right away.
Could you maybe take a look over TeamViewer?

Please post the new error.

It's still loading the new bandwidth database.

/data # sqlite3 /storage/bandwidth.db ".read /storage/dump_all_notrans.sql"
Error: near line 66650: cannot commit - no transaction is active

Now I tried to load the pieceinfo database, but it takes one second and this pops up:

/data # sqlite3 /storage/pieceinfo.db ".read /storage/dump_all_notrans.sql"
Error: near line 2: cannot commit - no transaction is active

and the file size is 0 KB.

Here is the log when I try to start the node:

Error: Error creating tables for master database on storagenode: migrate: no such table: main.order_archive_
storj.io/storj/private/migrate.SQL.Run:251
storj.io/storj/private/migrate.(*Migration).Run.func1:171
storj.io/storj/private/dbutil/txutil.withTxOnce:67
storj.io/storj/private/dbutil/txutil.WithTx:36
storj.io/storj/private/migrate.(*Migration).Run:170
storj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:285
main.cmdRun:188
storj.io/storj/pkg/process.cleanup.func1.2:312
storj.io/storj/pkg/process.cleanup.func1:330
github.com/spf13/cobra.(*Command).execute:826
github.com/spf13/cobra.(*Command).ExecuteC:914
github.com/spf13/cobra.(*Command).Execute:864
storj.io/storj/pkg/process.ExecWithCustomConfig:84
storj.io/storj/pkg/process.ExecCustomDebug:66
main.main:328
runtime.main:203

  1. Stop the storagenode.
  2. Execute with sqlite3 (local or the docker version as in the article above):
sqlite3 pieceinfo.db
CREATE TABLE pieceinfo_ (
                                                satellite_id     BLOB      NOT NULL,
                                                piece_id         BLOB      NOT NULL,
                                                piece_size       BIGINT    NOT NULL,
                                                piece_expiration TIMESTAMP,

                                                order_limit       BLOB    NOT NULL,
                                                uplink_piece_hash BLOB    NOT NULL,
                                                uplink_cert_id    INTEGER NOT NULL,

                                                deletion_failed_at TIMESTAMP,
                                                piece_creation TIMESTAMP NOT NULL,

                                                FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
                                        );
CREATE UNIQUE INDEX pk_pieceinfo_ ON pieceinfo_(satellite_id, piece_id);
CREATE INDEX idx_pieceinfo__expiration ON pieceinfo_(piece_expiration) WHERE piece_expiration IS NOT NULL;
.exit
sqlite3 orders.db
CREATE TABLE unsent_order (
                                                satellite_id  BLOB NOT NULL,
                                                serial_number BLOB NOT NULL,

                                                order_limit_serialized BLOB      NOT NULL, -- serialized pb.OrderLimit
                                                order_serialized       BLOB      NOT NULL, -- serialized pb.Order
                                                order_limit_expiration TIMESTAMP NOT NULL, -- when is the deadline for sending it

                                                uplink_cert_id INTEGER NOT NULL,

                                                FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
                                        );
CREATE TABLE order_archive_ (
                                                satellite_id  BLOB NOT NULL,
                                                serial_number BLOB NOT NULL,

                                                order_limit_serialized BLOB NOT NULL,
                                                order_serialized       BLOB NOT NULL,

                                                uplink_cert_id INTEGER NOT NULL,

                                                status      INTEGER   NOT NULL,
                                                archived_at TIMESTAMP NOT NULL,

                                                FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
                                        );
CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);
CREATE TABLE versions (version int, commited_at text);
CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);
.exit
  3. Try to start the storagenode.
  4. Look into the logs.

The log shows this

Error: Error creating tables for master database on storagenode: migrate: creating version table failed: migrate: database disk image is malformed
storj.io/storj/private/migrate.(*Migration).Run:150
storj.io/storj/storagenode/storagenodedb.(*DB).CreateTables:285
main.cmdRun:188
storj.io/storj/pkg/process.cleanup.func1.2:312
storj.io/storj/pkg/process.cleanup.func1:330
github.com/spf13/cobra.(*Command).execute:826
github.com/spf13/cobra.(*Command).ExecuteC:914
github.com/spf13/cobra.(*Command).Execute:864
storj.io/storj/pkg/process.ExecWithCustomConfig:84
storj.io/storj/pkg/process.ExecCustomDebug:66
main.main:328
runtime.main:203

Did I do something wrong?

My node has been offline since the last update (almost 4 days now).
Won't it be disqualified soon?

No.
Please check the integrity of all the other databases.
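One way to check every database at once is SQLite's built-in integrity check. A sketch (`check_integrity` is a hypothetical helper; /storage is the storage directory from this thread):

```shell
# check_integrity: run SQLite's built-in integrity check on every .db
# file in a directory and print one line per database. Healthy databases
# report "ok"; malformed ones print error details instead.
check_integrity() {
    for db in "$1"/*.db; do
        result=$(sqlite3 "$db" "PRAGMA integrity_check;" 2>&1 | head -n 1)
        echo "$(basename "$db"): $result"
    done
}

# e.g. inside the container: check_integrity /storage
```

Any database that does not report "ok" needs the dump-and-rebuild procedure above.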

By the way, when I check for errors, only the bandwidth and orders databases are malformed.
Should I just focus on those two?