Database orders.db size 700MB - is it fixable?

Hi guys,
After a month of running, I had an idea: let's check the sizes of the storage databases :scream:
One of the files (orders.db) is 700MB, which seems too big compared to the other files.
Is that a normal size for a node with around 2GB of capacity?

I checked the logs and there is a recurring error saying orders.db is malformed.
The error in the storagenode log doesn't prevent the storagenode from starting - maybe this should be raised as a bug?

I tried your database repair instructions.

The import is really slow - it reads the file at about 1 MB per minute, and I don't know why.

So I modified the last step (step 13),

sqlite3 /storage/bandwidth.db ".read /storage/dump_all_notrans.sql"

and ran it through the sqlite client like this instead - now it reads the whole 700MB in less than a minute.
(I think it has some problems when BEGIN TRANSACTION is in upper case.)

sqlite3 /storagepath/orders.db
begin transaction;
.read /storagepath/dump_all_notrans.sql
end transaction;

But in the end it reports errors at some point: some inserts with NULL values fail,
and then it fails to create the UNIQUE INDEX on the unsent_order table. One of the last lines:

CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);

Is this database really important? Can it be repaired somehow, or dropped and recreated empty?
Or should I start thinking about dropping the node?

Thanks if anyone can help…

You don’t want to use a transaction for this recovery. If a transaction runs into any errors, it will roll back everything. This is why step 11 in the instructions specifically removes the lines that open a transaction, commit and end the transaction. That way all inserts are processed right away and any inserts that result in an error are simply skipped. If it’s slow, you can try using tmpfs so that this operation can be done in RAM.
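Assuming the recovery goes roughly the way the instructions describe (dump what sqlite3 can still read, strip the transaction statements, import into a fresh file), the transaction-free rebuild can be sketched like this. The paths, the temp directory, and the tiny sample table are all placeholders standing in for the real damaged orders.db:

```shell
# Sketch of a transaction-free rebuild; sample data stands in for the
# damaged database, so this is runnable as-is for illustration.
set -e
work=$(mktemp -d)

# Stand-in for the damaged database: a tiny db we can dump.
sqlite3 "$work/orders.db" "CREATE TABLE unsent_order (satellite_id BLOB, serial_number BLOB);
INSERT INTO unsent_order VALUES (X'01', X'AA');"

# Dump everything sqlite3 can still read, then strip the transaction
# statements (case-insensitively, so lowercase variants are caught too)
# so one failing INSERT doesn't roll back the whole import.
sqlite3 "$work/orders.db" .dump \
  | grep -viE '^(BEGIN TRANSACTION|COMMIT|ROLLBACK)' \
  > "$work/dump_all_notrans.sql"

# Read the cleaned dump into a fresh database.
sqlite3 "$work/orders_new.db" ".read $work/dump_all_notrans.sql"
sqlite3 "$work/orders_new.db" "SELECT count(*) FROM unsent_order;"   # prints 1
```

Because there is no enclosing transaction, any INSERT that errors is simply skipped while the rest of the rows make it into the new file.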

If that doesn’t work, you could start with an empty db. But you should try fixing it first, since that would cause the least data loss.

To start with an empty db, rename the existing one first (so you have a backup), then recreate the schema:

sqlite3 orders.db
CREATE TABLE unsent_order (
    satellite_id  BLOB NOT NULL,
    serial_number BLOB NOT NULL,

    order_limit_serialized BLOB      NOT NULL, -- serialized pb.OrderLimit
    order_serialized       BLOB      NOT NULL, -- serialized pb.Order
    order_limit_expiration TIMESTAMP NOT NULL, -- when is the deadline for sending it

    uplink_cert_id INTEGER NOT NULL,

    FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
);
CREATE TABLE order_archive_ (
    satellite_id  BLOB NOT NULL,
    serial_number BLOB NOT NULL,

    order_limit_serialized BLOB NOT NULL,
    order_serialized       BLOB NOT NULL,

    uplink_cert_id INTEGER NOT NULL,

    status      INTEGER   NOT NULL,
    archived_at TIMESTAMP NOT NULL,

    FOREIGN KEY(uplink_cert_id) REFERENCES certificate(cert_id)
);
CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);
CREATE TABLE versions (version int, commited_at text);
CREATE INDEX idx_order_archived_at ON order_archive_(archived_at);

I tried reading in the whole dumped file, but it creates the UNIQUE INDEX at the end and fails because of some duplicates (so the index is never created). And without this index the storagenode doesn't want to start.
So I was thinking like this… (per your suggestion)

  1. Create the empty DBs (from above)
  2. Create the constraints
  3. Attempt to run all the dumped inserts

If any insert fails, then… well, tough life… hopefully I won't get DQ'ed

That should work. You can also edit the file and move the creation of the index to the top - above the inserts, but below the CREATE TABLE statements. Then just read the file into the db like before. As long as you don't use a transaction, some inserts will then fail, but you can ignore those. That should work just fine.
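That reordering can be sketched with standard tools; the sample dump below is a stand-in for the real dump_all_notrans.sql, and the filenames are hypothetical. The idea: pull the CREATE UNIQUE INDEX statement out of the tail of the dump and re-emit it just before the first INSERT, so duplicate rows fail on insert (and are skipped) instead of breaking the index creation at the end:

```shell
# Sketch: move the index creation above the INSERTs in a dump file.
set -e
work=$(mktemp -d)

# Stand-in for the real dump (note the deliberate duplicate row).
cat > "$work/dump_all_notrans.sql" <<'EOF'
CREATE TABLE unsent_order (satellite_id BLOB NOT NULL, serial_number BLOB NOT NULL);
INSERT INTO unsent_order VALUES(X'01',X'AA');
INSERT INTO unsent_order VALUES(X'01',X'AA');
CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);
EOF

idx='CREATE UNIQUE INDEX idx_orders ON unsent_order(satellite_id, serial_number);'

# Remove the index statement from the tail of the dump...
grep -vF "$idx" "$work/dump_all_notrans.sql" > "$work/body.sql"

# ...and re-emit it right before the first INSERT, after the CREATE TABLEs.
awk -v idx="$idx" '!done && /^INSERT/ { print idx; done=1 } { print }' \
  "$work/body.sql" > "$work/dump_reordered.sql"
```

Reading dump_reordered.sql into a fresh db (without a transaction) then rejects the duplicate INSERT at insert time while everything else goes through.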


The only errors were the last 10 or so inserts at the end of the file, and some duplicates in the order_archive_ table.

Thanks for the tip about tmpfs! That really sped up the inserts! (I'm a Linux beginner.)

Fingers crossed - let's restart the node.


Good luck! And please report back how it goes.


Well, the node started…
There was only one error after the start (I didn't restart it or do docker rm / run) - should I do rm & run?
ERROR piecestore:cache error getting current space used calculation: {"error": "context canceled"}

Wondering if it is bad :frowning:

No, start should be just fine.

That error may be a timeout, but it's probably a good idea to stop the node and check the integrity of piece_spaced_used.db as well. While you're at it, you might as well check all the DBs. Do you know what might have caused the corruption in the first place?
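Checking them all takes one loop over SQLite's built-in integrity check. The directory below is a throwaway example (with a sample healthy db created on the spot, so the sketch is runnable as-is); on a real node you would point it at your storage path instead:

```shell
# Sketch: run PRAGMA integrity_check on every .db file in a directory.
set -e
dir=$(mktemp -d)                                # stand-in for the storage path
sqlite3 "$dir/orders.db" "CREATE TABLE t(x);"   # sample healthy database

for db in "$dir"/*.db; do
  echo "== $db"
  sqlite3 "$db" "PRAGMA integrity_check;"       # a healthy db prints "ok"
done
```

Any database that prints something other than a single "ok" is a candidate for the same dump-and-rebuild treatment as orders.db.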

It's the node that was already almost dead once (it had some other DBs corrupted) - it took me a whole night to fix back then, greenie that I was, haha :wink: Now it's "error? pffft" :wink:

Today I decided to restart Linux, so I shut down the node and ran "shutdown now" - possibly it didn't have time to flush all the cache to the disk. I've now found out that write caching was enabled for this disk and have turned it off.

I will check the other DBs before restarting, then :scream:

After checking the integrity of all the DBs, I started the node and it works! Phew! Now I need to set up some tools to check for errors and mail them to me somewhere :scream:

@BrightSilence THANK YOU FOR HELP!


Another node saved! :smiley:
Glad you didn’t give up on it.