Ordersdb error: database is locked

The size difference isn't really worth wasting your time on, and you could possibly break your node in the process.

The size of the file is not the only change. The database itself contains leftover data from deleted rows… the VACUUM function defragments the database file and reorganizes the data for more efficient disk space usage. So after a VACUUM operation, the database should open faster and operate with fewer errors.
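
For reference, a minimal sketch of running it with the sqlite3 CLI (paths and the container name are illustrative; stop the node first so nothing else holds the database open, and note that VACUUM needs free disk space up to roughly the size of the database):

    docker stop storagenode                         # make sure nothing is using the database
    sqlite3 /path/to/storage/orders.db "VACUUM;"    # rewrites the file, dropping free pages
    docker start storagenode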

This is true, but telling people who don't know what they're doing to do this is a recipe for disaster.

We are here to try to find a solution to the “database is locked” problem. Each member of this forum can propose ideas and suggestions; this is Engineering Discussions. Please stay focused on the topic.

But isn't the issue that it's reading from so many databases? Wouldn't it be more effective to have fewer databases, combining them into maybe 4, to save I/O?

We used to have one database for everything and got “database is locked” much more often.
SQLite is just not well suited to parallel access…

Is there a way to set a timeout so it processes one database at a time instead of all of them at once?

That's not how it works. You can read this for details: https://www.sqlite.org/lockingv3.html
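
As a quick illustration of the point: SQLite locks the whole database file for a write, not individual tables or rows, so a second writer is refused no matter how the data is split up. A sketch with two sqlite3 CLI sessions on a test copy (file name illustrative):

    # terminal 1: take the write (RESERVED) lock and keep the shell open
    $ sqlite3 orders.db
    sqlite> BEGIN IMMEDIATE;

    # terminal 2: any other writer is refused while that lock is held
    $ sqlite3 orders.db
    sqlite> BEGIN IMMEDIATE;
    -- fails with: database is locked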

Yes, pick a faster drive :smiley:
If serious: just restart the storagenode and let it run; it will fix itself.
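
On a Docker setup that's simply (container name assumed):

    docker restart storagenode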

What does filefrag orders.db say? I had a lot of these errors until I defragmented it.
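
For anyone unfamiliar, a sketch of checking and fixing this on Linux (paths and output illustrative; e4defrag only works on ext4, and the node should be stopped first):

    $ filefrag /path/to/storage/orders.db
    /path/to/storage/orders.db: 8421 extents found   # a high extent count means heavy fragmentation

    $ e4defrag /path/to/storage/orders.db            # rewrite the file contiguously (ext4 only)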

Is there any utility I can use on Windows to check?

A quick Google search gave me this
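
One tool that can do this on Windows is Sysinternals Contig (a suggestion on my part, not necessarily what that search turned up); it can report and fix fragmentation of a single file:

    C:\> contig -a orders.db    # analyze only: report how many fragments the file is in
    C:\> contig orders.db       # defragment the file (stop the node first)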

Hi. I noticed in the dashboard today that my audit checks on all satellites have dropped by about 2% or more. I looked in the log and found the following:

2020-04-21T18:38:34.747Z ERROR piecestore failed to add order {“error”: “ordersdb error: database is locked”, “errorVerbose”: “ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:53\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:714\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:443\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:215\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:987\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:105\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:56\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:93\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}

You can see that this “database is locked” error suddenly appears from time to time; sometimes it comes back, sometimes not, and lately it has not appeared again.

What can I do about it, and what could be the cause?

orders.db doesn't affect audits, but it does affect payout.
How is your HDD connected?

Thanks for moving the thread.

Ah okay, both of those are bad ^^
My HDD (Seagate Expansion) is connected via USB 3.

I've seen that too! I assumed it was because of super high disk I/O utilization on my USB 3.0 connected HDD. Each time I've seen it, though, it does eventually complete the task. From what I've seen, I believe it mainly occurs when trying to “rollup” the “Saltlake orders”; since there's SO much data coming in from that satellite, I figured my HDD I/O was so high that the node had trouble finding a point in time when the DB wasn't being used to finish the task.

Perhaps my assumption is totally incorrect though.

Thanks for sharing your observation. “Nice” to hear that I am not alone with this problem.

I will continue to observe this; I noticed it for the first time today.
But I can't get disqualified (DQ) for this, can I? That's the most important thing ^^

No, your node can't be disqualified because orders.db is locked.
But it can be, if it fails audits. Search your logs for failed and GET_AUDIT.
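
For example, on a Docker node (container name assumed, logs going to Docker's default driver):

    docker logs storagenode 2>&1 | grep GET_AUDIT | grep failed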

Btw, my nodes have the “ordersdb error: database is locked” as well.

I'm on the latest version. I tried VACUUM and integrity checks; all completed with no integrity errors. However, the error messages remain after 5 hours of uptime.
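
For anyone wanting to run the same check, a sketch with the sqlite3 CLI (node stopped, path illustrative):

    sqlite3 /path/to/storage/orders.db "PRAGMA integrity_check;"   # prints "ok" when the file is consistent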

2020-06-10T18:59:48.871Z ERROR piecestore failed to add order {“error”: “ordersdb error: database is locked”, “errorVerbose”: “ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:657\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload.func5:304\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:320\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:996\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}

I am starting to see the same. Running on Docker on Linux, latest tag. The orders, bandwidth, and used_serials databases are constantly seeing lock timeouts. Vacuuming all of them solves the problem for about 30 minutes, then it resurfaces. This node had been running for months without issue; then all of a sudden this started happening. I checked my logs and it started about a week ago.
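
A sketch of vacuuming them all in one go (paths and container name illustrative; the node must be stopped while the files are vacuumed):

    docker stop storagenode
    for db in /path/to/storage/*.db; do
        sqlite3 "$db" "VACUUM;"
    done
    docker start storagenode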