Database bandwidthdb is locked

failed to add bandwidth usage  {"error": "bandwidthdb: database is locked"}

What could be the reason for this? The last time I removed the container and restarted it was 30 hours ago.

usually happens with high disk latency…

so usually that would mean that your HDD is overworked, or there may be an issue with your drive…

or at least that's how I remember it… I saw it quite recently myself, also due to an overload; pretty sure it went away again when the HDD had less work to do.
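
For what it's worth, here is a minimal sketch in Go of why a slow disk turns into this exact error (not Storj's code; the driver choice, file name, and timeout value are assumptions for the demo). SQLite allows only one writer at a time, and a writer that cannot acquire the lock within its busy timeout gives up with "database is locked":

// Minimal sketch (not Storj code): one connection holds a write transaction
// while another tries to write with a short busy timeout, standing in for a
// write that is stalled behind a slow disk.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // driver choice is an assumption for this demo
)

func main() {
	// Two separate connections to the same file; _busy_timeout (milliseconds)
	// is how long a writer waits for the lock before giving up.
	writer, _ := sql.Open("sqlite3", "file:demo.db?_busy_timeout=100")
	blocker, _ := sql.Open("sqlite3", "file:demo.db?_busy_timeout=100")
	defer writer.Close()
	defer blocker.Close()

	writer.Exec(`CREATE TABLE IF NOT EXISTS usage (amount INTEGER)`)

	// The "slow" writer: an open transaction that holds the write lock, the
	// way a write stalled on an overloaded HDD would.
	tx, _ := blocker.Begin()
	tx.Exec(`INSERT INTO usage (amount) VALUES (1)`)

	// The second writer waits out its 100 ms busy timeout, then fails with
	// the same error the storagenode logs show.
	_, err := writer.Exec(`INSERT INTO usage (amount) VALUES (2)`)
	fmt.Println(err) // "database is locked"

	tx.Rollback()
}

Under normal latency the blocking write finishes long before the timeout expires; on an overworked drive it doesn't, and the node logs the failure instead.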

Ah I see. The drive is very busy at the moment, so that could explain it. I'll check again when there is less load.


not sure if there are any long-term effects of this error, though…
I think bandwidth.db is part of what keeps track of usage, and thus indirectly payment, but I don't have much clue about how that works exactly.

if this happens often, or you need it to run like this, it might be wise to reduce the IOPS for the other services that run on the HDD… I assume you are running more than one…

it's not a good sign that stuff isn't written / can't be accessed because the disk is too slow…
stuff like caches might also help mitigate it if this is more than a one-time deal

I have seen this error more often lately too. However, my DBs are on an idle SSD, so overload can't be the problem.
Just today it happened 3 times on 2 nodes.

Hmmm, that's odd… to be fair, though, I haven't really kept a close eye on it…

My Proxmox died, and then I couldn't get my L2ARC / SLOG SSD drives installed, because it was a huge headache and I wasn't sure my Proxmox would reboot…
so my system was really overloaded, and when I was checking my logs, the bandwidth.db locked error kept showing up, basically all the time…

so I turned all my crazy sync=always stuff off and it basically just went away…
I can't remember ever seeing it without it being caused by latency or some such…

I should really get my logging set up again lol… at present it's just throwing the logs out aside from like an hour back…
so I cannot even really check to see whether I have the same issue happening…
I don't see it if I go into the live logs, though… when the system is overloaded it happens like all the time… just spamming the log.

I am seeing this error very frequently. It started happening after I changed my internet service provider, and I have a faster connection now. The error occurs multiple times an hour.

2022-07-26T15:59:11.716Z ERROR piecestore failed to add bandwidth usage {"Process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:723\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:435\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
2022-07-26T15:59:20.695Z ERROR piecestore failed to add bandwidth usage {"Process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:723\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:437\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
2022-07-26T15:59:30.126Z ERROR piecestore failed to add bandwidth usage {"Process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:723\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func6:665\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:686\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

I guess that's mostly the case. So you want to check whether your current workload is higher than normal, which you indicate it is, since you have a faster internet connection now. Other pain points could be an SMR drive or an otherwise slow drive (less than 7200 RPM).

I make $10 a month, and I am in my 10th month on a 4 TB drive. Not sure if it's feasible to upgrade to an SSD or even a 7200 RPM HDD.
If upgrading would increase earnings then maybe, but at $10 I would just stick with what I have.
I am also thinking of quitting altogether. This is not even close to what I expected.

Do these errors have any impact on audits?

There is this earnings estimator, which gives you an idea of what realistic earnings look like: Realistic earnings estimator

I would say your earnings are within what you can expect. I don't think you need an upgrade if the bandwidth errors are the only ones you see. If you see many uploads or downloads cancelled in your logs, that is what will hurt your earnings.

There has to be something I am missing. I don't think node operators are here to make $10 a month, or even 4x that.

$40 per month, or even $10, for doing nothing is not too bad, I think. I mean, of course more is always better, but I would be interested to learn what you expected to earn.

I have storage2.max-concurrent-requests configured at 8, but I still see the bandwidth errors. I restarted the container.

Unless it's a big node, like 20 TB, or it's continually restarting and thus running the filewalker over and over, it shouldn't happen… a locked bandwidth.db could also be permissions-related, or maybe other stuff…

it just usually isn’t.

This is the HDD. It is attached to a QNAP, and the overall health of the node and the HDD is NOT showing any issues.

I am getting this error too, and my DB is on my Windows C drive, which is not horribly busy. This started happening after a system crash. It seems to have cratered my earnings, as I made $0 for the last month even though I moved a lot of data in and out. If I can't fix it, I will have to abandon the node.

How do I force an unlock on this table?

If your data is on the system drive too, it's kind of expected. Is your system drive an SSD?

The statistics for September are not showing for everyone:

Your node can also work without databases and it will still be rewarded, but the stats on your dashboard will be wrong.

I would suggest checking your databases and fixing them; they may be corrupted after the system crash:

If the databases are corrupted and you do not want to spend time recovering the malformed ones, you may re-create them:

In the latter case your stats and historic data will be lost, but it doesn't affect earnings.
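
Before digging into recovery, a quick way to see whether a database file is actually corrupted is SQLite's PRAGMA integrity_check. Here is a minimal sketch in Go (the glob path is a placeholder and the driver choice is an assumption; stop the node before running anything against its databases):

// Minimal sketch: run SQLite's PRAGMA integrity_check over each storagenode
// database file. Stop the node first; the glob path is a placeholder.
package main

import (
	"database/sql"
	"fmt"
	"path/filepath"

	_ "github.com/mattn/go-sqlite3" // driver choice is an assumption for this demo
)

func main() {
	dbFiles, _ := filepath.Glob("/path/to/storage/*.db") // placeholder location
	for _, f := range dbFiles {
		db, err := sql.Open("sqlite3", f)
		if err != nil {
			fmt.Println(f, err)
			continue
		}
		var result string
		// The check returns "ok" for a healthy file; anything else points
		// at corruption (e.g. "database disk image is malformed").
		if err := db.QueryRow(`PRAGMA integrity_check`).Scan(&result); err != nil {
			result = err.Error()
		}
		fmt.Printf("%s: %s\n", f, result)
		db.Close()
	}
}

Every file that prints ok passed the check; anything else is a candidate for the fix or re-create steps above.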

Actually, I was incorrect: my databases are on the same drive as my data store. My C drive, while a lot smaller, is much faster. Is there a way in the config file to tell Storj to use the databases in a different location?
