Database bandwidthdb is locked

failed to add bandwidth usage  {"error": "bandwidthdb: database is locked"}

What could be the reason for this? The last time I removed the container and restarted it was 30 hours ago.

usually happens with high disk latency…

so usually that would mean that your hdd is overworked, or there may be an issue with your drive…

or at least that's how i remember it… saw it quite recently myself, also due to an overload; pretty sure it went away again when the hdd had less work to do.
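The error itself is plain SQLite behaviour: a writer that can't get the write lock within its busy timeout gives up with exactly this message, and high disk latency makes lock holders slow to finish. A minimal sketch (plain Python `sqlite3`, not Storj's actual code) that reproduces it:

```python
import os
import sqlite3
import tempfile

# Writer A holds the write lock while writer B's busy timeout is too
# short, producing the same "database is locked" error as in the log.
path = os.path.join(tempfile.mkdtemp(), "bandwidth.db")

a = sqlite3.connect(path, isolation_level=None)  # autocommit mode
a.execute("CREATE TABLE usage (bytes INTEGER)")
a.execute("BEGIN IMMEDIATE")                     # A takes the write lock

b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
msg = ""
try:
    b.execute("INSERT INTO usage VALUES (1)")    # B gives up after 100 ms
except sqlite3.OperationalError as e:
    msg = str(e)
print(msg)  # → database is locked

a.execute("ROLLBACK")
a.close()
b.close()
```

On a healthy disk the lock holder finishes in microseconds and B never notices; under heavy latency the window grows until timeouts start firing.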

Ah I see. The drive is very busy at the moment so that could explain it. I’ll check when there is less load again.


not sure if there are any long term effects of this error tho…
i think bandwidth.db is part of what keeps track of usage and thus indirectly payment, but i don’t have much clue about how that works exactly.
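If you're worried about long-term damage to the file, SQLite ships a built-in consistency check you can run against it while the node is stopped. A hedged sketch; the path below is a self-contained stand-in, not the node's real storage directory:

```python
import os
import sqlite3
import tempfile

# Stand-in file so the sketch runs anywhere; point `path` at your
# node's real bandwidth.db instead (with the node stopped).
path = os.path.join(tempfile.mkdtemp(), "bandwidth.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE usage (bytes INTEGER)")
init.close()

con = sqlite3.connect(path)
result = con.execute("PRAGMA integrity_check").fetchone()[0]
print(result)  # "ok" means the file is internally consistent
con.close()
```

Anything other than `ok` means the database file itself is damaged; a plain lock timeout on its own doesn't corrupt the file, it just means that one write was rejected.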

if this happens often or you need it to run like this, it might be wise to reduce the iops for the other services that run on the hdd… i assume you are running more than one…

it's not a good sign that stuff doesn't get written / is unable to be accessed because the disk is too slow…
stuff like caches might also help mitigate it if this is more than a one-time deal
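Caches aside, the generic SQLite-side knob for this is the busy timeout: a writer that waits longer simply rides out short lock spikes instead of erroring. A hypothetical sketch (not the node's actual settings), where the lock is released after ~200 ms and a generous timeout lets the second writer succeed:

```python
import os
import sqlite3
import tempfile
import threading
import time

path = os.path.join(tempfile.mkdtemp(), "bandwidth.db")

a = sqlite3.connect(path, isolation_level=None, check_same_thread=False)
a.execute("CREATE TABLE usage (bytes INTEGER)")
a.execute("BEGIN IMMEDIATE")       # A holds the write lock

def release() -> None:
    time.sleep(0.2)
    a.execute("COMMIT")            # lock released after ~200 ms

t = threading.Thread(target=release)
t.start()

# With a generous timeout, B waits out the contention instead of
# raising "database is locked".
b = sqlite3.connect(path, timeout=5.0, isolation_level=None)
b.execute("INSERT INTO usage VALUES (1)")
count = b.execute("SELECT count(*) FROM usage").fetchone()[0]
print(count)  # → 1

t.join()
a.close()
b.close()
```

This only papers over the symptom, of course; if the disk stays saturated, the lock spikes eventually outlast any timeout.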

I have seen this error more often lately too. However, my DBs are on an idle ssd so overload can’t be the problem.
Just today 3 times on 2 nodes.

Hmmm that’s odd… to be fair tho, i haven’t really kept a close eye on it…

My proxmox died, and then i couldn't get my l2arc / slog ssd drives installed because it was a huge headache and i wasn't sure my proxmox would reboot…
so my system was really overloaded and i was checking my logs and the bandwidth.db locked error kept showing up, basically all the time…

so i turned all my crazy sync=always stuff off and it basically just went away…
i can’t remember ever seeing it without it being caused by latency or such…

i should really get my logging setup again lol… at present it's just throwing them out aside from like an hour back…
so i cannot even really check to see if i got the same issue happening…
i don’t see it if i go into the live logs tho… when the system is overloaded it happens like all the time… just spamming the log.