Bandwidth.db database is 5 GB

My bandwidth.db is 5.1 GB. Is that too big?

The node seems to have problems; I can see this in the logs:
ERROR piecestore failed to add bandwidth usage {"Process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:731\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func6:670\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:694\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

I have tried to fix a “database disk image is malformed” error, but it would take about 2 months to finish: it processed about 10 MB in 2 hours, and my db is 5 GB.
Any idea what to do?

If you have errors after checking the db, you can repair it using RAM, but you need at least 11 GB of free RAM.

It will be much faster.
If you have no errors after the check, try moving the db to an SSD.
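
Roughly like this, as a sketch; the mount point, size, and paths are placeholders for your setup, and the grep strips the transaction statements so that the trailing ROLLBACK a dump of a malformed db can emit doesn't undo the import:

# stop the node first, then build a tmpfs (RAM disk) big enough for the db,
# its SQL dump, and the rebuilt copy, hence the ~11 GB recommendation
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=11g tmpfs /mnt/ramdisk
cp /path/to/storage/bandwidth.db /mnt/ramdisk/
cd /mnt/ramdisk

# dump whatever is still readable, dropping the transaction wrapper
sqlite3 bandwidth.db ".dump" | grep -v -e TRANSACTION -e ROLLBACK -e COMMIT > dump.sql

# rebuild into a fresh file and verify it before copying it back
sqlite3 bandwidth_new.db ".read dump.sql"
sqlite3 bandwidth_new.db "PRAGMA integrity_check;"
cp bandwidth_new.db /path/to/storage/bandwidth.db

Keep the original file somewhere until the node has run cleanly for a while.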

Is it possible to check the database while the node is running?

Nope.
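The node keeps the databases open while it runs, so stop it first. The check itself is just SQLite's integrity check; the path here is a placeholder for your storage location:

# run only while the node is stopped; prints "ok" when the file is structurally sound
sqlite3 /path/to/storage/bandwidth.db "PRAGMA integrity_check;"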

I find the “database is locked” error usually happens because of a lack of either CPU or disk IO.

Excuse me, I wanted to ask you something. I have seen this post and I cannot understand one thing. I have 9 nodes; the oldest is 14 TB (11 TB full) and over 30 months old, yet no database on it is larger than 600 MB, and its bandwidth.db is 580 MB. The others are much smaller. I have never had problems: 99.9% uptime and 0 errors in the log. How come other users have DBs over 5 GB? I also saw other forum posts about it. I can't explain it. I look forward to your info on this. Thank you


I have two 14 TB nodes; one has a 79 MB db, the other is 5 GB.

hmmm that's a good question…

I have for a long time had a node that has been acting up, but I never really knew what was wrong with it… it turns out it also has a 5 GB bandwidth.db, while a similar node created at the same time with a similar size has a bandwidth.db of 57 MB.

No idea why this would be or happen… but it does kinda make me think this is not normal…
I suppose the question then is what to do about it…

Is a 5 GB bandwidth.db a malformed DB?
Seems like a rather drastic deformation lol

@Alexey @BrightSilence why would a database become so big when other nodes have normal-sized databases, and mine works fine otherwise… it does complain a bit about the db being locked when I first start the node, but after that it doesn't show up at all…

is it just filled with junk or is there something useful in it…
5GB is a lot of bandwidth… :smiley:

Isn't it just used to verify the numbers the satellites give regarding payouts?

Maybe you can run a vacuum command on a copy of the db file and check if the file is smaller afterwards. There might be something wrong with the auto-vacuum.

https://www.sqlite.org/lang_vacuum.html
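
Something like this, as a minimal sketch; the paths are placeholders, and since it runs on a copy, the live file is never touched:

cp /path/to/storage/bandwidth.db /tmp/bandwidth_copy.db        # work on a copy, not the live db
sqlite3 /tmp/bandwidth_copy.db 'VACUUM;'                       # rewrites the file, releasing free pages
ls -lh /path/to/storage/bandwidth.db /tmp/bandwidth_copy.db    # compare sizes afterwards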

Without more context I would have to guess. But SQLite doesn't automatically release freed-up space, and locks can cause bandwidth rollups to fail. It might be that this db file has, or had, many individual bandwidth records that were never rolled up into hourly aggregates. Vacuum could indeed solve the size issue by actually releasing that freed space. But I'm not sure it would also resolve the locks issue; if it doesn't, the db might inflate again.
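
If you want to check whether that's what happened, something along these lines could work; the table names bandwidth_usage and bandwidth_usage_rollups are my assumption about the storagenode schema, so confirm with .tables first:

sqlite3 /path/to/storage/bandwidth.db ".tables"    # confirm the actual table names

# a huge raw-record count next to a small rollup count would point at failing rollups
sqlite3 /path/to/storage/bandwidth.db \
  "SELECT (SELECT COUNT(*) FROM bandwidth_usage) AS raw_records,
          (SELECT COUNT(*) FROM bandwidth_usage_rollups) AS rollups;"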


An interesting note: it's actually my oldest remaining node, after I did graceful exit (GE) on my 17 TB one, since it had a nice held amount.

None of the other nodes seem to have this issue, and I don't think the 17 TB one did either; this node has been behaving kinda weird for a while.
But my system is pretty beefy, so the hardware makes up for it, I guess.

All other bandwidth.db files are 50 MB or less, with the largest node sizes being in the 7 TB range.

I guess I'll try to vacuum the db manually and see if that fixes the issue.

Had to install sqlite3, then ran this:

for i in ./bandwidth.db; do   # the glob matches just this one file; use ./*.db for all of them
  echo "$i"
  sqlite3 "$i" 'VACUUM;'      # rewrites the database file, releasing free pages
done

Reduced the size from 5 GB to 4.6 GB,
so it did something…
Tried running it again, with no further change to the bandwidth.db size.

So the next option is just deleting it, I guess?
Can I do that now, or should I wait until the new month begins?

You may delete it, but you will lose the usage history:


Try running the node for a while after vacuuming. It's possible rollups never worked because of lock contention; hopefully the vacuum fixed that. Then the next day, try vacuuming again and see whether the size stays down. If possible in your setup, it might also help to move the db's to SSD.
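
For the SSD move: if I remember right, recent node versions let you point the databases at a separate directory. This is a sketch assuming your version supports the option, with a placeholder path (the db files have to be moved there manually while the node is stopped):

# in config.yaml
storage2.database-dir: /mnt/ssd/storagenode-dbs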

That's a good idea.

My setup manages that by itself, so things like database loads will be allocated to SSD and RAM.
Vacuuming the 5 GB file took only a few minutes, which is a pretty good result, afaik.

So is it safe to delete bandwidth.db? I am not interested in the history stats, but I want to be sure the node will keep working fine.

Yes, all db's are non-vital and can be recreated using the instructions Alexey linked, but you will lose historic stats. In the case of bandwidth.db, that's the data used for the bandwidth graph on the dashboard.

@daniweb I saw you deleted your post. I assume you figured out how to follow the instructions. But let us know if you need help getting your node working again.


Yes, I deleted bandwidth.db and the node didn't want to start anymore. Then I read the @Alexey post again and fixed the problem: I backed up the databases, deleted all databases, started the node again, stopped the node, and copied the databases (except bandwidth.db) back to the storage folder. The node started again.
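
In commands, that sequence was roughly the following; sketched assuming a docker node named storagenode and placeholder paths:

docker stop -t 300 storagenode

mkdir -p /path/to/backup
cp /path/to/storage/*.db /path/to/backup/    # back up every database
rm /path/to/storage/*.db                     # remove them from the storage folder

docker start storagenode                     # the node recreates a fresh, empty set of db's
docker stop -t 300 storagenode

# restore the old db's, skipping the broken bandwidth.db so the fresh one stays
for db in /path/to/backup/*.db; do
  [ "$(basename "$db")" = "bandwidth.db" ] && continue
  cp "$db" /path/to/storage/
done

docker start storagenode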
