Failed to add bandwidth usage

Hello, I’m starting to get a lot of these errors and I’m not sure why or how to correct them, so I’m after any help please. I have an RPi4 with a 4TB HDD and it has been running since December

2022-03-09T12:22:06.684Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked, errorVerbose: bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:722\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:434\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}
2022-03-09T12:22:08.858Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked, errorVerbose: bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:722\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:348\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}
2022-03-09T12:22:12.149Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked, errorVerbose: bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:722\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:434\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}
2022-03-09T12:22:16.694Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked, errorVerbose: bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:722\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:434\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}
2022-03-09T12:22:22.159Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked, errorVerbose: bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:722\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:434\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}
2022-03-09T12:22:27.360Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked, errorVerbose: bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:722\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:434\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}

@tre4orbragg did you search for similar reports on the forum? One that looks very similar is Database bandwidthdb is locked - #2 by SGC or Weird node behaviour - #16 by YourHelper1

Yeah, but those didn’t have much clear direction on what to do. I’ve checked the database and have now updated the config.yaml to set the max connections to 20, and will see how that goes
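For anyone else following along, this is one common way to check a node database (a sketch: the sqlite3 CLI must be installed, the node should be stopped first, and the path is a placeholder assumption, so substitute your node’s actual storage location):

```shell
# Run SQLite's built-in integrity check against the bandwidth database.
# The default path below is only an illustration; override DB with your real path.
DB="${DB:-./bandwidth.db}"                 # e.g. /mnt/storagenode/storage/bandwidth.db
sqlite3 "$DB" 'PRAGMA integrity_check;'    # prints "ok" when the file is healthy
```

If it prints anything other than "ok", the database is damaged and needs repair or recreation before the node will behave.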

As said, I am facing a similar problem. The main reason, I think, is that I am using an SMR disk (4TB like you, btw) which can’t handle the load. You can check for yourself:

  1. whether this happens when you accept a lot of concurrent requests.
  2. how big the problem is, i.e. the percentage of errors you are getting. If those were all the errors for the day, then your node is more than fine. If the problem appears once in 1000 or more log lines, then you should just forget it. Analyzing the data from logs is nice, but overthinking every error will probably do nothing more than raise your blood pressure.
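One rough way to measure that percentage is to grep the day’s log. A minimal sketch (the sample lines below are stand-ins for your real log, e.g. one captured with `docker logs storagenode > node.log 2>&1`):

```shell
# Build a tiny sample log purely for illustration; point the commands
# below at your real node.log instead.
printf '%s\n' \
  '2022-03-09T12:22:06.684Z ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database is locked}' \
  '2022-03-09T12:22:07.101Z INFO piecestore upload started' \
  '2022-03-09T12:22:07.412Z INFO piecestore uploaded' > node.log

locked=$(grep -c 'database is locked' node.log)   # lines with the locked-DB error
total=$(wc -l < node.log)                         # all log lines
echo "$locked of $total lines are locked-database errors"
```

If that ratio stays down in the fraction-of-a-percent range, the node is losing a negligible amount of traffic to these errors.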

Using an SMR disk seems like a classic rookie mistake (one I made too, of course), as everyone buys the cheapest disk to provide more storage but forgets about bandwidth, which is the most important thing here. An SMR disk can maybe bring more benefits in the long run, though: by the time you have filled it (or almost filled it), you will be getting a lot of reads and hopefully not so many writes, so the disk will have better performance, and of course more space for its price compared to a more expensive CMR.

What you can do:

I currently have a limit of 7 concurrent requests (you can specify that in the config.yaml file). This of course means lower payouts, but it is as much as my disk can handle; otherwise RAM fills up for no reason and then I get those errors that you mentioned…
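For reference, this is roughly what that limit looks like in config.yaml (a sketch from my setup; check the comments in your own config file for the exact key on your node version, and restart the node after changing it):

```yaml
# config.yaml (excerpt): cap simultaneous transfers so a slow (e.g. SMR)
# disk is not overwhelmed. On recent versions 0 means unlimited.
storage2.max-concurrent-requests: 7
```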

Hope this helped, but you also have to thank @michaln for tagging the right person in the right problem :wink:

Thanks! There have been a lot more errors, but it looks like a bit fewer after reducing the concurrent connections, so I’m playing with that to find a good balance between load and still actually getting traffic!
