Database issue?

Hello Alexey,

I can see you are a nice guy who helps all the users.
I have a problem with my node1. When I do sudo docker stop/start storagenode, it never remembers the Disk Storage Space; the statistics stay the same all the time. After a few days I see some amount in the Trash, but after restarting the node it is back to 0 B. The same thing happens with the Online time. Is this a database problem? Is this a good moment to start a thread about it? I haven't seen a similar problem on the Storj forum.

I would be thankful for your reply.

Best regards.

Hi!
Thank you for your warm words!

It’s worth checking your databases: How to fix a "database disk image is malformed" – Storj
Also, please post your docker run command between two lines with three backticks (like these: ```). You can hide your personal info.
If you are running a Windows GUI node, then please check your disk for errors (the node should be stopped).
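For reference, the check from that article boils down to running an integrity check on every database file while the node is stopped. A minimal sketch, assuming sqlite3 is installed on the host; the path below is only an example, adjust it to your own storage location:

# Stop the node first, then check each database file.
# /mnt/storagenode/storage is an example path — use your own storage location.
for db in /mnt/storagenode/storage/*.db; do
    echo "=== $db"
    sqlite3 "$db" "PRAGMA integrity_check;"   # prints "ok" if the database is healthy
done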

Hello.
I didn’t think it would become a public post; I would have written my greetings a bit differently :smiley:

I have checked my databases and they seem to be fine.

Sorry, I don’t have a copy of the database check output, only a screenshot I took a few days ago.

Here’s the docker run command:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
    --net host \
    -e WALLET="X" \
    -e EMAIL="X" \
    -e ADDRESS="X:XXXX" \
    -e STORAGE="X" \
    --mount type=bind,source="/mnt/storj/identity/storagenode/",destination=/app/identity \
    --mount type=bind,source="/mnt/storj/storage",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest

I’ll move any private message to the public section unless it contains private information.
Usually every question has already been asked before, and the answer could help others too.

The databases seem OK. Your command doesn’t show any potential problems.
Please search your logs for errors:

docker logs storagenode 2>&1 | grep ERROR | tail
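If there are many different errors, a rough variation of the same command can show which kinds are most frequent. This assumes the default log format, where the third column of each line is the subsystem name (collector, piecestore, and so on):

docker logs storagenode 2>&1 | grep ERROR | awk '{print $3}' | sort | uniq -c | sort -rn
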
[opc@storj-bridge ~]$ sudo docker logs storagenode 2>&1 | grep ERROR | tail
2021-04-11T09:09:01.028Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "LAEQJZMRR5P2GERLVRKWCW2C3HB2ZUWINW755LHAC6TKCK5PROAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-04-11T09:19:27.762Z        ERROR   piecestore      failed to add bandwidth usage   {"error": "bandwidthdb error: database is locked", "errorVerbose": "bandwidthdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:683\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:413\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1025\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:29\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2021-04-11T09:59:08.569Z        ERROR   piecestore      failed to add bandwidth usage   {"error": "bandwidthdb error: database is locked", "errorVerbose": "bandwidthdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:683\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:413\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1025\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:29\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2021-04-11T10:09:09.722Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "PBWJMPSUETQRZT4GZ36UAJSFGSJ36R4SSPAM6JXS34WMX3YB36OA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-04-11T10:09:10.163Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "LAEQJZMRR5P2GERLVRKWCW2C3HB2ZUWINW755LHAC6TKCK5PROAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-04-11T10:52:52.639Z        ERROR   piecestore      failed to add bandwidth usage   {"error": "bandwidthdb error: database is locked", "errorVerbose": "bandwidthdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:683\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:413\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1025\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:29\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2021-04-11T11:09:01.563Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "PBWJMPSUETQRZT4GZ36UAJSFGSJ36R4SSPAM6JXS34WMX3YB36OA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-04-11T11:09:02.145Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "LAEQJZMRR5P2GERLVRKWCW2C3HB2ZUWINW755LHAC6TKCK5PROAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-04-11T12:08:58.418Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "PBWJMPSUETQRZT4GZ36UAJSFGSJ36R4SSPAM6JXS34WMX3YB36OA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-04-11T12:09:00.388Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "LAEQJZMRR5P2GERLVRKWCW2C3HB2ZUWINW755LHAC6TKCK5PROAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

How is your disk connected?
What is the filesystem?
What is the type of the disk? Is it SMR?
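If you are not sure about the last point, the exact model number usually answers it. A quick sketch, assuming smartmontools is installed on the machine the disk is attached to; /dev/sdX is a placeholder for your device, and USB enclosures may need the -d sat option:

sudo smartctl -i /dev/sdX          # shows the "Device Model" string
# sudo smartctl -d sat -i /dev/sdX # try this form for USB-attached drives
# Look up the reported model to see whether it is CMR or SMR.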

My disk is connected directly to a Synology NAS. The main server is at another location; the Synology acts as the storage server. Both servers are connected through SSHFS, because my LTE ISP puts me behind 2-3 layers of NAT. I know it isn’t the best solution for Storj, but I don’t have any better option. I’m now waiting for a new, faster and more “server friendly” connection.
There’s no RAID group on the disk yet. The filesystem is BTRFS.
It’s a WD Elements Desktop 14TB. I think it’s a CMR drive because of its very large capacity.

Synology has fixed most of the problems with BTRFS in their version, as far as I know.
The problem seems to be more related to SSHFS.
Why don’t you run the node on the Synology?
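For context, Synology supports Docker (the Docker / Container Manager package), so the same run command could point at local volumes instead of an SSHFS mount. A rough sketch based on your command; the /volume1 paths are placeholders you would adjust to your actual shared folders:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
    --net host \
    -e WALLET="X" \
    -e EMAIL="X" \
    -e ADDRESS="X:XXXX" \
    -e STORAGE="X" \
    --mount type=bind,source="/volume1/storj/identity/storagenode/",destination=/app/identity \
    --mount type=bind,source="/volume1/storj/storage",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest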

And you have had this problem for a long time: Bandwidth 0GB, but node is working correctly
Please do not use SSHFS.