No used bandwidth since September '21

Hey you :slight_smile: !
Could not find anything similar here in the forums, so here is my problem:
Since the end of September my dashboard has not shown any used bandwidth, and with that the payout has dropped to a minimum (which is now made up only of the used space).
In contrast, the used storage is still increasing*, but, as mentioned, no used bandwidth is shown or paid.
The satellite scores are always above 95% (or even above 98%, but fixing errors cost me a few % :’) ).
I already tried to transfer my node to another PC, but since that one runs Ubuntu and my node runs on Windows, I did not manage to transfer it successfully and reverted the move (with no change in the error mentioned).
Here is the corresponding screenshot of the bandwidth section:

bandwidth
(This part of my dashboard has looked the same for the past two months.)

Here are some errors shown in the log:

2021-11-05T08:10:56.861+0100 ERROR bandwidth Could not rollup bandwidth usage {error: bandwidthdb: database disk image is malformed, errorVerbose: bandwidthdb: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:324\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57}
2021-11-05T08:10:56.865+0100 ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database disk image is malformed, errorVerbose: bandwidthdb: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:711\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:432\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:104\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:97\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}
2021-11-05T08:10:56.865+0100 INFO piecestore uploaded {Piece ID: GUDVUYJY4RYNPC63YJXCEPZZ7KIZO3KOLLC3RCI443PVFO2E6T6A, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT_REPAIR, Size: 2048}
2021-11-05T08:10:56.994+0100 ERROR collector unable to delete piece {Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Piece ID: I66RUSPYIMJCDA65JFAPLF3VWZ2HDXFTZ2TPV6BIX5Q72QK2HJFQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57}
2021-11-05T08:10:57.092+0100 ERROR collector unable to delete piece {Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Piece ID: B2TKIKNRLRUFA3CZFTCP6E7IKAD63HTRO5WKWMGETJKJWOR7UC2A, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57}
2021-11-05T08:10:57.210+0100 ERROR collector unable to delete piece {Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Piece ID: S5VZ5YP56NKB6PLZNVSY6ISBGW7IEXV3YQT6LJAYLNX3UW446VZQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57}
2021-11-05T08:10:57.577+0100 ERROR piecestore failed to add bandwidth usage {error: bandwidthdb: database disk image is malformed, errorVerbose: bandwidthdb: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:711\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:430\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:104\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:97\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52}

*Since this problem started, the used storage has increased from about 500 GB to more than 600 GB.

Hi!

It is very weird that you did not find the 50+ existing answers: Search results for 'database disk image is malformed' - Storj Community Forum (official) :slight_smile:
Please use this guide: https://support.storj.io/hc/en-us/articles/360029309111-How-to-fix-a-database-disk-image-is-malformed-
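
In rough terms, the guide has you stop the node, dump the readable contents of the malformed database with the sqlite3 command-line tool, strip the transaction statements from the dump, and rebuild a fresh database from it. A minimal PowerShell sketch of that flow, assuming the malformed database is bandwidth.db, the data lives in D:\storagenode, and sqlite3.exe is on the PATH (these paths are examples; back up the file first and follow the guide for the authoritative steps):

Stop-Service storagenode
Copy-Item "D:\storagenode\bandwidth.db" "D:\storagenode\bandwidth.db.bak"   # safety copy before touching anything

# confirm which database is corrupted
sqlite3 "D:\storagenode\bandwidth.db" "PRAGMA integrity_check;"

# dump whatever is still readable, drop the transaction wrapper statements,
# then rebuild a fresh database from the filtered dump
sqlite3 "D:\storagenode\bandwidth.db" ".dump" |
    Where-Object { $_ -notmatch "TRANSACTION|ROLLBACK|COMMIT" } |
    Set-Content "D:\storagenode\dump_all_notrans.sql"
Remove-Item "D:\storagenode\bandwidth.db"
sqlite3 "D:\storagenode\bandwidth.db" ".read D:/storagenode/dump_all_notrans.sql"

Start-Service storagenode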

3 Likes

Thank you very much! I don’t know how I did not find that solution :see_no_evil:

But I keep getting the “unable to delete piece” error for the satellites; can I ignore it?

And now the node crashes after running for only a short time, with:

2021-11-07T14:15:00.987+0100 ERROR piecestore:cache error getting current used space: {“error”: “CreateFile R:\blobs\ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa\rm/virpyxde4krr3nryul6j75xalmewdgrwx6qzv6qz2us6syt72q.sj1: Die Datei oder das Verzeichnis ist beschädigt und nicht lesbar.”, “errorVerbose”: “CreateFile R:\blobs\ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa\rm/virpyxde4krr3nryul6j75xalmewdgrwx6qzv6qz2us6syt72q.sj1: Die Datei oder das Verzeichnis ist beschädigt und nicht lesbar.\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:788\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:725\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:685\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:284\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:497\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:662\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:54\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-11-07T14:15:00.987+0100 ERROR services unexpected shutdown of a runner {“name”: “piecestore:cache”, “error”: “CreateFile R:\blobs\ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa\rm/virpyxde4krr3nryul6j75xalmewdgrwx6qzv6qz2us6syt72q.sj1: Die Datei oder das Verzeichnis ist beschädigt und nicht lesbar.”, “errorVerbose”: “CreateFile R:\blobs\ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa\rm/virpyxde4krr3nryul6j75xalmewdgrwx6qzv6qz2us6syt72q.sj1: Die Datei oder das Verzeichnis ist beschädigt und nicht lesbar.\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:788\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:725\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:685\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:284\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:497\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:662\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:54\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-11-07T14:15:00.989+0100 INFO piecestore downloaded {“Piece ID”: “C62W5WAPTZKVHGHQJQD56LRJBX66UIJIMLAGZKAGC4ZUCZUJJM6A”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2021-11-07T14:15:00.990+0100 INFO piecestore upload canceled {“Piece ID”: “KPYV6RA4J22AAT66Q6VJSI6VC4V3N2HO3VL2TZ37I3ESME526SUA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Size”: 0}
2021-11-07T14:15:01.046+0100 INFO piecestore downloaded {“Piece ID”: “FFPA4WPG4GEOFEUXALQKMHT6ZXQVXBOE7WFPTZXCQXOSC3DLYZOQ”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2021-11-07T14:15:01.060+0100 INFO piecestore upload canceled {“Piece ID”: “RDKZ3YB4PBGTPOARF4PBJOXVW7HKV3ZWCRC55F6WKSHHZ5UZ7VMA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Size”: 532480}
2021-11-07T14:15:01.083+0100 INFO piecestore downloaded {“Piece ID”: “3LFEDJFWLTETWKDUVQK2FS24EHDN4ZPCWWLTXUXYDOSNB5OBGGDQ”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Action”: “GET”}
2021-11-07T14:15:01.455+0100 FATAL Unrecoverable error {“error”: “CreateFile R:\blobs\ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa\rm/virpyxde4krr3nryul6j75xalmewdgrwx6qzv6qz2us6syt72q.sj1: Die Datei oder das Verzeichnis ist beschädigt und nicht lesbar.”, “errorVerbose”: “CreateFile R:\blobs\ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa\rm/virpyxde4krr3nryul6j75xalmewdgrwx6qzv6qz2us6syt72q.sj1: Die Datei oder das Verzeichnis ist beschädigt und nicht lesbar.\n\tstorj.io/storj/storage/filestore.walkNamespaceWithPrefix:788\n\tstorj.io/storj/storage/filestore.(*Dir).walkNamespaceInPath:725\n\tstorj.io/storj/storage/filestore.(*Dir).WalkNamespace:685\n\tstorj.io/storj/storage/filestore.(*blobStore).WalkNamespace:284\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePieces:497\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:662\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:54\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

(The German part of the error translates to “The file or directory is corrupted and unreadable.”) I’m getting the feeling that my HDD just doesn’t want to work for me anymore :smiley:

You need to stop the node either from the Services applet or from an elevated (run as Administrator) PowerShell:

Stop-Service storagenode

and run the chkdsk /f R: command from an elevated cmd.exe or PowerShell.
You may need to run it several times until no errors are reported.
When all errors are fixed, you can start the storagenode again, either from the Services applet or from an elevated PowerShell:

Start-Service storagenode

then check your logs again.
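
If you prefer to do the whole thing from a single elevated PowerShell window, here is a sketch, assuming the data drive is R: and the default log location of the Windows installer (adjust both to your setup):

Stop-Service storagenode
chkdsk /f R:        # repeat until chkdsk reports no more errors; it may ask to dismount the volume first
Start-Service storagenode
Get-Content "C:\Program Files\Storj\Storage Node\storagenode.log" -Tail 50 -Wait   # watch the log for new errors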

2 Likes

Thank you so much!
Node is now running :slight_smile:

1 Like