Recovering bandwidth.db on Windows after power failure (Solved)

So we had a temporary power cut, nothing major.

But after everything came back online, the Storj node won't make contact. I have included the log output after startup.

Let me know if I'm doing something wrong or if I can somehow fix it.

Thanks!

2021-05-08T21:23:22.269+0200 INFO Configuration loaded {"Location": "C:\Program Files\Storj\Storage Node\config.yaml"}
2021-05-08T21:23:22.370+0200 INFO Operator email {"Address": "domco.pastorek@gmail.com"}
2021-05-08T21:23:22.370+0200 INFO Operator wallet {"Address": "0x82eab86c0e32c8194806b6a847764a9a365bc28e"}
2021-05-08T21:23:23.046+0200 INFO Telemetry enabled {"instance ID": "1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV"}
2021-05-08T21:23:23.242+0200 INFO db.migration Database Version {"version": 51}
2021-05-08T21:23:26.166+0200 INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2021-05-08T21:23:27.016+0200 INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2021-05-08T21:23:27.017+0200 INFO Node 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV started
2021-05-08T21:23:27.017+0200 INFO bandwidth Performing bandwidth usage rollups
2021-05-08T21:23:27.017+0200 INFO trust Scheduling next refresh {"after": "5h30m0.298215381s"}
2021-05-08T21:23:27.022+0200 ERROR bandwidth Could not rollup bandwidth usage {“error”: “bandwidthdb error: database disk image is malformed”, “errorVerbose”: “bandwidthdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:324\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.017+0200 INFO Public server started on [::]:28967
2021-05-08T21:23:27.025+0200 INFO Private server started on 127.0.0.1:7778
2021-05-08T21:23:27.026+0200 ERROR collector unable to delete piece {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “K5WD2PCUDWMBIAJE6BBOTC52HDZDRKEPJYBW34ZBRANXJIMUQZ3Q”, “error”: “pieces error: filestore error: file does not exist”, “errorVerbose”: “pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.029+0200 ERROR collector unable to delete piece {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “FOFXN2KZGZOR37BCFZHMZHIFFZWCKX32FXASORDMEYP5Y7PNV3VA”, “error”: “pieces error: filestore error: file does not exist”, “errorVerbose”: “pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.174+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “attempts”: 1, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.208+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 1, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.645+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “attempts”: 1, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.655+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “attempts”: 1, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:27.925+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:28.057+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “attempts”: 1, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:28.320+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “attempts”: 2, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:28.381+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 2, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:28.926+0200 INFO piecestore upload started {"Piece ID": "DEHEPQO7PG6EDEBYQMVVRSH7NN7E7YE5QKFHDRLKXYFRYBU3WCBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1826072894464}
2021-05-08T21:23:29.180+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “attempts”: 2, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:29.244+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “attempts”: 2, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:29.401+0200 ERROR contact:service ping satellite failed {“Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 2, “error”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address”, “errorVerbose”: “ping satellite error: failed to dial storage node (ID: 1snKVGtVgNaKzR2mZfReMUr4bep9U6eH947rLwbFzsk3u2y8mV) at address [::]:28967: rpc: dial tcp [::]:28967: connect: cannot assign requested address\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-05-08T21:23:30.112+0200 ERROR piecestore failed to add bandwidth usage {“error”: “bandwidthdb error: database disk image is malformed”, “errorVerbose”: “bandwidthdb error: database disk image is malformed\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:685\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:415\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:209\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51”}
2021-05-08T21:23:30.113+0200 INFO piecestore uploaded {"Piece ID": "DEHEPQO7PG6EDEBYQMVVRSH7NN7E7YE5QKFHDRLKXYFRYBU3WCBA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 1024}

Make sure Windows is using the same local IP; it looks like it may have lost the IP it was using before. Set a static IP to avoid this in the future.
Also, your bandwidth.db is malformed.
I'd recommend disabling the write cache on this drive because of this very issue; a UPS would help as well.

Is there a way to recover the bandwidth.db, or is it a non-essential file?

As far as I remember it's not 100% important; it's what keeps track of the stats for the web dashboard. It's still early in the month, so it's not a huge deal. Probably not worth recovering… You can always just back it up for now.
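If you want to take that backup while something might still have the database open, a raw file copy can capture a torn state; Python's stdlib sqlite3 backup API does a safe page-by-page copy instead. A minimal sketch, with illustrative file names (and assuming the database is still readable at all):

```python
import sqlite3

# Safe page-by-page copy via the stdlib backup API, which is safer
# than a raw file copy if anything might still hold the db open.
# File names here are illustrative, not Storj-mandated paths.
src = sqlite3.connect("bandwidth.db")
dst = sqlite3.connect("bandwidth_backup.db")
with dst:
    src.backup(dst)  # online backup, page by page
src.close()
dst.close()
```

If the node is fully stopped, a plain file copy of bandwidth.db works just as well.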

Yeah, the issue is that the dashboard isn't displaying correctly, and I also have no data in the Storj Prometheus exporter. Should I reinstall Storj?

I'd just check your config file to make sure everything matches (local IP, etc.). You can remove bandwidth.db and put it in another folder for now so the node doesn't load it. Then restart the storagenode service.


Yes, I tried it, but bad news: no success.
It recreated the file, but there is a fatal error that immediately stops the node.

I know the node is working because it is still completing uploads and downloads, but I would like the dashboard so I can monitor it.

What is the error? Is it the same even after removing the malformed database?

No, this is what it gave me:

2021-05-08T22:41:04.805+0200 FATAL Unrecoverable error {“error”: “Error during preflight check for storagenode databases: preflight: database “bandwidth”: expected schema does not match actual: &dbschema.Schema{\n- \tTables: *dbschema.Table{\n- \t\t(\n- \t\t\ts”""\n- \t\t\tName: bandwidth_usage\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: created_at\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t\t(\n- \t\t\ts"""\n- \t\t\tName: bandwidth_usage_rollups\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: interval_start\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t},\n+ \tTables: nil,\n- \tIndexes: *dbschema.Index{\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: \"\">,\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: \"\">,\n- \t},\n+ \tIndexes: nil,\n 
}\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:405\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:352\n\tmain.cmdRun:208\n\tstorj.io/private/process.cleanup.func1.4:363\n\tstorj.io/private/process.cleanup.func1:381\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:64\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57", “errorVerbose”: “Error during preflight check for storagenode databases: preflight: database “bandwidth”: expected schema does not match actual: &dbschema.Schema{\n- \tTables: *dbschema.Table{\n- \t\t(\n- \t\t\ts”""\n- \t\t\tName: bandwidth_usage\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: created_at\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t\t(\n- \t\t\ts"""\n- \t\t\tName: bandwidth_usage_rollups\n- \t\t\tColumns:\n- \t\t\t\tName: action\n- \t\t\t\tType: INTEGER\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: amount\n- \t\t\t\tType: BIGINT\n- \t\t\t\tNullable: false\n- \t\t\t\tDefault: “”\n- \t\t\t\tReference: nil\n- \t\t\t\tName: interval_start\n- \t\t\t\tType: TIMESTAMP\n- \t\t\t\tNullable: false\n- \t\t\t… // 10 elided lines\n- \t\t\ts"""\n- \t\t),\n- \t},\n+ \tTables: nil,\n- \tIndexes: *dbschema.Index{\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: \"\">,\n- \t\tsIndex<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: 
\"\">,\n- \t},\n+ \tIndexes: nil,\n }\n\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).preflight:405\n\tstorj.io/storj/storagenode/storagenodedb.(*DB).Preflight:352\n\tmain.cmdRun:208\n\tstorj.io/private/process.cleanup.func1.4:363\n\tstorj.io/private/process.cleanup.func1:381\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:64\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\n\tmain.cmdRun:210\n\tstorj.io/private/process.cleanup.func1.4:363\n\tstorj.io/private/process.cleanup.func1:381\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomConfig:88\n\tstorj.io/private/process.Exec:65\n\tmain.(*service).Execute.func1:64\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Did you move all 3 of the bandwidth files (bandwidth.db plus its -wal and -shm journal files) to a backup location?


Sorry to bother you, but which ones exactly?
[screenshot]

[screenshot] Looks like Windows is different.

Yes, because once you stop the node all of the other files collapse back into the one .db file, it seems.
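That collapse is consistent with SQLite's WAL journaling (an assumption about how the storagenode databases are configured, not something confirmed in this thread): while a connection is open, SQLite keeps -wal and -shm side files next to the main database, and a clean close checkpoints them back into the single .db. A throwaway stdlib demo, nothing Storj-specific:

```python
import os
import sqlite3
import tempfile

# Demo of WAL side files appearing while connected and disappearing
# after a clean close. "demo.db" is a throwaway example file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
wal_present = os.path.exists(path + "-wal")  # side file exists while open
conn.close()
wal_after = os.path.exists(path + "-wal")    # checkpointed away on close
print(wal_present, wal_after)
```

This is also why a hard power cut mid-write can leave the main file malformed: the checkpoint never gets a clean chance to run.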

OK, so then you will have to fix the malformed one: How to fix a "database disk image is malformed" – Storj
That's why I didn't want you to delete it, because I was unsure whether the node could just create its own.
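The linked article does a dump-and-reload with the sqlite3 CLI. The same idea can be sketched with Python's stdlib, if the binary is giving you trouble; paths are illustrative, always run it against a copy of the damaged file, and note a badly corrupted file may still fail partway through the dump:

```python
import sqlite3

# Sketch of the dump-and-reload fix: read every recoverable
# CREATE/INSERT statement out of the damaged copy and replay it
# into a fresh database. Paths are illustrative examples.
src = sqlite3.connect("bandwidth_copy.db")   # COPY of the malformed db
dst = sqlite3.connect("bandwidth_fixed.db")  # fresh db to rebuild into
for stmt in src.iterdump():
    if stmt.strip().upper().startswith(("BEGIN", "COMMIT")):
        continue                             # let dst manage transactions
    try:
        dst.execute(stmt)
    except sqlite3.DatabaseError:
        pass                                 # drop rows lost to corruption
dst.commit()
print(dst.execute("PRAGMA integrity_check").fetchone()[0])
```

If the rebuilt file passes the integrity check, swap it in for the old bandwidth.db while the node is stopped.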


Thanks, I will attempt it tomorrow; I'm too tired now to focus on this task. Thank you for all your help so far.

No problem, hope you get it working again.

Hi again. Bad news: PowerShell isn't cooperating. Any suggestions on what to do?

It's PowerShell; you have to explicitly use a .\ prefix to run an executable from the current directory.

So try: .\sqlite3 d:\bandwidth.db "PRAGMA integrity_check;"
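If the sqlite3 binary keeps misbehaving in PowerShell, the same check can be run with Python's stdlib instead of the CLI (the path below is illustrative; point it at a copy of the suspect file):

```python
import sqlite3

# Equivalent of `sqlite3 bandwidth.db "PRAGMA integrity_check;"`
# using Python's stdlib instead of the CLI binary.
# "bandwidth.db" is an illustrative path.
conn = sqlite3.connect("bandwidth.db")
result = conn.execute("PRAGMA integrity_check").fetchone()[0]
conn.close()
print(result)  # "ok" for a healthy database; error details otherwise
```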


Also, I just looked through the logs; they point to a problem on the disk:

"error": "pieces error: filestore error: file does not exist"

which could be why the DB got corrupted. To be fair, you don't need the bandwidth DB, but the other errors are not good. Although it is the collector process, working from the piece-expiration DB, so it could just be that those pieces don't exist anymore.

Just to be sure, it's probably worth running this from PowerShell:

Repair-Volume -DriveLetter D -OfflineScanAndFix

I ran it and got the response "NoErrorsFound".