Accidentally removed all databases

Hello guys,
If I accidentally removed all databases from this folder, is it possible to repair them?


I tried to do it by starting the node and then ran these commands:

cd c:\sqlite
cp F:\StorjD1.6\storage\bandwidth.db F:\StorjD1.6\storage\bandwidth.db.bak

cd c:\sqlite
c:\sqlite\sqlite3.exe F:\StorjD1.6\storage\bandwidth.db

.mode insert
.output F:\StorjD1.6\dump_all.sql
.dump
.exit

Get-Content F:\StorjD1.6\dump_all.sql | Select-String -NotMatch TRANSACTION | Select-String -NotMatch ROLLBACK | Select-String -NotMatch COMMIT | Set-Content -Encoding utf8 F:\StorjD1.6\dump_all_notrans.sql

rm F:\StorjD1.6\storage\bandwidth.db

c:\sqlite\sqlite3.exe F:\StorjD1.6\storage\bandwidth.db ".read F:/StorjD1.6/dump_all_notrans.sql"
(Note: the path passed to .read uses forward slashes: F:/StorjD1.6/dump_all_notrans.sql)

But the size of bandwidth.db remains the same, 32 KB (too small). Is this correct, and should I proceed the same way with the other databases?
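As a side check (not part of the official procedure): a database that was restored with schema only and no rows can legitimately stay around 32 KB, so file size alone doesn't prove the restore failed. A minimal Python sketch, assuming Python 3 is available, to list each table and its row count (the path below is just an example):

```python
import sqlite3

def table_row_counts(path):
    """Return {table_name: row_count} for every user table in a SQLite file.

    A database holding only schema (zero rows) can legitimately stay around
    32 KB, so a small file does not necessarily mean the restore failed.
    """
    con = sqlite3.connect(path)
    try:
        names = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
        return {n: con.execute(f'SELECT COUNT(*) FROM "{n}"').fetchone()[0]
                for n in names}
    finally:
        con.close()

# Example path -- adjust to the node's actual data location:
# table_row_counts(r"F:\StorjD1.6\storage\bandwidth.db")
```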

Regards


If the database is gone… doesn't that just mean your client-side stats for this month won't be accurate? The node is still uploading/downloading fine, and payouts will work properly; just your stats won't be accurate until they reset on Jan 1?

(Sorry, that doesn't answer your recovery question: just saying you could ignore the deleted db and it would fix itself in 2 weeks.)


This document describes how to recreate databases.

There is no critical information in those databases, just stats; I would not worry about them at all.


If you deleted all databases, then when you started the node they were re-created with no data inside. So there is nothing to repair; all databases are now empty. If you do not have a backup of these databases, the stats and the historical information are lost.
The node will collect new stats but will not recover the historical data. However, this doesn't affect the reputation of the node; it will still work and be paid for used storage and egress bandwidth.


… I did exactly this. The bandwidth.db file remains the same size. Is that normal?



You do not have to repair anything if you recreated the databases. They are empty. Dumping nothing and importing nothing will give the same result: an empty database.

The repair just dumps the readable data from the corrupted database and loads it into an empty one. A new database doesn't have any data; it's not corrupted, it's empty.
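In Python-stdlib terms, that dump-and-reload repair looks roughly like the sketch below (not the official procedure; it uses `Connection.iterdump` and skips the transaction wrapper exactly as the Select-String filter did, and on a badly corrupted file `iterdump` may still fail partway):

```python
import sqlite3

def dump_and_reload(corrupt_path, fresh_path):
    """Copy every readable SQL statement from a (possibly corrupted)
    database into a brand-new one, dropping the BEGIN/COMMIT/ROLLBACK
    wrapper just like the Select-String filter in the manual procedure."""
    src = sqlite3.connect(corrupt_path)
    dst = sqlite3.connect(fresh_path)
    try:
        for stmt in src.iterdump():  # yields schema + INSERTs as SQL text
            head = stmt.lstrip().upper()
            if head.startswith(("BEGIN", "COMMIT", "ROLLBACK")):
                continue
            dst.execute(stmt)
        dst.commit()
    finally:
        src.close()
        dst.close()
```

Run against a freshly re-created (empty) database, this copies nothing, which is why the file size stays the same.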

Hello Alexey,
The problem is that the node doesn't start, with these logs:

C:\Users\aka>docker logs --tail 50 storagenodeD1.6
2023-12-18T07:14:30Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2023-12-18T07:14:30Z ERROR pieces:trash emptying trash failed {“process”: “storagenode”, “error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash:176\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:416\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89”}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory
2023-12-18 07:14:31,707 INFO exited: storagenode (exit status 1; not expected)
2023-12-18 07:14:32,710 INFO spawned: ‘storagenode’ with pid 79
2023-12-18 07:14:32,710 WARN received SIGQUIT indicating exit request
2023-12-18 07:14:32,710 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2023-12-18T07:14:32Z INFO Got a signal from the OS: “terminated” {“Process”: “storagenode-updater”}
2023-12-18 07:14:32,711 INFO stopped: storagenode-updater (exit status 0)
2023-12-18T07:14:32Z INFO Anonymized tracing enabled {“process”: “storagenode”}
2023-12-18T07:14:32Z INFO Operator email {“process”: “storagenode”, “Address”: “7437493@gmail.com”}
2023-12-18T07:14:32Z INFO Operator wallet {“process”: “storagenode”, “Address”: “0x8675290882f594227d9b69d1fc434bf54b2b5e6f”}
2023-12-18T07:14:33Z INFO server kernel support for tcp fast open unknown {“process”: “storagenode”}
2023-12-18T07:14:33Z INFO Telemetry enabled {“process”: “storagenode”, “instance ID”: “12WKuzikpyAe7VYE3PKXAWs1pLF3Zk4WzR5g3hRuBCSBFZjqX6W”}
2023-12-18T07:14:33Z INFO Event collection enabled {“process”: “storagenode”, “instance ID”: “12WKuzikpyAe7VYE3PKXAWs1pLF3Zk4WzR5g3hRuBCSBFZjqX6W”}
2023-12-18T07:14:34Z INFO db.migration Database Version {“process”: “storagenode”, “version”: 54}
2023-12-18T07:14:34Z INFO preflight:localtime start checking local system clock with trusted satellites’ system clock. {“process”: “storagenode”}
2023-12-18T07:14:35Z INFO preflight:localtime local system clock is in sync with trusted satellites’ system clock. {“process”: “storagenode”}
2023-12-18 07:14:35,736 INFO waiting for storagenode, processes-exit-eventlistener to die
2023-12-18T07:14:35Z INFO bandwidth Performing bandwidth usage rollups {“process”: “storagenode”}
2023-12-18T07:14:35Z INFO Node 12WKuzikpyAe7VYE3PKXAWs1pLF3Zk4WzR5g3hRuBCSBFZjqX6W started {“process”: “storagenode”}
2023-12-18T07:14:35Z INFO Public server started on [::]:7777 {“process”: “storagenode”}
2023-12-18T07:14:35Z INFO Private server started on 127.0.0.1:7778 {“process”: “storagenode”}
2023-12-18T07:14:35Z INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See UDP Buffer Sizes · quic-go/quic-go Wiki · GitHub for details. {“process”: “storagenode”}
2023-12-18T07:14:35Z INFO trust Scheduling next refresh {“process”: “storagenode”, “after”: “7h13m3.762600133s”}2023-12-18T07:14:35Z INFO pieces:trash emptying trash started {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2023-12-18T07:14:35Z WARN piecestore:monitor Disk space is less than requested. Allocated space is {“process”: “storagenode”, “bytes”: 1042735902720}
2023-12-18T07:14:35Z ERROR services unexpected shutdown of a runner {“process”: “storagenode”, “name”: “piecestore:monitor”, “error”: “piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory”, “errorVerbose”: “piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:158\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:141\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-12-18T07:14:35Z ERROR nodestats:cache Get pricing-model/join date failed {“process”: “storagenode”, “error”: “context canceled”}
2023-12-18T07:14:35Z INFO lazyfilewalker.used-space-filewalker starting subprocess {“process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
2023-12-18T07:14:35Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “error”: “context canceled”}
2023-12-18T07:14:35Z ERROR pieces failed to lazywalk space used by satellite {“process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
2023-12-18T07:14:35Z INFO lazyfilewalker.used-space-filewalker starting subprocess {“process”: “storagenode”, “satelliteID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2023-12-18T07:14:35Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“process”: “storagenode”, “satelliteID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “error”: “context canceled”}
2023-12-18T07:14:35Z ERROR pieces failed to lazywalk space used by satellite {“process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2023-12-18T07:14:35Z INFO lazyfilewalker.used-space-filewalker starting subprocess {“process”: “storagenode”, “satelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2023-12-18T07:14:35Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“process”: “storagenode”, “satelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “error”: “context canceled”}
2023-12-18T07:14:35Z ERROR pieces failed to lazywalk space used by satellite {“process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2023-12-18T07:14:35Z ERROR piecestore:cache error getting current used space: {“process”: “storagenode”, “error”: “filewalker: context canceled; filewalker: context canceled; filewalker: context canceled”, “errorVerbose”: “group:\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-12-18T07:14:35Z ERROR version failed to get process version info {“process”: “storagenode”, “error”: “version checker client: Get "https://version.storj.io": context canceled”, “errorVerbose”: “version checker client: Get "https://version.storj.io": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:108\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:101\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:75\n\tstorj.io/storj/storagenode/version.(*Chore).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-12-18T07:14:35Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}
2023-12-18T07:14:35Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”}
2023-12-18T07:14:35Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}2023-12-18T07:14:35Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
2023-12-18T07:14:35Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}2023-12-18T07:14:35Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2023-12-18T07:14:35Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}2023-12-18T07:14:35Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2023-12-18T07:14:35Z ERROR pieces:trash emptying trash failed {“process”: “storagenode”, “error”: “pieces error: filestore error: context canceled”, “errorVerbose”: “pieces error: filestore error: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash:176\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:416\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89”}
2023-12-18T07:14:35Z ERROR gracefulexit:chore error retrieving satellites. {“process”: “storagenode”, “error”: “satellitesdb: context canceled”, “errorVerbose”: “satellitesdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits.func1:195\n\tstorj.io/storj/storagenode/storagenodedb.(*satellitesDB).ListGracefulExits:207\n\tstorj.io/storj/storagenode/gracefulexit.(*Service).ListPendingExits:59\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).AddMissing:58\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/gracefulexit.(*Chore).Run:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}”

What could it be?

Regards,
Alexander

So it seems you removed not only the databases but also the storage? If that's the case, you may uninstall the node/remove the container, delete all data and the identity, and start from scratch: generate (don't copy!) a new identity, sign it with a new authorization token, and start with clean storage.

If you didn't remove the storage, make sure that your --mount type=bind,source=F:\StorjD1.6,destination=/app/config option points to the correct data location. The blobs folder shouldn't be empty.

Please show the content of F:\StorjD1.6\

If the folder F:\StorjD1.6\storage\blobs is not empty and you do not have a folder F:\StorjD1.6\blobs, but the file F:\StorjD1.6\storage\storage-dir-verification is absent (it seems you removed it too?), you may re-create it by running the SETUP step. Be careful: you must remove the config.yaml file from F:\StorjD1.6\, provide the correct path to the storage location F:\StorjD1.6\ and to the identity folder of this node in your docker run -it --rm -e SETUP=true ...... command, and never execute it again.
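For reference, a SETUP invocation of this shape typically looks like the following sketch. The mount sources are placeholders, not your exact paths; double-check against the official documentation before running, and run it only once:

```shell
# Run ONCE to re-create storage-dir-verification; never run it again afterwards.
# <identity-dir> and <storage-dir> are placeholders for your actual paths.
docker run --rm -e SETUP="true" \
    --mount type=bind,source="<identity-dir>",destination=/app/identity \
    --mount type=bind,source="<storage-dir>",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```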


Thanks Alexey, that's clear.

