Database "bandwidth": expected schema does not match actual &dbschema.Schema

Hello there,
I have a node on Linux that was shut down without properly stopping the service, resulting in a malformed bandwidth.db.

I’ve followed these instructions (the general approach is sketched below):
- https://support.storj.io/hc/en-us/articles/360029309111-How-to-fix-a-database-disk-image-is-malformed
- Used_serial.db malformed - #4 by Alexey
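
Those guides essentially come down to dumping whatever SQLite can still read from the damaged file and rebuilding it from that dump. Roughly along these lines (a sketch of the approach, not my exact commands; the node is stopped first, and the paths are from my setup):

cd /mnt/storj/storagenode/storage/
sqlite3 bandwidth.db "PRAGMA integrity_check;"        # confirm it really is malformed
sqlite3 bandwidth.db ".dump" > dump_all.sql           # dump whatever is still readable
grep -v -e TRANSACTION -e ROLLBACK -e COMMIT dump_all.sql > dump_all_notrans.sql
mv bandwidth.db bandwidth.db.bak                      # keep the broken file as a backup
sqlite3 bandwidth.db ".read dump_all_notrans.sql"     # rebuild a fresh db from the dump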

I tried to run the node again, and it exited with the following error(s):

Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
  	Tables: []*dbschema.Table{
  		&{
- 			Name: "bandwidth_usage",
+ 			Name: "bandwidth_esage_rollups",
  			Columns: []*dbschema.Column{
  				&{Name: "action", Type: "INTEGER"},
  				&{Name: "amount", Type: "BIGINT"},
  				&{
- 					Name:       "created_at",
+ 					Name:       "interval_start",
  					Type:       "TIMESTAMP",
  					IsNullable: false,
  					... // 2 identical fields
  				},
  				&{Name: "satellite_id", Type: "BLOB"},
  			},
- 			PrimaryKey: nil,
+ 			PrimaryKey: []string{"action", "interval_start", "satellite_id"},
  			Unique:     nil,
  			Checks:     nil,
  		},
  		&{
- 			Name: "bandwidth_usage_rollups",
+ 			Name: "bandwidth_usage",
  			Columns: []*dbschema.Column{
  				&{Name: "action", Type: "INTEGER"},
  				&{Name: "amount", Type: "BIGINT"},
  				&{
- 					Name:       "interval_start",
+ 					Name:       "created_at",
  					Type:       "TIMESTAMP",
  					IsNullable: false,
  					... // 2 identical fields
  				},
  				&{Name: "satellite_id", Type: "BLOB"},
  			},
- 			PrimaryKey: []string{"action", "interval_start", "satellite_id"},
+ 			PrimaryKey: nil,
  			Unique:     nil,
  			Checks:     nil,
  		},
  	},
  	Indexes:   {&{Name: "idx_bandwidth_usage_created", Table: "bandwidth_usage", Columns: {"created_at"}}, &{Name: "idx_bandwidth_usage_satellite", Table: "bandwidth_usage", Columns: {"satellite_id"}}},
  	Sequences: nil,
  }

	storj.io/storj/storagenode/storagenodedb.(*DB).preflight:429
	storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:376
	main.cmdRun:108
	main.newRunCmd.func1:32
	storj.io/private/process.cleanup.func1.4:393
	storj.io/private/process.cleanup.func1:411
	github.com/spf13/cobra.(*Command).execute:852
	github.com/spf13/cobra.(*Command).ExecuteC:960
	github.com/spf13/cobra.(*Command).Execute:897
	storj.io/private/process.ExecWithCustomOptions:112
	main.main:30
	runtime.main:267
2024-02-22 20:06:30,329 INFO stopped: storagenode (exit status 1)
2024-02-22 20:06:30,329 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

I then tried https://support.storj.io/hc/en-us/articles/4403032417044-How-to-fix-database-file-is-not-a-database-error, which was suggested in this post: Error: Error during preflight check for storagenode databases: preflight: database "notifications": expected schema does not match actual: - #2 by Alexey

find /mnt/storj/storagenode/storage/ -maxdepth 1 -iname "*.db" -print0 -exec sqlite3 '{}' 'PRAGMA integrity_check;' ';'

The output says that every db is "ok".

But I’m still getting the same error. At this point I have no idea what else to try. Any help, please?

If it’s preventing the node from running… the database files are just for user-level reporting and have nothing to do with general operations/payouts. That means you can delete them and let them be recreated fresh; the only downside is you’ll have inaccurate reports for Feb (and they’ll restart next month).

So, can you live with inaccurate graphs for a week? If so, just delete and move on.
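
Roughly something like this (just a sketch; I’m assuming a docker setup with the usual “storagenode” container name and the storage path you posted, so adjust to match yours; moving the files aside is also safer than deleting them outright):

docker stop -t 300 storagenode                # assumed container name
cd /mnt/storj/storagenode/storage/
mkdir -p db-backup && mv *.db db-backup/      # set the old databases aside
docker start storagenode                      # the node recreates missing databases on start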

Deleting bandwidth.db and letting the node create a fresh one just gives another error:

2024-02-22 21:10:54,108 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Error: Error during preflight check for storagenode databases: preflight: database "bandwidth": expected schema does not match actual:   &dbschema.Schema{
- 	Tables: []*dbschema.Table{
- 		(
- 			s"""
- 			Name: bandwidth_usage
- 			Columns:
- 				Name: action
- 				Type: INTEGER
- 				Nullable: false
- 				Default: ""
- 				Reference: nil
- 				Name: amount
- 				Type: BIGINT
- 				Nullable: false
- 				Default: ""
- 				Reference: nil
- 				Name: created_at
- 				Type: TIMESTAMP
- 				Nullable: false
- 			... // 12 elided lines
- 			s"""
- 		),
- 		(
- 			s"""
- 			Name: bandwidth_usage_rollups
- 			Columns:
- 				Name: action
- 				Type: INTEGER
- 				Nullable: false
- 				Default: ""
- 				Reference: nil
- 				Name: amount
- 				Type: BIGINT
- 				Nullable: false
- 				Default: ""
- 				Reference: nil
- 				Name: interval_start
- 				Type: TIMESTAMP
- 				Nullable: false
- 			... // 12 elided lines
- 			s"""
- 		),
- 	},
+ 	Tables: nil,
- 	Indexes: []*dbschema.Index{
- 		s`Index<Table: bandwidth_usage, Name: idx_bandwidth_usage_created, Columns: created_at, Unique: false, Partial: "">`,
- 		s`Index<Table: bandwidth_usage, Name: idx_bandwidth_usage_satellite, Columns: satellite_id, Unique: false, Partial: "">`,
- 	},
+ 	Indexes:   nil,
  	Sequences: nil,
  }

	storj.io/storj/storagenode/storagenodedb.(*DB).preflight:429
	storj.io/storj/storagenode/storagenodedb.(*DB).Preflight:376
	main.cmdRun:108
	main.newRunCmd.func1:32
	storj.io/private/process.cleanup.func1.4:393
	storj.io/private/process.cleanup.func1:411
	github.com/spf13/cobra.(*Command).execute:852
	github.com/spf13/cobra.(*Command).ExecuteC:960
	github.com/spf13/cobra.(*Command).Execute:897
	storj.io/private/process.ExecWithCustomOptions:112
	main.main:30
	runtime.main:267
2024-02-22 21:10:56,184 INFO exited: storagenode (exit status 1; not expected)
2024-02-22 21:10:57,188 INFO spawned: 'storagenode' with pid 38
2024-02-22 21:10:57,188 WARN received SIGQUIT indicating exit request
2024-02-22 21:10:57,191 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-02-22T21:10:57Z	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode-updater"}
2024-02-22 21:10:57,192 INFO stopped: storagenode-updater (exit status 0)
2024-02-22 21:10:58,195 INFO stopped: storagenode (terminated by SIGTERM)
2024-02-22 21:10:58,195 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

You probably need to remove them all, as the guide suggests here…


I’m not sure I understand, since every .db appears as "ok"?

But anyway, I just did it and here’s the log (the node keeps exiting):

2024-02-22T22:43:40Z	INFO	Version is up to date	{"Process": "storagenode-updater", "Service": "storagenode-updater"}
2024-02-22 22:43:41,480 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-22 22:43:41,480 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-22 22:43:41,481 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-22T22:44:35Z	ERROR	piecestore	upload failed	{"process": "storagenode", "Piece ID": "VXKF7YFOMHNA6P4ZYHIWDJJ5VW7UKYU4TF33EQOUQ5TTTSJMFAMQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "manager closed: unexpected EOF", "errorVerbose": "manager closed: unexpected EOF\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:225\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:96\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:226", "Size": 1114112, "Remote Address": "79.127.219.33:35998"}
2024-02-22 22:44:42,307 WARN received SIGTERM indicating exit request
2024-02-22 22:44:42,309 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-02-22T22:44:42Z	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode-updater"}
2024-02-22 22:44:42,317 INFO stopped: storagenode-updater (exit status 0)
2024-02-22T22:44:42Z	ERROR	pieces:trash	emptying trash failed	{"process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash:176\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:416\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-02-22T22:44:42Z	ERROR	pieces	failed to lazywalk space used by satellite	{"process": "storagenode", "error": "lazyfilewalker: signal: killed", "errorVerbose": "lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:83\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-02-22T22:44:42Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
2024-02-22T22:44:42Z	ERROR	pieces	failed to lazywalk space used by satellite	{"process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-02-22T22:44:42Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
2024-02-22T22:44:42Z	ERROR	pieces	failed to lazywalk space used by satellite	{"process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-02-22T22:44:42Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "context canceled"}
2024-02-22T22:44:42Z	ERROR	pieces	failed to lazywalk space used by satellite	{"process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-02-22T22:44:42Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"process": "storagenode", "satelliteID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "context canceled"}
2024-02-22T22:44:42Z	ERROR	pieces	failed to lazywalk space used by satellite	{"process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2024-02-22T22:44:42Z	ERROR	lazyfilewalker.used-space-filewalker	failed to start subprocess	{"process": "storagenode", "satelliteID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "error": "context canceled"}
2024-02-22T22:44:42Z	ERROR	pieces	failed to lazywalk space used by satellite	{"process": "storagenode", "error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:717\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2024-02-22T22:44:42Z	ERROR	piecestore:cache	error getting current used space: 	{"process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:726\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:57\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2024-02-22 22:44:45,373 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-02-22 22:44:48,377 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-02-22 22:44:51,381 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-02-22 22:44:52,383 WARN killing 'storagenode' (51) with SIGKILL
2024-02-22 22:44:52,455 INFO stopped: storagenode (terminated by SIGKILL)
2024-02-22 22:44:52,456 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

That “received SIGTERM” entry looks suspicious to me: it looks like the node was told to shut down (and the rest of the log entries are just it waiting for some subprocesses to also exit before it killed itself).

I don’t know what could send an OS-level signal to exit (if you didn’t stop it) - maybe the update utility? It looked like everyone was pushed to 1.96.6 in the last day or so. Are there any log entries earlier that sound like it thinks you’re out of disk space or something?

So I guess I don’t know what to check next. A fsck of the filesystem? Remove/recreate the container?

(if you didn’t stop it)

Nope, I didn’t.
fsck is still running for now, though not on the node’s filesystem; I’ll do that right after the node’s HDD check is done.

Are there any log entries earlier that sound like it thinks you’re out of disk space or something?

Nope, at least none that I can see/read.

I will post an update once the fsck checks are done.

Hooray!

In case anyone gets to this thread in the future: fsck seems to have fixed the issue.

sudo fsck -y /dev/sdb2


fsck from util-linux 2.37.2
e2fsck 1.46.5 (30-Dec-2021)
/dev/sdb2 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Directory inode 25231374, block #1771: directory passes checks but fails checksum.
Fix? yes

Problem in HTREE directory inode 25231374: block #2070 has bad min hash
Invalid HTREE directory inode 25231374 (/storagenode/storage/temp).  Clear HTree index? yes

Pass 3: Checking directory connectivity
Pass 3A: Optimizing directories
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/sdb2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb2: 42842374/244191232 files (0.5% non-contiguous), 1683125923/1953502208 blocks

I’m getting many "upload/download failed" errors, but the node keeps running for now. I’ll monitor this and come back if needed.

Thanks for your time and answers, @Roxor!


Hello @21grammes,
Welcome back!

You need to run fsck at least one more time (it could find more errors); keep running it until it stops modifying the file system.
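
For example, something like this (just a sketch; it assumes the partition is unmounted and uses the /dev/sdb2 device from your output):

# re-run until it exits with status 0, i.e. nothing left to fix;
# stop it manually if the same errors keep coming back
until sudo e2fsck -fy /dev/sdb2; do
    echo "errors were found and fixed, checking again..."
done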

Regarding databases: yes, to recreate one database you need to move them all out while the node is stopped, then run the node to let it recreate all databases, stop it again, and move the databases back except the ones you want to recreate.
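
In shell terms it is roughly this (a sketch; it assumes a docker node named “storagenode”, the storage path from your earlier post, and that bandwidth.db is the one you want recreated):

docker stop -t 300 storagenode
cd /mnt/storj/storagenode/storage/
mkdir -p db-saved && mv *.db db-saved/      # move ALL databases out of the way
docker start storagenode                    # the node recreates a full fresh set
docker stop -t 300 storagenode
rm db-saved/bandwidth.db                    # drop only the one you want to keep freshly created
mv -f db-saved/*.db .                       # put the healthy originals back over the new empty ones
docker start storagenode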


Ohh alright, I get it now!

and move the databases back except the ones you want to recreate.

That’s the part that I didn’t really get earlier.
Well, the node seems to be running pretty OK for now, so I’m fine with having no graphs/stats for some amount of time (if that’s indeed just a user-level reporting thing, as Roxor pointed out earlier). It’s whatever, as long as the node is healthy!

Noted for the fsck, thanks for the tip!
