I’ve been running a node on Linux since October and have had no problems until today. The dashboard wouldn’t load, so I had a poke at Docker. It appears that when the container tries to start after an update, tables are missing from the database.
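For context, the log further down came straight from the container output, along these lines (assuming the default container name of storagenode; adjust to suit):

# Pull the recent output from the storage node container
docker logs --tail 100 storagenode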
I find the prospect of this being bit-rot vanishingly unlikely: the storage location is my fileserver, where everything is redundant and regularly scrubbed, and all the data is local (no NFS etc.).
An integrity check of all the SQLite databases reports them as OK, and it seems unlikely that random corruption would produce a database that is valid but simply missing tables.
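For reference, the check was along these lines (assuming the *.db files sit in the node’s storage directory; the path below is just a placeholder):

# Run SQLite's built-in integrity check against each node database
for db in /path/to/storagenode/storage/*.db; do
  echo "== $db =="
  sqlite3 "$db" "PRAGMA integrity_check;"
done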
I’ve pasted the log from the restart below. Anyone got any ideas? I’m reluctant to ditch the whole thing and set up a new node without some assurance that things won’t go south again in four months’ time (my node was getting enough traffic to cover the cost of my 1000/1000 internet connection here in the sunny UK).
Many thanks
2020-03-31T20:43:13.712Z INFO storagenode/peer.go:452 Node <NODE_ID> started
2020-03-31T20:43:13.712Z INFO storagenode/peer.go:453 Public server started on [::]:28967
2020-03-31T20:43:13.712Z INFO storagenode/peer.go:454 Private server started on 127.0.0.1:7778
2020-03-31T20:43:13.713Z ERROR piecestore:cacheUpdate pieces/cache.go:49 error getting current space used calculation: {"error": "context canceled"}
storj.io/storj/storagenode/pieces.(*CacheService).Run
/go/src/storj.io/storj/storagenode/pieces/cache.go:49
storj.io/storj/storagenode.(*Peer).Run.func7
/go/src/storj.io/storj/storagenode/peer.go:439
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-03-31T20:43:13.713Z ERROR piecestore:cacheUpdate pieces/cache.go:60 error during init space usage db: {"error": "piece space used error: no such table: piece_space_used", "errorVerbose": "piece space used error: no such table: piece_space_used\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:49\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:59\n\tstorj.io/storj/storagenode.(*Peer).Run.func7:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/storagenode/pieces.(*CacheService).Run
/go/src/storj.io/storj/storagenode/pieces/cache.go:60
storj.io/storj/storagenode.(*Peer).Run.func7
/go/src/storj.io/storj/storagenode/peer.go:439
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-03-31T20:43:13.713Z ERROR piecestore:cacheUpdate pieces/cache.go:69 error persisting cache totals to the database: {"error": "piece space used error: context canceled", "errorVerbose": "piece space used error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdateTotal:121\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:82\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:68\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:63\n\tstorj.io/storj/storagenode.(*Peer).Run.func7:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/storagenode/pieces.(*CacheService).Run.func1
/go/src/storj.io/storj/storagenode/pieces/cache.go:69
storj.io/storj/internal/sync2.(*Cycle).Run
/go/src/storj.io/storj/internal/sync2/cycle.go:87
storj.io/storj/storagenode/pieces.(*CacheService).Run
/go/src/storj.io/storj/storagenode/pieces/cache.go:63
storj.io/storj/storagenode.(*Peer).Run.func7
/go/src/storj.io/storj/storagenode/peer.go:439
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-03-31T20:43:13.713Z ERROR orders orders/service.go:156 listing orders {"error": "ordersdb error: no such table: unsent_order", "errorVerbose": "ordersdb error: no such table: unsent_order\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).ListUnsentBySatellite:140\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders:153\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/internal/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/storagenode/orders.(*Service).sendOrders
/go/src/storj.io/storj/storagenode/orders/service.go:156
storj.io/storj/internal/sync2.(*Cycle).Run
/go/src/storj.io/storj/internal/sync2/cycle.go:87
storj.io/storj/internal/sync2.(*Cycle).Start.func1
/go/src/storj.io/storj/internal/sync2/cycle.go:68
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-03-31T20:43:13.715Z FATAL process/exec_conf.go:288 Unrecoverable error {"error": "bandwidthdb error: no such table: bandwidth_usage_rollups", "errorVerbose": "bandwidthdb error: no such table: bandwidth_usage_rollups\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Summary:112\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).MonthSummary:79\n\tstorj.io/storj/storagenode/monitor.(*Service).usedBandwidth:174\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:83\n\tstorj.io/storj/storagenode.(*Peer).Run.func6:436\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/pkg/process.cleanup.func1
/go/src/storj.io/storj/pkg/process/exec_conf.go:288
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:852
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:800
storj.io/storj/pkg/process.Exec
/go/src/storj.io/storj/pkg/process/exec_conf.go:73
main.main
/go/src/storj.io/storj/cmd/storagenode/main.go:296
runtime.main
/usr/local/go/src/runtime/proc.go:203