Piece space used error: no such table: piece_space_used

I still see this with 23.2; I can run with the alpha tag, but beta crashes. Ubuntu 19.04 under Hyper-V, direct-attached VHDX file.

What error do you get when the node crashes?

Docker log file:
{"log":"2019-10-11T02:24:57.974Z\u0009\u001b[34mINFO\u001b[0m\u0009Configuration loaded from: /app/config/config.yaml\n","stream":"stderr","time":"2019-10-11T02:24:57.975085431Z"}
{"log":"2019-10-11T02:24:58.021Z\u0009\u001b[34mINFO\u001b[0m\u0009Operator email: xxxxxxxxxxxxxxxxxxx\n","stream":"stderr","time":"2019-10-11T02:24:58.022357594Z"}
{"log":"2019-10-11T02:24:58.022Z\u0009\u001b[34mINFO\u001b[0m\u0009operator wallet: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n","stream":"stderr","time":"2019-10-11T02:24:58.022408296Z"}
{"log":"2019-10-11T02:24:58.324Z\u0009\u001b[34mINFO\u001b[0m\u0009version\u0009running on version v0.21.1\n","stream":"stderr","time":"2019-10-11T02:24:58.325130963Z"}
{"log":"2019-10-11T02:24:58.338Z\u0009\u001b[34mINFO\u001b[0m\u0009db.migration\u0009Database Version\u0009{"version": 21}\n","stream":"stderr","time":"2019-10-11T02:24:58.338373513Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[31mERROR\u001b[0m\u0009piecestore:cacheUpdate\u0009CacheServiceInit error during initializing space usage cache GetTotal:\u0009{"error": "piece space used error: no such table: piece_space_used", "errorVerbose": "piece space used error: no such table: piece_space_used\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).GetTotal:75\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Init:93\n\tmain.cmdRun:172\n\tstorj.io/storj/pkg/process.cleanup.func1.2:264\n\tstorj.io/storj/pkg/process.cleanup.func1:282\n\tgithub.com/spf13/cobra.(*Command).execute:762\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:852\n\tgithub.com/spf13/cobra.(*Command).Execute:800\n\tstorj.io/storj/pkg/process.Exec:73\n\tmain.main:296\n\truntime.main:203"}\n","stream":"stderr","time":"2019-10-11T02:24:58.346349244Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[31mERROR\u001b[0m\u0009Failed to initialize CacheService: piece space used error: no such table: piece_space_used\n","stream":"stderr","time":"2019-10-11T02:24:58.346394646Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[34mINFO\u001b[0m\u0009contact:chore\u0009Storagenode contact chore starting up\n","stream":"stderr","time":"2019-10-11T02:24:58.346407446Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[34mINFO\u001b[0m\u0009Node 127e94SNZi4EcHco61XX9tHoU18qH5ww76PwBBA69ffjii6xra9 started\n","stream":"stderr","time":"2019-10-11T02:24:58.346418647Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[34mINFO\u001b[0m\u0009Public server started on [::]:28967\n","stream":"stderr","time":"2019-10-11T02:24:58.346429347Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[34mINFO\u001b[0m\u0009Private server started on 127.0.0.1:7778\n","stream":"stderr","time":"2019-10-11T02:24:58.346439548Z"}
{"log":"2019-10-11T02:24:58.344Z\u0009\u001b[34mINFO\u001b[0m\u0009bandwidth\u0009Performing bandwidth usage rollups\n","stream":"stderr","time":"2019-10-11T02:24:58.346450148Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009version\u0009Failed to do periodic version check: Get https://version.alpha.storj.io: context canceled\n","stream":"stderr","time":"2019-10-11T02:24:58.346460749Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009bandwidth\u0009Could not rollup bandwidth usage\u0009{"error": "bandwidthdb error: no such table: bandwidth_usage_rollups", "errorVerbose": "bandwidthdb error: no such table: bandwidth_usage_rollups\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:259\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/storagenode.(*Peer).Run.func9:446\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:58.346471849Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009piecestore:cacheUpdate\u0009error getting current space used calculation: \u0009{"error": "context canceled"}\n","stream":"stderr","time":"2019-10-11T02:24:58.34648815Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009collector\u0009error during collecting pieces: \u0009{"error": "piece expiration error: context canceled", "errorVerbose": "piece expiration error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:44\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:316\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:85\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:52\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run:51\n\tstorj.io/storj/storagenode.(*Peer).Run.func4:430\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:58.34649985Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009piecestore:cacheUpdate\u0009error during init space usage db: \u0009{"error": "piece space used error: no such table: piece_space_used", "errorVerbose": "piece space used error: no such table: piece_space_used\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:49\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:59\n\tstorj.io/storj/storagenode.(*Peer).Run.func7:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:58.346539152Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009piecestore:cacheUpdate\u0009error persisting cache totals to the database: \u0009{"error": "piece space used error: context canceled", "errorVerbose": "piece space used error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdateTotal:121\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:82\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:68\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:63\n\tstorj.io/storj/storagenode.(*Peer).Run.func7:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:58.346554653Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009orders\u0009cleaning archive\u0009{"error": "ordersdb error: no such table: order_archive_", "errorVerbose": "ordersdb error: no such table: order_archive_\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).CleanArchive:326\n\tstorj.io/storj/storagenode/orders.(*Service).cleanArchive:137\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/internal/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:58.346570853Z"}
{"log":"2019-10-11T02:24:58.345Z\u0009\u001b[31mERROR\u001b[0m\u0009orders\u0009listing orders\u0009{"error": "ordersdb error: no such table: unsent_order", "errorVerbose": "ordersdb error: no such table: unsent_order\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).ListUnsentBySatellite:140\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders:153\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/internal/sync2.(*Cycle).Start.func1:68\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:58.346585154Z"}
{"log":"2019-10-11T02:24:59.152Z\u0009\u001b[34mINFO\u001b[0m\u0009Got a signal from the OS: "terminated"\n","stream":"stderr","time":"2019-10-11T02:24:59.153052034Z"}
{"log":"2019-10-11T02:24:59.364Z\u0009\u001b[31mFATAL\u001b[0m\u0009Unrecoverable error\u0009{"error": "bandwidthdb error: no such table: bandwidth_usage_rollups", "errorVerbose": "bandwidthdb error: no such table: bandwidth_usage_rollups\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Summary:112\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).MonthSummary:79\n\tstorj.io/storj/storagenode/monitor.(*Service).usedBandwidth:174\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:83\n\tstorj.io/storj/storagenode.(*Peer).Run.func6:436\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}\n","stream":"stderr","time":"2019-10-11T02:24:59.364885529Z"}

Did you use the beta tag when you pulled the image and ran the container?
Have you restored the databases from the backup?

I can reproduce this error at will with the following process:
1: docker stop node
2: docker rm node
3: Run this .sh with alpha changed to beta
#!/bin/bash

docker run -d --restart unless-stopped -p 28967:28967 -p 14002:14002 \
  -e WALLET="xxxxxxxxxxxxxxxxxxxx" \
  -e EMAIL="xxxxxxxxxxxxxxxxx" \
  -e ADDRESS="xxxxxxxxxxxxxxxxxx" \
  -e BANDWIDTH="20TB" \
  -e STORAGE="550GB" \
  --mount type=bind,source="/home/gregsachs/.local/share/storj/identity/storagenode",destination=/app/identity \
  --mount type=bind,source="/mnt/5222779d-0126-4c43-a34e-d95f0fc904b4",destination=/app/config \
  --name storagenode_test storjlabs/storagenode:alpha
Repeating the process and changing back to alpha returns it to a functional state.
Is there some other action I need to be taking to move from alpha to beta?

Yes, change that part of your run command (the :alpha tag to :beta).

But before you do, pull the beta image to make sure you have the latest version:

docker pull storjlabs/storagenode:beta

Then follow the steps you mentioned with the corrected run command.
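
For reference, the whole switch in one go would look roughly like this (a sketch reusing the container name, placeholders, and mount paths from the script above; adjust them to your own setup):

# Stop and remove the old container (data and identity stay on the host mounts).
docker stop storagenode_test
docker rm storagenode_test

# Pull first so the run command actually starts the latest beta image.
docker pull storjlabs/storagenode:beta

# Re-run with the same mounts and environment, only the tag changes to :beta.
docker run -d --restart unless-stopped -p 28967:28967 -p 14002:14002 \
  -e WALLET="xxxxxxxxxxxxxxxxxxxx" \
  -e EMAIL="xxxxxxxxxxxxxxxxx" \
  -e ADDRESS="xxxxxxxxxxxxxxxxxx" \
  -e BANDWIDTH="20TB" \
  -e STORAGE="550GB" \
  --mount type=bind,source="/home/gregsachs/.local/share/storj/identity/storagenode",destination=/app/identity \
  --mount type=bind,source="/mnt/5222779d-0126-4c43-a34e-d95f0fc904b4",destination=/app/config \
  --name storagenode_test storjlabs/storagenode:beta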

That fixed it. I wasn't doing the pull, only changing alpha to beta in the run command.

Hi,

I'm currently running v1.1.1 (on Docker) and am seeing the following.

Watchtower didn't update me to v1.3.3.3; did something change?

I read that we are no longer using the :beta tag, is that right? I didn't find this in the documentation.

And if I try to upgrade the node manually to a non-beta tag and start it, I get the following error:

2020-04-27T12:44:12.219Z ERROR bandwidth bandwidth/service.go:55 Could not rollup bandwidth usage {"error": "bandwidthdb error: no such table: bandwidth_usage_rollups", "errorVerbose": "bandwidthdb error: no such table: bandwidth_usage_rollups\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Rollup:259\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Rollup:53\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/bandwidth.(*Service).Run:45\n\tstorj.io/storj/storagenode.(*Peer).Run.func9:446\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/storagenode/bandwidth.(*Service).Rollup
/go/src/storj.io/storj/storagenode/bandwidth/service.go:55
storj.io/storj/internal/sync2.(*Cycle).Run
/go/src/storj.io/storj/internal/sync2/cycle.go:87
storj.io/storj/storagenode/bandwidth.(*Service).Run
/go/src/storj.io/storj/storagenode/bandwidth/service.go:45
storj.io/storj/storagenode.(*Peer).Run.func9
/go/src/storj.io/storj/storagenode/peer.go:446
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-04-27T12:44:12.241Z ERROR piecestore:cacheUpdate pieces/cache.go:49 error getting current space used calculation: {"error": "context canceled"}
storj.io/storj/storagenode/pieces.(*CacheService).Run
/go/src/storj.io/storj/storagenode/pieces/cache.go:49
storj.io/storj/storagenode.(*Peer).Run.func7
/go/src/storj.io/storj/storagenode/peer.go:439
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-04-27T12:44:12.242Z ERROR piecestore:cacheUpdate pieces/cache.go:60 error during init space usage db: {"error": "piece space used error: no such table: piece_space_used", "errorVerbose": "piece space used error: no such table: piece_space_used\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:49\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:59\n\tstorj.io/storj/storagenode.(*Peer).Run.func7:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/storagenode/pieces.(*CacheService).Run
/go/src/storj.io/storj/storagenode/pieces/cache.go:60
storj.io/storj/storagenode.(*Peer).Run.func7
/go/src/storj.io/storj/storagenode/peer.go:439
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-04-27T12:44:12.243Z ERROR piecestore:cacheUpdate pieces/cache.go:69 error persisting cache totals to the database: {"error": "piece space used error: context canceled", "errorVerbose": "piece space used error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdateTotal:121\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:82\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:68\n\tstorj.io/storj/internal/sync2.(*Cycle).Run:87\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:63\n\tstorj.io/storj/storagenode.(*Peer).Run.func7:439\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/storagenode/pieces.(*CacheService).Run.func1
/go/src/storj.io/storj/storagenode/pieces/cache.go:69
storj.io/storj/internal/sync2.(*Cycle).Run
/go/src/storj.io/storj/internal/sync2/cycle.go:87
storj.io/storj/storagenode/pieces.(*CacheService).Run
/go/src/storj.io/storj/storagenode/pieces/cache.go:63
storj.io/storj/storagenode.(*Peer).Run.func7
/go/src/storj.io/storj/storagenode/peer.go:439
golang.org/x/sync/errgroup.(*Group).Go.func1
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57
2020-04-27T12:44:12.251Z FATAL process/exec_conf.go:288 Unrecoverable error {"error": "bandwidthdb error: no such table: bandwidth_usage_rollups", "errorVerbose": "bandwidthdb error: no such table: bandwidth_usage_rollups\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Summary:112\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).MonthSummary:79\n\tstorj.io/storj/storagenode/monitor.(*Service).usedBandwidth:174\n\tstorj.io/storj/storagenode/monitor.(*Service).Run:83\n\tstorj.io/storj/storagenode.(*Peer).Run.func6:436\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
storj.io/storj/pkg/process.cleanup.func1
/go/src/storj.io/storj/pkg/process/exec_conf.go:288
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:852
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.3/command.go:800
storj.io/storj/pkg/process.Exec
/go/src/storj.io/storj/pkg/process/exec_conf.go:73
main.main
/go/src/storj.io/storj/cmd/storagenode/main.go:296
runtime.main
/usr/local/go/src/runtime/proc.go:203

What would be the best way to upgrade?

Wait until watchtower updates your storage node. We push the Docker images last because, for some reason, everyone wants to update within the first 5 minutes, which is a risk for the network.
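
If you want to confirm which version the node is actually running while you wait, it prints the version at startup, so grepping the container logs should be enough (a sketch assuming your container is named storagenode; the node logs to stderr, hence the redirect):

# Print the startup line that reports the running version.
docker logs storagenode 2>&1 | grep -i "running on version"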

Are we going to stop using the :beta tag?

If so, will watchtower still update the node?

No. We even still support the alpha tag.
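
Watchtower keeps updating the node as long as the watchtower container itself is running and pointed at your storagenode container. The setup documented around that time looked roughly like this (a sketch; the trailing container names are the ones to watch and are only examples, so check the official documentation for the exact command):

# Run watchtower with access to the Docker socket so it can restart containers.
docker run -d --restart=always --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  storjlabs/watchtower storagenode watchtower --stop-timeout 300s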


Could you show where you read that information?

For example, this post

or this Reddit post

Trust only the official documentation. If and when there is any change in tags or parameters, the documentation will show you the correct way.
