DEBUG Fatal error: infodb: no such table: bandwidth_usage_rollups

I am also noticing this error; I have this in my logs:

2019-07-19T16:28:17.634800016Z 2019-07-19T16:28:17.634Z DEBUG kademlia:endpoint Successfully connected with vps.reyescolimited.com:28967
2019-07-19T16:28:23.895338065Z 2019-07-19T16:28:23.893Z DEBUG Fatal error: infodb: no such table: bandwidth_usage_rollups
2019-07-19T16:28:23.895659581Z storj.io/storj/storagenode/storagenodedb.(*bandwidthdb).Summary:119
2019-07-19T16:28:23.895670907Z storj.io/storj/storagenode/storagenodedb.(*bandwidthdb).MonthSummary:86
2019-07-19T16:28:23.895699048Z storj.io/storj/storagenode/monitor.(*Service).usedBandwidth:174
2019-07-19T16:28:23.895709224Z storj.io/storj/storagenode/monitor.(*Service).Run:83
2019-07-19T16:28:23.895717763Z storj.io/storj/storagenode.(*Peer).Run.func6:351
2019-07-19T16:28:40.437019302Z 2019-07-19T16:28:40.435Z INFO Configuration loaded from: /app/config/config.yaml
2019-07-19T16:28:40.452321214Z 2019-07-19T16:28:40.452Z INFO Operator email: arcavious@hotmail.com
2019-07-19T16:28:40.461935741Z 2019-07-19T16:28:40.454Z INFO Operator wallet: 0x357bdf2a8b28eda9374cc3dbb6cc29310eab6c10
2019-07-19T16:28:40.462021919Z 2019-07-19T16:28:40.460Z DEBUG Binary Version: v0.15.3 with CommitHash 6217d9690fd7363d1ae35eefc12135ed6286a89f, built at 2019-07-17 18:42:01 +0000 UTC as Release true
2019-07-19T16:28:40.462032471Z 2019-07-19T16:28:40.461Z DEBUG debug server listening on 127.0.0.1:43223
2019-07-19T16:28:40.694260656Z 2019-07-19T16:28:40.694Z DEBUG allowed minimum version from control server is: v0.15.1
2019-07-19T16:28:40.694396748Z 2019-07-19T16:28:40.694Z INFO running on version v0.15.3
2019-07-19T16:28:40.695197877Z 2019-07-19T16:28:40.695Z DEBUG Initialized telemetry batcher with id = "1q3nvyXBt2xvj2C35uc4FnKcew9KYkn9XK578QxBm9yxb2rLtW"
2019-07-19T16:28:40.699715248Z 2019-07-19T16:28:40.699Z INFO db.migration Latest Version {"version": 13}
2019-07-19T16:28:40.700692168Z 2019-07-19T16:28:40.700Z DEBUG piecestore:orderssender sending
2019-07-19T16:28:40.701329335Z 2019-07-19T16:28:40.701Z INFO piecestore:orderssender.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {"count": 4}
2019-07-19T16:28:40.701979122Z 2019-07-19T16:28:40.701Z INFO vouchers Checking vouchers
2019-07-19T16:28:40.702435354Z 2019-07-19T16:28:40.702Z INFO vouchers Requesting voucher {"satellite": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2019-07-19T16:28:40.703003046Z 2019-07-19T16:28:40.702Z INFO Node 1q3nvyXBt2xvj2C35uc4FnKcew9KYkn9XK578QxBm9yxb2rLtW started
2019-07-19T16:28:40.703093495Z 2019-07-19T16:28:40.703Z INFO Public server started on [::]:28967
2019-07-19T16:28:40.703177517Z 2019-07-19T16:28:40.703Z INFO Private server started on 127.0.0.1:7778
2019-07-19T16:28:40.703647006Z 2019-07-19T16:28:40.703Z INFO piecestore:orderssender.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW sending {"count": 5}
2019-07-19T16:28:40.703958197Z 2019-07-19T16:28:40.703Z INFO piecestore:orderssender.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {"count": 11}
2019-07-19T16:28:40.704780622Z 2019-07-19T16:28:40.704Z INFO vouchers Requesting voucher {"satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2019-07-19T16:28:40.705096976Z 2019-07-19T16:28:40.705Z INFO vouchers Requesting voucher {"satellite": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW"}
2019-07-19T16:28:40.705437975Z 2019-07-19T16:28:40.705Z INFO vouchers Requesting voucher {"satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2019-07-19T16:28:40.708845110Z 2019-07-19T16:28:40.708Z INFO piecestore:monitor Remaining Bandwidth {"bytes": 9728788448256}
2019-07-19T16:28:40.739228398Z 2019-07-19T16:28:40.739Z DEBUG allowed minimum version from control server is: v0.15.1
2019-07-19T16:28:40.739355533Z 2019-07-19T16:28:40.739Z INFO running on version v0.15.3
2019-07-19T16:28:43.357471437Z 2019-07-19T16:28:43.357Z DEBUG kademlia:endpoint Successfully connected with vps.reyescolimited.com:28967
2019-07-19T16:28:44.949484718Z 2019-07-19T16:28:44.949Z INFO piecestore upload started {"Piece ID": "SVK7JMO73LZKOJD2HNXXIID7WDMQU7ACLIMHQC4RWJAFRJ33FRLQ", "SatelliteID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "PUT"}
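
For anyone hitting the same message, a quick way to confirm whether the table really is missing is to open info.db with the sqlite3 command line tool while the node is stopped. This is only a read-only sanity check; the container name and the database path below are the ones from the default docker setup (info.db sits under the directory mounted at /app/config), so adjust them to your own layout.

# stop the container first so nothing is writing to the database
docker stop -t 300 storagenode

# list the tables that actually exist in info.db
sqlite3 /path/to/storage/info.db ".tables"

# or check for the specific table the error complains about
sqlite3 /path/to/storage/info.db "SELECT name FROM sqlite_master WHERE type='table' AND name='bandwidth_usage_rollups';"

If the second query returns nothing while the log still reports migration version 13, the table is genuinely gone rather than just locked or unreadable.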

Hello @xyphos10,
Welcome to the forum!

Does your node constantly restart?

Yes, it does; right now I just have it off. I tried restoring an earlier, known-good version of info.db. It works for a while, but the error comes back later.

Restoring an earlier version is a bad idea; please do not do this unless official support tells you to.

@Alexey Line 2 of the log indicates that an expected table is missing, which does not sound good; this needs a dev! Maybe it can be fixed with a DB upgrade, but I have no idea how this could happen.

The missing table is new as of version 0.15.x; perhaps the db migration didn't fully complete, or it happened when the mounts weren't set up correctly on a docker volume.
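
If someone wants to rule out the mount theory, the bindings the container was actually created with can be checked with docker inspect. This assumes the container is named storagenode, as in the standard run command; the output should show your identity and storage directories bound to /app/identity and /app/config.

# print the bind mounts of the storagenode container as JSON
docker inspect -f '{{ json .Mounts }}' storagenode

If the storage directory is missing or points at the wrong host path, the node would have started against an empty or stale database, and the migration state would not match the data.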

How could a node start without a correct db migration? Like you, I think the DB migration failed but the schema version still got incremented.
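
One way to test that theory would be to look at the schema version recorded inside info.db itself. I am not certain of the exact table name the migration framework uses, so treat "versions" below as an assumption and check .tables first.

# list all tables, then read the recorded schema version (table name assumed; adjust if .tables shows something different)
sqlite3 /path/to/storage/info.db ".tables"
sqlite3 /path/to/storage/info.db "SELECT * FROM versions;"

If that reports version 13 while bandwidth_usage_rollups is absent from .tables, it would match the idea that the version counter advanced without the corresponding schema change landing.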

That’s the million dollar question. @Alexey was asking people who ran into the other db issue (uniqueness violation) to upload their info.db to https://alpha.transfer.sh and send him the link. Might be useful for this issue as well.
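
For reference, the upload is a one-liner with curl, assuming alpha.transfer.sh accepts the same --upload-file pattern as transfer.sh; stop the node first so the file is consistent.

# upload info.db and share the link that comes back
curl --upload-file ./info.db https://alpha.transfer.sh/info.db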

This is a different error, but the DB is welcome anyway.

Okay, I will upload the info.db later. Is there anything else you guys need from my end?

Okay, so with a fresh docker install and a newly pulled storagenode image, I got the same error. I have attached my log file and info.db:

Log File -> https://transfer.sh/LcPQ4/node.log
Database -> https://transfer.sh/BhcpM/info.db

For now I think I will just retire this node, since I believe it will be disqualified due to the downtime.

We should try to fix it first.

Okay, what would I have to do to try to fix it?

I sent this information to the developer. Please wait.

Have you recovered this DB from the backup?

That db is the one that has been in use and gives the errors. Okay, so I decided to just trash my current server and reinstall Ubuntu 18.04 fresh. I installed docker, pulled the storagenode image, and started the node. My node has only been up for a couple of hours now and I don't see the errors anymore. I guess I will keep monitoring it to see if the errors come up again. This is my node id: 1q3nvyXBt2xvj2C35uc4FnKcew9KYkn9XK578QxBm9yxb2rLtW
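
For anyone repeating the same clean reinstall, the sequence is roughly the one from the setup docs at the time. The image tag and environment variables below are written from memory and the paths are placeholders, so double-check them against the current documentation before running anything.

# pull the current storagenode image
docker pull storjlabs/storagenode:alpha

# start it against the existing identity and data directories
docker run -d --restart unless-stopped -p 28967:28967 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.hostname:28967" \
    -e BANDWIDTH="20TB" \
    -e STORAGE="2TB" \
    --mount type=bind,source=/path/to/identity,destination=/app/identity \
    --mount type=bind,source=/path/to/storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:alpha

Since the data drive survived the OS reinstall, pointing the same mounts at it is what keeps the node identity and the stored pieces intact.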

I hope you didn't trash your data; otherwise the node could be permanently disqualified, even in alpha.

No, I keep my OS on a separate drive, so all the data is still there.

It is weird that the error is gone; it shouldn't be. But OK, keep an eye on it.