Something crashed today (updater?)

Hello. My node crashed today. It stopped, then restarted, then stopped again. Strange things are happening. Looking at the log, I see a lot of errors about the updater.
Any suggestions on how to solve it?

Thank you.

2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "kademlia.operator.email"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "server.address"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "server.debug-log-traffic"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "server.private-address"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "storage.allocated-bandwidth"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "kademlia.operator.wallet"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "kademlia.external-address"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file key {"Process": "storagenode-updater", "Key": "storage.allocated-disk-space"}
2022-08-26T09:16:08.691Z INFO Invalid configuration file value for key {"Process": "storagenode-updater", "Key": "log.caller"}
2022-08-26T09:16:08.692Z INFO Anonymized tracing enabled {"Process": "storagenode-updater"}
2022-08-26T09:16:08.694Z INFO Running on version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.62.3"}
2022-08-26T09:16:08.694Z INFO Downloading versions. {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2022-08-26T09:16:08.705Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-08-26T09:16:08.705Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2022-08-26T09:16:08.708Z INFO Operator email {"Process": "storagenode", "Address": "xxx@xxx.xxx"}
2022-08-26T09:16:08.708Z INFO Operator wallet {"Process": "storagenode", "Address": "xxxxxxxxx"}
2022-08-26T09:16:09.130Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.62.3"}
2022-08-26T09:16:09.130Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode"}
2022-08-26T09:16:09.136Z INFO Current binary version {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.62.3"}
2022-08-26T09:16:09.136Z INFO Version is up to date {"Process": "storagenode-updater", "Service": "storagenode-updater"}
2022-08-26T09:16:09.316Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "12bQdVhou6LNaEviP9pQ4DPDA4FBhMrGjEDikjnDeznSR1r8gZ4"}
2022-08-26T09:16:09.326Z INFO db.migration.54 Add interval_end_time field to storage_usage db, backfill interval_end_time with interval_start, rename interval_start to timestamp {"Process": "storagenode"}
Error: Error creating tables for master database on storagenode: migrate: UNIQUE constraint failed: storage_usage_new.timestamp, storage_usage_new.satellite_id
storj.io/storj/storagenode/storagenodedb.(*DB).Migration.func21:2028
storj.io/storj/private/migrate.Func.Run:307
storj.io/storj/private/migrate.(*Migration).Run.func1:197
storj.io/private/dbutil/txutil.withTxOnce:75
storj.io/private/dbutil/txutil.WithTx:36
storj.io/storj/private/migrate.(*Migration).Run:196
storj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:347
main.cmdRun:226
storj.io/private/process.cleanup.func1.4:378
storj.io/private/process.cleanup.func1:396
github.com/spf13/cobra.(*Command).execute:852
github.com/spf13/cobra.(*Command).ExecuteC:960
github.com/spf13/cobra.(*Command).Execute:897
storj.io/private/process.ExecWithCustomConfigAndLogger:93
main.main:479
runtime.main:255
2022-08-26 09:16:09,339 INFO exited: storagenode (exit status 1; not expected)
2022-08-26 09:16:10,340 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-08-26 09:16:10,342 INFO spawned: 'storagenode' with pid 60
2022-08-26 09:16:10,342 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-08-26T09:16:10.363Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-08-26T09:16:10.363Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2022-08-26T09:16:10.365Z INFO Operator email {"Process": "storagenode", "Address": "xxx@xxx.xxx"}
2022-08-26T09:16:10.365Z INFO Operator wallet {"Process": "storagenode", "Address": "xxxxxxxxx"}
2022-08-26T09:16:10.934Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "12bQdVhou6LNaEviP9pQ4DPDA4FBhMrGjEDikjnDeznSR1r8gZ4"}
2022-08-26T09:16:10.942Z INFO db.migration.54 Add interval_end_time field to storage_usage db, backfill interval_end_time with interval_start, rename interval_start to timestamp {"Process": "storagenode"}
Error: Error creating tables for master database on storagenode: migrate: UNIQUE constraint failed: storage_usage_new.timestamp, storage_usage_new.satellite_id
storj.io/storj/storagenode/storagenodedb.(*DB).Migration.func21:2028
storj.io/storj/private/migrate.Func.Run:307
storj.io/storj/private/migrate.(*Migration).Run.func1:197
storj.io/private/dbutil/txutil.withTxOnce:75
storj.io/private/dbutil/txutil.WithTx:36
storj.io/storj/private/migrate.(*Migration).Run:196
storj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:347
main.cmdRun:226
storj.io/private/process.cleanup.func1.4:378
storj.io/private/process.cleanup.func1:396
github.com/spf13/cobra.(*Command).execute:852
github.com/spf13/cobra.(*Command).ExecuteC:960
github.com/spf13/cobra.(*Command).Execute:897
storj.io/private/process.ExecWithCustomConfigAndLogger:93
main.main:479
runtime.main:255
2022-08-26 09:16:10,953 INFO exited: storagenode (exit status 1; not expected)
2022-08-26 09:16:12,956 INFO spawned: 'storagenode' with pid 78
2022-08-26T09:16:12.977Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-08-26T09:16:12.978Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2022-08-26T09:16:12.980Z INFO Operator email {"Process": "storagenode", "Address": "xxx@xxx.xxx"}
2022-08-26T09:16:12.981Z INFO Operator wallet {"Process": "storagenode", "Address": "xxxxxxxxx"}
2022-08-26T09:16:13.543Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "12bQdVhou6LNaEviP9pQ4DPDA4FBhMrGjEDikjnDeznSR1r8gZ4"}
2022-08-26T09:16:13.552Z INFO db.migration.54 Add interval_end_time field to storage_usage db, backfill interval_end_time with interval_start, rename interval_start to timestamp {"Process": "storagenode"}
Error: Error creating tables for master database on storagenode: migrate: UNIQUE constraint failed: storage_usage_new.timestamp, storage_usage_new.satellite_id
storj.io/storj/storagenode/storagenodedb.(*DB).Migration.func21:2028
storj.io/storj/private/migrate.Func.Run:307
storj.io/storj/private/migrate.(*Migration).Run.func1:197
storj.io/private/dbutil/txutil.withTxOnce:75
storj.io/private/dbutil/txutil.WithTx:36
storj.io/storj/private/migrate.(*Migration).Run:196
storj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:347
main.cmdRun:226
storj.io/private/process.cleanup.func1.4:378
storj.io/private/process.cleanup.func1:396
github.com/spf13/cobra.(*Command).execute:852
github.com/spf13/cobra.(*Command).ExecuteC:960
github.com/spf13/cobra.(*Command).Execute:897
storj.io/private/process.ExecWithCustomConfigAndLogger:93
main.main:479
runtime.main:255
2022-08-26 09:16:13,565 INFO exited: storagenode (exit status 1; not expected)
2022-08-26 09:16:16,570 INFO spawned: 'storagenode' with pid 95
2022-08-26T09:16:16.591Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2022-08-26T09:16:16.591Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2022-08-26T09:16:16.594Z INFO Operator email {"Process": "storagenode", "Address": "xxx@xxx.xxx"}
2022-08-26T09:16:16.594Z INFO Operator wallet {"Process": "storagenode", "Address": "xxxxxxxxx"}
2022-08-26T09:16:17.195Z INFO Telemetry enabled {"Process": "storagenode", "instance ID": "12bQdVhou6LNaEviP9pQ4DPDA4FBhMrGjEDikjnDeznSR1r8gZ4"}
2022-08-26T09:16:17.204Z INFO db.migration.54 Add interval_end_time field to storage_usage db, backfill interval_end_time with interval_start, rename interval_start to timestamp {"Process": "storagenode"}
Error: Error creating tables for master database on storagenode: migrate: UNIQUE constraint failed: storage_usage_new.timestamp, storage_usage_new.satellite_id
storj.io/storj/storagenode/storagenodedb.(*DB).Migration.func21:2028
storj.io/storj/private/migrate.Func.Run:307
storj.io/storj/private/migrate.(*Migration).Run.func1:197
storj.io/private/dbutil/txutil.withTxOnce:75
storj.io/private/dbutil/txutil.WithTx:36
storj.io/storj/private/migrate.(*Migration).Run:196
storj.io/storj/storagenode/storagenodedb.(*DB).MigrateToLatest:347
main.cmdRun:226
storj.io/private/process.cleanup.func1.4:378
storj.io/private/process.cleanup.func1:396
github.com/spf13/cobra.(*Command).execute:852
github.com/spf13/cobra.(*Command).ExecuteC:960
github.com/spf13/cobra.(*Command).Execute:897
storj.io/private/process.ExecWithCustomConfigAndLogger:93
main.main:479
runtime.main:255
2022-08-26 09:16:17,217 INFO exited: storagenode (exit status 1; not expected)
2022-08-26 09:16:18,218 INFO gave up: storagenode entered FATAL state, too many start retries too quickly
2022-08-26 09:16:19,219 WARN received SIGQUIT indicating exit request
2022-08-26 09:16:19,219 INFO waiting for processes-exit-eventlistener, storagenode-updater to die
2022-08-26 09:16:19,221 INFO stopped: storagenode-updater (exit status 0)
2022-08-26 09:16:20,223 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

P.S. Uptime Robot sends me notifications every 10-30 minutes: node started, node stopped, node started, node stopped.

I'm no expert, but it looks like your node's databases got corrupted.

Error: Error creating tables for master database on storagenode: migrate: UNIQUE constraint failed: storage_usage_new.timestamp, storage_usage_new.satellite_id

Let’s wait for someone with more expertise though.
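In the meantime, you could peek at what the migration is actually tripping over: the new table's UNIQUE constraint is on (timestamp, satellite_id), and the migration just renames interval_start to timestamp, so duplicate (satellite_id, interval_start) rows in the existing storage_usage table would break it. Something like this should list them (a rough sketch only: the node has to be stopped, sqlite3 has to be installed, and the .db path is a guess, so adjust it to your storage location):

sqlite3 /mnt/storagenode/storage/storage_usage.db \
  "SELECT hex(satellite_id), interval_start, COUNT(*)
     FROM storage_usage
    GROUP BY satellite_id, interval_start
   HAVING COUNT(*) > 1;"

If that returns rows, it's the duplicates (not the updater itself) that kill the v1.62.3 migration.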

This is also correct. But the database got corrupted a long time ago, and I could not rebuild it. The node was working fine with the corrupted DB (except that it did not show correct info in the dashboard). So I very much hope this problem is not related to the corrupt DB.

Found some more information. On this machine I have 3 nodes running + watchtower for updates. The crashed node shows this in the log:

"Version": "v1.62.3"}

But the two other nodes (which are still running fine) are both still on version v1.61.1.
I very much hope it was the updater that caused this trouble.

There's a high chance that in 1.61.1 it was only throwing a warning and in 1.62.3 it's an error, if nothing else was changed.

That’s possible.
But how do I downgrade my node?

Just change the docker command from :latest to a concrete version tag; you can find the tags on Docker Hub. Before that, run docker stop, then docker rm, and then run the new command.
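Roughly like this (only a sketch: the container name and the tag are placeholders, so reuse your own docker run options and pick a real tag from Docker Hub):

docker stop -t 300 storagenode
docker rm storagenode
# same docker run command as before, but with a pinned tag instead of :latest
docker run -d --name storagenode <your usual options> storjlabs/storagenode:<tag-from-docker-hub>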

Downgrading is just a short-term fix, as you won't receive any more data if you're too many versions behind.

You need to re-create this database: https://support.storj.io/hc/en-us/articles/4403032417044-How-to-fix-database-file-is-not-a-database-error
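For anyone landing here later, the rough shape of the fix (as it played out in this thread) looks something like the following; the container name and paths are placeholders, sqlite3 must be installed, and the linked article is the authoritative procedure:

docker stop -t 300 storagenode
# check which database file is actually broken (path is an example)
sqlite3 /mnt/storagenode/storage/storage_usage.db "PRAGMA integrity_check;"
# if it cannot be repaired, move it out of the way; the node creates a fresh, empty one on start
mv /mnt/storagenode/storage/storage_usage.db /mnt/storagenode/storage/storage_usage.db.bak
docker start storagenode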

Thank you!! It worked out!!

P.S. For current month earnings / total earned / total held amount
I see $0.00 / $0.00 / $0.00, but the node is old 🙂 Is there any way to rebuild this info? Or maybe there is another place where it is possible to see it?

Unfortunately no; you deleted the database that contained the needed historical info. But next month it should calculate correctly. However, you will not be able to see historical data for storage usage. If you recreated the database for bandwidth usage too, then the historical bandwidth usage will be missing as well.

This is probably no longer relevant, but you can’t downgrade from 1.62 to 1.61 or below because there has been a database migration.

Thank you @Alexey. Well, the most important thing is to have a healthy node that is still alive 🙂 I will survive without a correct dashboard 🙂
