Storage Node Offline for 737847 Days

I set up my storage node at the beginning of the month, and it seemed to be working fine initially: the status page showed online, and I could see a bit of data flowing after a few days.

However, I've just checked the status and it's reporting last seen 17708333h 38m ago.

Under the Suspension & Audit section, all satellites say 100% except for us2.tardigrade.io, which is down to 80%.

I've confirmed that a) the Storj port is accessible externally, and b) my DNS is resolving correctly.
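(For anyone wanting to run the same check, here's a minimal TCP reachability test. A throwaway local web server stands in for the storage node below so the snippet is self-contained; in practice, run the connect test from outside your network against your external address, e.g. knewman.rocks:28967 — a local success only proves something is listening, not that port forwarding works.)

```shell
# Stand-in listener on the default storagenode port (replace with the real node).
python3 -m http.server 28967 --bind 127.0.0.1 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# Attempt a plain TCP connect; prints "open" only if the connection succeeds.
result=$(python3 -c 'import socket; s = socket.socket(); s.settimeout(5); s.connect(("127.0.0.1", 28967)); print("open")')
echo "port 28967 is $result"

kill "$server_pid"
```

Note this only checks TCP connectivity; it says nothing about whether the node's TLS identity is valid.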

Here’s the startup sequence from my storagenode.log file:

2021-02-27T00:22:34.518-0500	INFO	Stop/Shutdown request received.
2021-02-27T00:22:34.518-0500	INFO	contact:service	context cancelled	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2021-02-27T00:22:34.518-0500	INFO	contact:service	context cancelled	{"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2021-02-27T00:22:34.518-0500	INFO	contact:service	context cancelled	{"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2021-02-27T00:22:34.518-0500	INFO	contact:service	context cancelled	{"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2021-02-27T00:22:34.518-0500	INFO	contact:service	context cancelled	{"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2021-02-27T00:22:34.518-0500	INFO	contact:service	context cancelled	{"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2021-02-27T00:22:35.927-0500	INFO	Configuration loaded	{"Location": "C:\\Program Files\\Storj\\Storage Node\\config.yaml"}
2021-02-27T00:22:35.962-0500	INFO	Operator email	{"Address": "****"}
2021-02-27T00:22:35.962-0500	INFO	Operator wallet	{"Address": "****"}
2021-02-27T00:22:36.246-0500	INFO	Telemetry enabled	{"instance ID": "12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS"}
2021-02-27T00:22:36.263-0500	INFO	db.migration	Database Version	{"version": 50}
2021-02-27T00:22:36.552-0500	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.
2021-02-27T00:22:37.200-0500	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.
2021-02-27T00:22:37.201-0500	INFO	Node 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS started
2021-02-27T00:22:37.201-0500	INFO	Public server started on [::]:28967
2021-02-27T00:22:37.201-0500	INFO	Private server started on 127.0.0.1:7778
2021-02-27T00:22:37.201-0500	INFO	bandwidth	Performing bandwidth usage rollups
2021-02-27T00:22:37.201-0500	INFO	trust	Scheduling next refresh	{"after": "5h6m18.244586993s"}
2021-02-27T00:22:37.468-0500	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite error: failed to dial storage node (ID: 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS) at address knewman.rocks:28967: rpc: tls peer certificate verification error: not signed by any CA in the whitelist: CA cert", "errorVerbose": "ping satellite error: failed to dial storage node (ID: 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS) at address knewman.rocks:28967: rpc: tls peer certificate verification error: not signed by any CA in the whitelist: CA cert\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-02-27T00:22:37.582-0500	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite error: failed to dial storage node (ID: 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS) at address knewman.rocks:28967: rpc: tls peer certificate verification error: not signed by any CA in the whitelist: CA cert", "errorVerbose": "ping satellite error: failed to dial storage node (ID: 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS) at address knewman.rocks:28967: rpc: tls peer certificate verification error: not signed by any CA in the whitelist: CA cert\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-02-27T00:22:37.653-0500	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite error: failed to dial storage node (ID: 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS) at address knewman.rocks:28967: rpc: tls peer certificate verification error: not signed by any CA in the whitelist: CA cert", "errorVerbose": "ping satellite error: failed to dial storage node (ID: 12RtWmRYjf2hnhd51evV7XRraEuKJKEf7v3xr6hnfJk2PqmbvGS) at address knewman.rocks:28967: rpc: tls peer certificate verification error: not signed by any CA in the whitelist: CA cert\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Those contact:service errors keep flooding in. I'm not sure what they mean, and a quick search didn't turn up much.

This means that your identity is not signed. Please request an authorization token: https://documentation.storj.io/before-you-begin/auth-token and sign the identity: https://documentation.storj.io/dependencies/identity#authorize-the-identity
Then confirm that it's now signed: https://documentation.storj.io/dependencies/identity#confirm-the-identity

If you moved your identity from the default location, please use the --identity-dir option of the identity binary to specify the new path so it is signed correctly, and use the same path in the confirmation command.
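The confirmation step linked above boils down to counting certificates (BEGIN markers) in the two identity files: per the docs, a signed identity has 2 in ca.cert and 3 in identity.cert, while an unsigned one has 1 and 2. The dummy files below stand in for the real ones so the check itself is visible; substitute your own identity directory (e.g. %APPDATA%\Storj\Identity\storagenode on Windows).

```shell
# Dummy stand-ins for a SIGNED identity's files (real ones live in your
# identity directory): ca.cert holds 2 certificates, identity.cert holds 3.
printf -- '-----BEGIN CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n' > ca.cert
printf -- '-----BEGIN CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n' > identity.cert

# Count certificate markers, as in the docs' confirmation step.
grep -c BEGIN ca.cert         # 2 on a signed identity (1 means unsigned)
grep -c BEGIN identity.cert   # 3 on a signed identity (2 means unsigned)
```

If you see 1 and 2 instead, the identity was never authorized, which matches the "not signed by any CA in the whitelist" error in your log.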

Thanks, it was the identity files. I retrieved them from my backup and replaced the ones on the node. I'm not sure what happened to the originals, but the identity now confirms as signed, the node shows online, and the log file is clean.
