Yesterday I had to take my node down unexpectedly for about two hours.
In the meantime I had to change my DNS, and I updated the Docker container to version 1.10.1.
All operations went pretty smoothly, but since the restart my node has been showing as offline.
The tail of the log looks like this:
2020-08-31T07:41:14.172Z INFO Configuration loaded {"Location": "/app/config/config.yaml"}
2020-08-31T07:41:14.194Z INFO Operator email {"Address": "<redacted>"}
2020-08-31T07:41:14.194Z INFO Operator wallet {"Address": "0x<redacted>"}
2020-08-31T07:41:15.486Z INFO Telemetry enabled
2020-08-31T07:41:15.630Z INFO db.migration Database Version {"version": 43}
2020-08-31T07:41:15.919Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2020-08-31T07:41:17.204Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2020-08-31T07:41:17.204Z INFO trust Scheduling next refresh {"after": "8h23m36.709841089s"}
2020-08-31T07:41:17.205Z INFO Node <redacted> started
2020-08-31T07:41:17.205Z INFO Public server started on [::]:28967
2020-08-31T07:41:17.205Z INFO Private server started on 127.0.0.1:7778
2020-08-31T07:41:17.205Z INFO bandwidth Performing bandwidth usage rollups
2020-08-31T08:41:17.383Z INFO bandwidth Performing bandwidth usage rollups
2020-08-31T09:41:17.282Z INFO bandwidth Performing bandwidth usage rollups
2020-08-31T10:41:17.267Z INFO bandwidth Performing bandwidth usage rollups
2020-08-31T11:41:17.205Z INFO bandwidth Performing bandwidth usage rollups
2020-08-31T12:41:17.225Z INFO bandwidth Performing bandwidth usage rollups
From the dashboard it looks like my last contact happened a little “too long ago”.
Port 28967 is open (checked with the suggested link).
Fixed IP and custom DNS; the DNS correctly points to my IP. The firewall did not change, and it was working before.
Not really sure where/what else to look for beyond this…
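For what it's worth, here is roughly the check I scripted to rule out the obvious (a minimal sketch in Python; mynode.example.com is a placeholder for my actual DDNS name, and ifconfig.me is just one of many what-is-my-IP services):

```python
import socket
import urllib.request

# Placeholders for my actual setup -- substitute your own DDNS name.
NODE_HOST = "mynode.example.com"
NODE_PORT = 28967

# 1. What does the DNS record resolve to right now?
resolved_ip = socket.gethostbyname(NODE_HOST)

# 2. What is my public IP right now? (ifconfig.me is one of many such services.)
public_ip = urllib.request.urlopen("https://ifconfig.me/ip").read().decode().strip()

status = "OK" if resolved_ip == public_ip else "MISMATCH -- update the DNS record"
print(f"DNS -> {resolved_ip}, public IP -> {public_ip}: {status}")

# 3. Can a TCP connection be opened to the node port?
#    Best run from outside the LAN: some routers do not hairpin NAT,
#    so a test from inside can fail even when the port is actually fine.
try:
    with socket.create_connection((NODE_HOST, NODE_PORT), timeout=5):
        print(f"TCP connect to {NODE_HOST}:{NODE_PORT} succeeded")
except OSError as exc:
    print(f"TCP connect to {NODE_HOST}:{NODE_PORT} failed: {exc}")
```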
This just happened to one of my nodes. I took it down for a couple of hours, and when I brought it back online the dashboard reports that it is offline.
2020-08-31T22:23:19.865Z INFO Configuration loaded {"Location": "/app/config/config.yaml"}
2020-08-31T22:23:19.880Z INFO Operator email {"Address": "REDACTED"}
2020-08-31T22:23:19.880Z INFO Operator wallet {"Address": "REDACTED"}
2020-08-31T22:23:20.204Z INFO Telemetry enabled
2020-08-31T22:23:20.217Z INFO db.migration Database Version {"version": 43}
2020-08-31T22:23:20.824Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2020-08-31T22:23:21.391Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2020-08-31T22:23:21.391Z INFO bandwidth Performing bandwidth usage rollups
2020-08-31T22:23:21.391Z INFO trust Scheduling next refresh {"after": "5h7m57.232603171s"}
2020-08-31T22:23:21.392Z INFO Node [REDACTED] started
2020-08-31T22:23:21.392Z INFO Public server started on [::]:28967
2020-08-31T22:23:21.392Z INFO Private server started on 127.0.0.1:7778
If you moved the node but didn't change the port forwarding rule, that will be an issue: each device has its own local IP, so you should update the rule too.
Then please check what else in the network configuration is different. Perhaps you have an integrated firewall on the second system; in that case you should create an inbound rule for the port. You also need outbound access to be granted.
The second thing: when you moved the disk with the data, did you also move the identity that is tied to it?
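As a quick sanity check, something like this verifies that all four identity files travelled with the disk (a sketch, assuming the standard Docker setup where the identity is mounted at /app/identity; adjust the path to your own layout):

```python
import os

# Assumed mount point from the standard Docker run command; adjust to your layout.
IDENTITY_DIR = "/app/identity"

# A storagenode identity consists of a CA pair and a node cert/key pair.
EXPECTED = ["ca.cert", "ca.key", "identity.cert", "identity.key"]

missing = [name for name in EXPECTED
           if not os.path.isfile(os.path.join(IDENTITY_DIR, name))]

if missing:
    print("Identity incomplete, missing:", ", ".join(missing))
else:
    print("All identity files present in", IDENTITY_DIR)
```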
The node was moved to a new location, but the ISP is the same (basically just a different IP… that should not be an issue).
The node has always been running Dockerized and on the same hardware (I just shut it down, moved it, and fired it back up).
Nothing else in the network infrastructure changed (same router, same firewall, same everything… only the port forward had to be configured again on the new ISP modem/router… but it does work).
Other people (or myself, for that matter) can access my infrastructure remotely, from outside my LAN.
Do you have any further suggestions to help me avoid unnecessary downtime, or to stop being penalized for… no reason?
I'm not sure I'm following you. What do you mean, try another browser?
I'm checking the node directly from the CLI; no browser is involved:
Storage Node Dashboard ( Node Version: v1.10.1 )

======================

ID           <redacted>
Last Contact OFFLINE
Uptime       30m49s

              Available       Used     Egress    Ingress
Bandwidth           N/A        0 B        0 B        0 B (since Sep 1)
Disk             2.9 TB     6.1 TB

Internal     127.0.0.1:7778
External     <redacted>:28967
My data is uncapped and I have FTTH 1 Gbps/1 Gbps (995 Mbps/975 Mbps at the most recent measurement).
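Since the dashboard lists both endpoints, one more trivial check from inside the LAN confirms the two listeners are actually up (a sketch, assuming the container publishes 28967 on the host as in the standard docker run command):

```python
import socket

# Endpoints as reported by the CLI dashboard; assumes the container
# publishes 28967 on the host as in the standard docker run command.
ENDPOINTS = [("127.0.0.1", 7778), ("127.0.0.1", 28967)]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} is accepting connections")
    except OSError as exc:
        print(f"{host}:{port} is not reachable: {exc}")
```

If both accept connections, the node itself is fine locally and the problem sits somewhere between the satellites and the public endpoint.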