I’ve been running a storagenode for a few months and needed to update the wallet address so I could make use of zkSync. The documentation advises updating the config.yaml file, which I did. After restarting the container and browsing to the GUI I noticed the wallet address hadn’t changed, so I removed the container and recreated it with the recommended command, which passes the email, wallet address etc.
Since doing this my node has been offline; I can’t even browse to the GUI anymore, it just refuses to connect.
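For reference, the wallet-related lines I edited in config.yaml looked roughly like this (the address is a placeholder, and the zkSync opt-in flag is my understanding of what the docs suggest, not copied from my real file):

```yaml
# operator wallet address (placeholder value, not my real address)
operator.wallet: "0x0000000000000000000000000000000000000000"
# opt in to zkSync payouts
operator.wallet-features: ["zksync"]
```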
Checking the logs shows two warnings, but they don’t seem to stop it running:
Configuration loaded {"Location": "/app/config/config.yaml"}
2021-09-23T08:38:31.518Z INFO Operator email {"Address": "@."}
2021-09-23T08:38:31.518Z INFO Operator wallet {"Address": "0x*********"}
2021-09-23T08:38:34.970Z INFO Telemetry enabled {"instance ID": "19K*"}
2021-09-23T08:38:37.222Z INFO db.migration Database Version {"version": 53}
2021-09-23T08:38:38.379Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2021-09-23T08:38:39.190Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2021-09-23T08:38:39.192Z INFO Node 19KA********** started
2021-09-23T08:38:39.192Z INFO Public server started on [::]:28967
2021-09-23T08:38:39.192Z INFO Private server started on 127.0.0.1:7778
2021-09-23T08:38:39.193Z INFO failed to sufficiently increase receive buffer size (was: 160 kiB, wanted: 2048 kiB, got: 320 kiB). See UDP Receive Buffer Size · lucas-clemente/quic-go Wiki · GitHub for details.
2021-09-23T08:38:39.193Z INFO trust Scheduling next refresh {"after": "3h36m0.799510891s"}
2021-09-23T08:38:39.194Z WARN piecestore:monitor Disk space is less than requested. Allocated space is {"bytes": 925675481984}
2021-09-23T08:38:39.195Z INFO bandwidth Performing bandwidth usage rollups
After those lines it displays numerous entries like this:
2021-09-23T08:44:53.292Z INFO piecestore upload started
2021-09-23T08:44:53.425Z INFO piecestore upload started
2021-09-23T08:44:53.538Z INFO piecestore uploaded
2021-09-23T08:44:54.295Z INFO piecedeleter delete piece sent to trash
2021-09-23T08:44:54.418Z INFO piecestore uploaded
2021-09-23T08:44:55.540Z INFO piecestore uploaded
Is it just syncing all the data, and will it come back up shortly? If that’s the case, it would be useful to get some sort of progress message, or at least a simple message advising this.
You say the container start command has changed and since then the GUI (dashboard) isn’t working?
This issue points to the start command not passing/opening the correct ports for the dashboard to be accessible.
Warning "piecestore:monitor Disk space is less than requested."
Normally this line is followed by an ERROR saying the total size is less than the minimum (500GB), so I’m not sure why this is only reported as a warning, unless the usage db has a problem and needs to be checked.
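As a sanity check, the allocated figure from that WARN line converts like this (plain shell arithmetic, nothing node-specific):

```shell
# Convert the allocated bytes reported in the WARN line into GiB
bytes=925675481984
gib=$((bytes / 1024 / 1024 / 1024))
echo "${gib} GiB allocated"   # 862 GiB, i.e. roughly 925.7 GB decimal
```

So the node is being allocated well above the 500GB minimum, which makes the missing ERROR line consistent.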
If there are no ERROR or FATAL entries in the log then you should be fine. The INFO entries just show normal usage.
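A quick way to triage is to grep the log for serious entries. A sketch below, using a temporary sample file with lines copied from this thread (the file path is made up for illustration; point it at your real log instead):

```shell
# Write a few sample log lines (copied from this thread) to a temp file
log=$(mktemp)
cat > "$log" <<'EOF'
2021-09-23T08:44:53.292Z INFO piecestore upload started
2021-09-23T08:44:53.538Z INFO piecestore uploaded
2021-09-23T08:38:39.194Z WARN piecestore:monitor Disk space is less than requested.
EOF

# ERROR/FATAL indicate real problems; INFO and the WARN above are routine
if grep -qE ' (ERROR|FATAL) ' "$log"; then
    echo "serious entries found - investigate"
else
    echo "no ERROR/FATAL entries"
fi
rm -f "$log"
```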
I was thinking that too, but I haven’t changed any of the default ports in the command, and before I made the wallet address change it had been working fine for months. Here is the command for reference:
In an article I was reading (I can’t seem to find it now), they suggested setting the allocated space 50GB lower than the actual disk size, which I’ve not done.
Running sudo docker ps -a shows the default ports are open:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c5086963be6 storjlabs/storagenode:latest “/entrypoint” 5 hours ago Up 6 minutes 127.0.0.1:14002->14002/tcp, 0.0.0.0:28967->28967/tcp, 0.0.0.0:28967->28967/udp storagenode
24ecc61ec5cd storjlabs/watchtower “/watchtower storage…” 6 months ago Up 6 minutes watchtower
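The PORTS column above already tells the story; here is a small sketch (using the string copied from that output) of how to read it:

```shell
# PORTS string copied from the `docker ps` output above
ports='127.0.0.1:14002->14002/tcp, 0.0.0.0:28967->28967/tcp, 0.0.0.0:28967->28967/udp'

# A 127.0.0.1 bind means the dashboard is only reachable from the host itself;
# 0.0.0.0 would make it reachable from other machines on the LAN
case "$ports" in
    *'127.0.0.1:14002'*) echo "dashboard bound to localhost only" ;;
    *'0.0.0.0:14002'*)   echo "dashboard reachable from other machines" ;;
    *)                   echo "dashboard port not published" ;;
esac
```

So with this mapping, browsing to the dashboard only works from the Pi itself, not from another machine on the network.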
I’ve seen some people suggest I need to change this line:
# server address of the api gateway and frontend app
console.address: 127.0.0.1:14002
to: console.address: 0.0.0.0:14002 and then configure port forwarding.
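If you do go that route, the change itself is a one-liner. A sketch against a throwaway copy of the file (the real config lives at /app/config/config.yaml; edit that instead of a temp file):

```shell
# Work on a temporary copy for illustration only
cfg=$(mktemp)
printf '%s\n' \
    '# server address of the api gateway and frontend app' \
    'console.address: 127.0.0.1:14002' > "$cfg"

# Bind the dashboard to all interfaces instead of loopback only
sed -i 's|^console.address: 127.0.0.1:14002|console.address: 0.0.0.0:14002|' "$cfg"
grep '^console.address' "$cfg"
rm -f "$cfg"
```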
I’m 100% certain I had no port forwarding set up locally beforehand, when it was working.
Here are the firewall rules to confirm that is also setup correctly:
*****@RedBroker:~ $ sudo ufw status
Status: active