Recreated storagenode now showing offline. Logs don't seem to show any errors

Hello,

I've been running a storagenode for a few months now and needed to update the wallet address so I could make use of zkSync. The documentation advises updating the config.yaml file, which I have done. Upon restarting the container and browsing to the GUI I noticed the wallet address hadn't changed, so I removed the container and recreated it with the recommended command, which passes the email, wallet address, etc.
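For reference, the edit I made in config.yaml was roughly this (wallet redacted; the wallet-features line is what the zkSync payout instructions describe, so the exact keys are my best recollection rather than a copy-paste):

# operator wallet address
operator.wallet: "0x************"

# opt in to zkSync payouts
operator.wallet-features: ["zksync"]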

Since doing this my node has been offline, I can’t even browse to the GUI anymore, it just refuses to connect.

Checking the logs, it shows two warnings, but they don't seem to affect it running:

Configuration loaded {"Location": "/app/config/config.yaml"}
2021-09-23T08:38:31.518Z INFO Operator email {"Address": "@."}
2021-09-23T08:38:31.518Z INFO Operator wallet {"Address": "0x*********"}
2021-09-23T08:38:34.970Z INFO Telemetry enabled {"instance ID": "19K*"}
2021-09-23T08:38:37.222Z INFO db.migration Database Version {"version": 53}
2021-09-23T08:38:38.379Z INFO preflight:localtime start checking local system clock with trusted satellites' system clock.
2021-09-23T08:38:39.190Z INFO preflight:localtime local system clock is in sync with trusted satellites' system clock.
2021-09-23T08:38:39.192Z INFO Node 19KA********** started
2021-09-23T08:38:39.192Z INFO Public server started on [::]:28967
2021-09-23T08:38:39.192Z INFO Private server started on 127.0.0.1:7778
2021-09-23T08:38:39.193Z INFO failed to sufficiently increase receive buffer size (was: 160 kiB, wanted: 2048 kiB, got: 320 kiB). See UDP Receive Buffer Size · lucas-clemente/quic-go Wiki · GitHub for details.
2021-09-23T08:38:39.193Z INFO trust Scheduling next refresh {"after": "3h36m0.799510891s"}
2021-09-23T08:38:39.194Z WARN piecestore:monitor Disk space is less than requested. Allocated space is {"bytes": 925675481984}
2021-09-23T08:38:39.195Z INFO bandwidth Performing bandwidth usage rollups

After those lines it displays numerous lines like this:
2021-09-23T08:44:53.292Z INFO piecestore upload started
2021-09-23T08:44:53.425Z INFO piecestore upload started
2021-09-23T08:44:53.538Z INFO piecestore uploaded
2021-09-23T08:44:54.295Z INFO piecedeleter delete piece sent to trash
2021-09-23T08:44:54.418Z INFO piecestore uploaded
2021-09-23T08:44:55.540Z INFO piecestore uploaded

Is it just syncing all the data, and will it come back up shortly? If that's the case, it would be useful to get some sort of progress message, or at least a simple message advising this.

Hi @hello,

Breaking down your post, there are a few points:

  1. You say the container start command has changed and since then the GUI (dashboard) isn’t working?
    This issue points to the start command not passing/opening the correct ports for the dashboard to be accessible.

  2. Warning about QUIC - failed to sufficiently increase receive buffer size
    UDP is still in testing, but you can see this post about increasing the buffer size - Linux, "failed to sufficiently increase receive buffer size" (there is also a short example after this list).

  3. Warning “piecestore:monitor Disk space is less than requested.”
    Normally this line is followed by an ERROR saying the total size is less than the minimum (500GB), so I'm not sure why this is reporting only a warning unless the usage db has a problem and needs to be checked.

  4. If there are no ERROR or FATAL entries in the log then you should be fine. The INFO entries just show normal usage.
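For point 2, the usual remedy on Linux is to raise the kernel's UDP receive buffer limit. Something along these lines should clear the warning (2500000 is the value the quic-go wiki suggests, so treat it as a starting point rather than a requirement):

# apply immediately (lost on reboot)
sudo sysctl -w net.core.rmem_max=2500000

# make it persistent across reboots
echo "net.core.rmem_max=2500000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p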

I was thinking that too, but I haven't changed any of the default ports in the command, and before I made the wallet address change it was working fine for months. Here is the command for reference:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 127.0.0.1:14002:14002 \
-e WALLET="0x************" \
-e EMAIL="****@****.***" \
-e ADDRESS="****.****ddns.com:28967" \
-e STORAGE="1TB" \
--mount type=bind,source="/home/pi/.local/share/storj/identity/storagenode",destination=/app/identity \
--mount type=bind,source="/mnt/storj",destination=/app/config \
--name storagenode storjlabs/storagenode:latest

When I was reading an article (I can't seem to find it now), it suggested setting the allocated storage about 50GB lower than the actual disk size, which I've not done.
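That would also explain the WARN above: the node only allocated about 925 GB (925675481984 bytes) even though STORAGE is set to 1TB, presumably because the disk doesn't actually have 1TB free. A quick way to check on the Pi (assuming the data really lives on /mnt/storj as in the run command):

# show total size and free space of the storage mount in SI units
df -H /mnt/storj

# if it shows less than the 1TB allocated plus some headroom,
# lower the allocation when recreating the container, e.g.
#   -e STORAGE="900GB"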

Running sudo docker ps -a shows the default ports are mapped:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c5086963be6 storjlabs/storagenode:latest "/entrypoint" 5 hours ago Up 6 minutes 127.0.0.1:14002->14002/tcp, 0.0.0.0:28967->28967/tcp, 0.0.0.0:28967->28967/udp storagenode
24ecc61ec5cd storjlabs/watchtower "/watchtower storage…" 6 months ago Up 6 minutes watchtower
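Since the dashboard port is published as 127.0.0.1:14002, I can at least check from the Pi itself whether it answers locally (a quick sanity check, assuming curl is installed):

# should return an HTTP response if the dashboard is up and bound to localhost
curl -I http://127.0.0.1:14002/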

I've seen some people suggest I need to change this line in config.yaml:

# server address of the api gateway and frontend app
console.address: 127.0.0.1:14002

to console.address: 0.0.0.0:14002 and then configure port forwarding.
I'm 100% certain I had no port forwarding set up locally beforehand when it was working.

Here are the firewall rules to confirm that is also set up correctly:
*****@RedBroker:~ $ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     LIMIT       Anywhere
1880                       ALLOW       Anywhere
1883                       ALLOW       Anywhere
3000                       ALLOW       Anywhere
28967                      ALLOW       Anywhere
14002                      ALLOW       Anywhere
22/tcp (v6)                LIMIT       Anywhere (v6)
1880 (v6)                  ALLOW       Anywhere (v6)
1883 (v6)                  ALLOW       Anywhere (v6)
3000 (v6)                  ALLOW       Anywhere (v6)
28967 (v6)                 ALLOW       Anywhere (v6)
14002 (v6)                 ALLOW       Anywhere (v6)

In the case of docker it has no effect; see storj/cmd/storagenode/entrypoint at b97479e36b6cb30998522c21a5cc9c6cd6aeeb6f · storj/storj · GitHub

The dashboard will be available only on localhost if you still have -p 127.0.0.1:14002:14002.
To have access from other devices, use this guide: How to remote access the web dashboard - Storj Docs
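In short, if the goal is to reach the dashboard from another device on the LAN, the usual change is to drop the 127.0.0.1 prefix from the port mapping when recreating the container, for example:

# publish the dashboard on all interfaces instead of only localhost
-p 14002:14002

(or keep 127.0.0.1 and tunnel to the node as described in the guide).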

Thank you! Removing 127.0.0.1 has worked
