Cannot access the web dashboard of my 2nd node

Hi there,

I've started a new, second node in the same subnet and it's working fine - except for the dashboard, which I cannot access. The logs look good and the node itself is running.

I've increased all internal ports by 1, including the private server port in config.yaml:

console.address: :14003
server.address: :28968
server.private-address: 127.0.0.1:7779

Relevant part of my run command:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 57290:28968/tcp \
    -p 57290:28968/udp \
    -p 57292:14003 \
...
    -e ADDRESS="ddns-domain.com:57290" \
    -e STORAGE="9.5TB" \
...
    --mount type=bind,source="...",destination=/app/identity \
    --mount type=bind,source="...",destination=/app/config \
    --mount type=bind,source="...",destination=/app/dbs \
    --mount type=bind,source="...",destination=/app/logs \
    --name sn2 storjlabs/storagenode:latest

Yes, I know, I shouldn’t make the dashboard public - it’s for testing and I cannot access it right now: ddns-domain.com:57292

The corresponding TCP port 57292 is open and forwarded exactly the same way as for SN1.

Any idea?

docker ps looks kinda strange:

a54021aea904   storjlabs/storagenode:latest   "/entrypoint"            29 minutes ago   Up 29 minutes         14002/tcp, 28967/tcp, 0.0.0.0:57292->14003/tcp, :::57292->14003/tcp, 0.0.0.0:57290->28968/tcp, 0.0.0.0:57290->28968/udp, :::57290->28968/tcp, :::57290->28968/udp   sn2
ee9dd7402c91   storjlabs/storagenode:latest   "/entrypoint"            9 days ago       Up 9 days             0.0.0.0:57282->14002/tcp, :::57282->14002/tcp, 0.0.0.0:57280->28967/tcp, 0.0.0.0:57280->28967/udp, :::57280->28967/tcp, :::57280->28967/udp                         sn1

Why does sn2 use ports 14002/tcp and 28967/tcp?
That's the only strange thing I can see.

127.0.0.1 is the loopback address.

You will need to bind to the LAN IP address instead.

You can simply add it in front of the port mapping, so it becomes:
-p your.lan.ip.addr:57292:14003

where your.lan.ip.addr is something like 192.168.1.10.

Or change it in the config.yaml.
I bet you did something like that for the first node.

node 1:

# server address of the api gateway and frontend app
# console.address: 127.0.0.1:14002

node 2:

# server address of the api gateway and frontend app
console.address: :14003

So it should be this, because for node 1 the standard ip:port 127.0.0.1:14002 is used even though the line is commented out?

# server address of the api gateway and frontend app
console.address: 127.0.0.1:14003

not working.

Also tried:

# server address of the api gateway and frontend app
console.address: 192.168.178.70:14003

Either way, the dashboard is still not working.

And docker ps still shows both standard ports:

057f11dda0cf   storjlabs/storagenode:latest   "/entrypoint"            2 minutes ago   Up 2 minutes          14002/tcp, 28967/tcp, 0.0.0.0:57292->14003/tcp, :::57292->14003/tcp, 0.0.0.0:57290->28968/tcp, 0.0.0.0:57290->28968/udp, :::57290->28968/tcp, :::57290->28968/udp   sn2
ee9dd7402c91   storjlabs/storagenode:latest   "/entrypoint"            9 days ago      Up 9 days             0.0.0.0:57282->14002/tcp, :::57282->14002/tcp, 0.0.0.0:57280->28967/tcp, 0.0.0.0:57280->28967/udp, :::57280->28967/tcp, :::57280->28967/udp                         sn1

Stop picking on config.yaml :wink:

The container works on the ports to the right of the colon:

[...]
    -p 57290:28967/tcp \
    -p 57290:28967/udp \
    -p 57292:14002 \
[...]

Then the web console is at 192.168.178.70:57292.
On the router, open (enable) port 57290 for TCP and UDP
and forward it to port 57290 on the PC at 192.168.178.70.
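Once the container is up with those mappings, a quick sanity check from another machine on the LAN could look like this (a hedged example - it assumes curl is installed and that 192.168.178.70:57292 is the host IP and mapped dashboard port from this thread):

```shell
# Should print an HTTP status line if the dashboard is reachable
curl -sI http://192.168.178.70:57292 | head -n 1
```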


Does that also apply with a second node on the same machine - with the same standard internal ports configured for both (on the right side)?

The node itself is working; it's only the dashboard that isn't.

Just going through the doc: How to add an additional drive? | Storj Docs

This is how it worked two years ago when I still had nodes; I never modified the config.yaml.

@peem is right - if you did not use --network host in your docker run commands, you don't need to change the internal ports.
However, if you really want to, you can change it in the docker run command with the option --console.address=:14003; it should be specified after the image name.
This is because :14002 is hardcoded in the entrypoint: storj/entrypoint at b60c3ea0a151fccb158f3f6207957fae2a819d1f · storj/storj · GitHub
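A sketch of what that could look like with the ports used in this thread (other options omitted, as in the original post; note the flag comes after the image name, so it overrides both the config file and the entrypoint default):

```shell
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 57290:28968/tcp \
    -p 57290:28968/udp \
    -p 57292:14003 \
...
    --name sn2 storjlabs/storagenode:latest \
    --console.address=:14003
```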


Thank you, that worked well.


So the config.yaml setting for console.address was ignored?

Does that mean he would have been able to access it on the default port, at least over LAN?

Congrats! I guess now is the time you should know that exposing home web servers to the internet is generally not a great idea… :smiley: Sorry.

Yes, command line options override parameters loaded from the config file. In the case of the storagenode docker image, the setting for console.address is hardcoded in the entrypoint.

No. Even with --network host it would not work, because in that case the first node is already using the port. With the default network it is not accessible either, because the port mapping did not mention the default port on the right side.
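To illustrate that second point with the ports from this thread (a hypothetical fragment, not a full command): a mapping only publishes the container port named on its right side, and here the console was actually listening on the hardcoded default.

```shell
# sn2's mapping: host 57292 -> container 14003
# but the console listens on the hardcoded 14002, which is never published
-p 57292:14003
```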


Ah right, because they have the same LAN IP - gotcha.

Hello, how is this done? I’m also having trouble setting up a second node.

Do you mean the solution is to change the config.yaml to
[screenshot of the config.yaml setting]

Then something like:
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967/tcp \
    -p 28968:28967/udp \
    -p 192.168.1.xx:14003:14003 \
    -e WALLET="xxxxxxxxxxxxxxxx" \
    -e EMAIL="xxxxxxxxxxxxxxxxxx" \
    -e ADDRESS="xxxxxxxxxxxxxxx:28968" \
    -e STORAGE="xxxxxxxxxxxxTB" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source=/home/xxxx/.local/share/storj/identity/storagenode2,destination=/app/identity \
    --mount type=bind,source=/mnt/xxxxx,destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest

What would be the code to change it in the docker run command?

I’ve got all the ports forwarded, but I can’t get into the dashboard of the 2nd node and it looks like the 2nd node just keeps on restarting

I’m running ubuntu

Unless you have changed the default of the web dashboard in the config.yaml,
it should most likely be:
-p 192.168.1.xx:14003:14001

The first one is the LAN port, the second is the internal docker network port -
which you should really never need to change when using the docker run command.
That makes things easier to troubleshoot.

Hello @Boladi,
Welcome to the forum!

should be

-p 192.168.1.xx:14003:14002

@SGC the default port is 14002


Wow! Thank you for the fast reply!

When I make the change to -p 192.168.1.xx:14003:14002, I can't get to the 2nd node's dashboard via 192.168.1.xx:14003, either on the host computer or on the network.

When I check the 2nd node with “docker ps -a” it looks like the 2nd node just keeps restarting

The first node is fine. I tried changing the config.yaml file and had to change permissions to make the change; I didn't change the permissions back to root. After changing the config.yaml on the drive I want to use for the 2nd node, there was no change - it seems docker takes precedence anyway, but I have no idea.

Thank you again for the help, I’ve spent a lot of time trying to figure this out.

FYI - In here: How to add an additional drive? | Storj Docs
Under Docker version#, there is a line of code that says -p 172.0.0.1:14003:14002 instead of 127

Please post the last 20 lines from the log: How do I check my logs? | Storj Docs

thanks! fixed

Thank you again for the help. Here are the last 20 lines and "docker ps -a" - it doesn't look good, I think.

I just started it up again the same way I’ve been doing it because the second one just keeps restarting.

My next move is to reformat and redo the static mount of the 2nd drive and start from scratch unless you have any ideas

root@xxxxxxxxxxxxx:/home/xxxxxxxxxxxxxx# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
843debc6bf66 storjlabs/storagenode:latest “/entrypoint” 4 minutes ago Up 1 second 192.168.1.18:14003->14002/tcp, 0.0.0.0:28968->28967/tcp, 0.0.0.0:28968->28967/udp, :::28968->28967/tcp, :::28968->28967/udp storagenode2a
e18b8789d267 storjlabs/storagenode:latest “/entrypoint” 2 days ago Up 2 days 192.168.1.18:14002->14002/tcp, 0.0.0.0:28967->28967/tcp, 0.0.0.0:28967->28967/udp, :::28967->28967/tcp, :::28967->28967/udp storagenode
root@xxxxxxxxxxxx:/home/ss1# docker logs --tail 20 storagenode2a
2022-09-21T04:02:30.187Z INFO Public server started on [::]:28967 {“Process”: “storagenode”}
2022-09-21T04:02:30.187Z INFO Private server started on 127.0.0.1:7779 {“Process”: “storagenode”}
2022-09-21T04:02:30.187Z INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See UDP Receive Buffer Size · lucas-clemente/quic-go Wiki · GitHub for details. {“Process”: “storagenode”}
2022-09-21T04:02:30.661Z INFO trust Scheduling next refresh {“Process”: “storagenode”, “after”: “7h59m56.057540571s”}
2022-09-21T04:02:30.671Z INFO bandwidth Performing bandwidth usage rollups {“Process”: “storagenode”}
2022-09-21T04:02:30.682Z ERROR services unexpected shutdown of a runner {“Process”: “storagenode”, “name”: “piecestore:monitor”, “error”: “piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mG6QuZSQktP3sobBYf2DjyZhgWwkXN735cX3DrafKWTaa3Tcf) does not match running node’s ID (1LkUzhB9JB5sSPeKPxgCZkgziNdW1pjP2MRTCReLmS2jn2LAuj)”, “errorVerbose”: “piecestore monitor: error verifying location and/or readability of storage directory: node ID in file (12mG6QuZSQktP3sobBYf2DjyZhgWwkXN735cX3DrafKWTaa3Tcf) does not match running node’s ID (1LkUzhB9JB5sSPeKPxgCZkgziNdW1pjP2MRTCReLmS2jn2LAuj)\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:133\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:130\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2022-09-21T04:02:30.685Z ERROR contact:service ping satellite failed {“Process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189”}
2022-09-21T04:02:30.690Z INFO contact:service context cancelled {“Process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”}
2022-09-21T04:02:30.688Z ERROR contact:service ping satellite failed {“Process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189”}
2022-09-21T04:02:30.690Z INFO contact:service context cancelled {“Process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2022-09-21T04:02:30.688Z ERROR contact:service ping satellite failed {“Process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189”}
2022-09-21T04:02:30.691Z INFO contact:service context cancelled {“Process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
2022-09-21T04:02:30.688Z ERROR contact:service ping satellite failed {“Process”: “storagenode”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189”}
2022-09-21T04:02:30.691Z INFO contact:service context cancelled {“Process”: “storagenode”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”}
2022-09-21T04:02:30.689Z ERROR contact:service ping satellite failed {“Process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189”}
2022-09-21T04:02:30.691Z INFO contact:service context cancelled {“Process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2022-09-21T04:02:30.689Z ERROR nodestats:cache Get pricing-model/join date failed {“Process”: “storagenode”, “error”: “context canceled”}
2022-09-21T04:02:30.690Z ERROR contact:service ping satellite failed {“Process”: “storagenode”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:189”}
2022-09-21T04:02:30.694Z INFO contact:service context cancelled {“Process”: “storagenode”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”}
2022-09-21T04:02:30.738Z ERROR piecestore:cache error getting current used space: {“Process”: “storagenode”, “error”: “context canceled; context canceled; context canceled”, “errorVerbose”: “group:\n— context canceled\n— context canceled\n— context canceled”}

Seems you did the setup with one identity, but you are trying to start with another. Please check the --mount options in your docker run command to make sure that you did not point to the identity or the data location of your first node.
If it's a new node with a newly generated identity, then you need to clean the data location and do the setup one more time (and never repeat it unless you have a new identity and a new data location).
You should not use a clone of your first node's identity, otherwise it will be disqualified.
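For reference, the one-time setup step from the Storj docs looks roughly like this (paths are placeholders; it must point at the new node's identity and its new, empty data location):

```shell
docker run --rm -e SETUP="true" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="<identity-dir-for-node-2>",destination=/app/identity \
    --mount type=bind,source="<storage-dir-for-node-2>",destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest
```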

Ahhh!!! I've got it going. The problem was that I never did the docker run SETUP step, because the page said to only do it once. Now I understand Storage Node | Storj Docs to mean that I only need to run the setup once per node.

Now everything is working as I thought it should, thank you!!!