Need help starting 2nd node on same machine

Hi,

I am using a Raspberry Pi 4 with 8 GB RAM, running Debian. The second node's identity was accepted.

On my local network, I can see 192.xx.xx.xx:14002 and node 1 is fine, but on 192.xx.xx.xx:14003 the page says node 2 is offline.

When I do docker ps -a it shows the container is running, but the dashboard shows OFFLINE.

What could be the problem? Here are my configs for both nodes, which run on the same machine.

First node:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967 \
    -p 14002:14002 \
    -e WALLET="?????" \
    -e EMAIL="??????" \
    -e ADDRESS="???????" \
    -e STORAGE="1800GB" \
    --mount type=bind,source="/home/pi/.local/share/storj/identity/storagenode-penknife",destination=/app/identity \
    --mount type=bind,source="/mnt/penknife",destination=/app/config \
    --name storagenode-penknife storjlabs/storagenode:latest

Second node:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967 \
    -p 14003:14002 \
    -e WALLET="???????" \
    -e EMAIL="???????" \
    -e ADDRESS="???????" \
    -e STORAGE="1800GB" \
    --mount type=bind,source="/home/pi/.local/share/storj/identity/storagenode-eneloop",destination=/app/identity-eneloop \
    --mount type=bind,source="/mnt/eneloop",destination=/app/config-eneloop \
    --name storagenode-eneloop storjlabs/storagenode:latest

When I run

docker logs --tail 100 storagenode-eneloop

I get

2020-07-28T20:47:37.686Z        INFO    Configuration loaded    {"Location": "/app/config/config.yaml"}
2020-07-28T20:47:37.691Z        INFO    Operator email  {"Address": "?????????"}
2020-07-28T20:47:37.691Z        INFO    Operator wallet {"Address": "?????????"}
2020-07-28T20:47:38.496Z        INFO    Telemetry enabled
2020-07-28T20:47:38.515Z        INFO    db.migration    Database Version        {"version": 42}
2020-07-28T20:47:39.107Z        INFO    preflight:localtime     start checking local system clock with trusted satellites' system clock.
2020-07-28T20:47:44.973Z        INFO    preflight:localtime     local system clock is in sync with trusted satellites' system clock.
2020-07-28T20:47:44.974Z        INFO    bandwidth       Performing bandwidth usage rollups
2020-07-28T20:47:44.974Z        INFO    trust   Scheduling next refresh {"after": "5h6m45.450694457s"}
2020-07-28T20:47:44.975Z        INFO    Node ??????????? started
2020-07-28T20:47:44.975Z        INFO    Public server started on [::]:28967
2020-07-28T20:47:44.975Z        INFO    Private server started on 127.0.0.1:7778

Did you forward 28968 to the node IP as well?


I have forwarded the port from the router, just like I did for the first node.

Both nodes are using the same DDNS name; is that a problem? Do they need to be different?

The DDNS can be the same. What does your port forwarding look like?


It should be:

-e ADDRESS="???????:28968" \

Port forwarding details here

(screenshot of the router's port-forwarding rules)

My config for node 2 uses this: -e ADDRESS="DDNS:28968" \

Okay, the ports really are open; I used my VPS to check my home port connections.

articulateape@server:~$ nc -vz MY-IP 28967
Connection to MY-IP 28967 port [tcp/*] succeeded!
articulateape@server:~$ nc -vz MY-IP 28968
Connection to MY-IP 28968 port [tcp/*] succeeded!

Then please, check your second identity:
https://documentation.storj.io/dependencies/identity#confirm-the-identity
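For anyone following along, the check on that page boils down to counting "BEGIN" lines in the two certificate files; if I remember the docs right, a correctly signed identity shows 2 in ca.cert and 3 in identity.cert. Here is a sketch using throwaway files purely to illustrate the counting (on a real node, point grep at your actual identity folder instead):

```shell
# Throwaway files standing in for real certs, only to show the counts.
# On a real node you would run something like:
#   grep -c "BEGIN" ~/.local/share/storj/identity/storagenode-eneloop/ca.cert
mkdir -p /tmp/identity-check
printf 'BEGIN\nBEGIN\n' > /tmp/identity-check/ca.cert
printf 'BEGIN\nBEGIN\nBEGIN\n' > /tmp/identity-check/identity.cert
grep -c "BEGIN" /tmp/identity-check/ca.cert        # a signed ca.cert yields 2
grep -c "BEGIN" /tmp/identity-check/identity.cert  # a signed identity.cert yields 3
```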


I checked the identity and the counts are 1 and 2, so I need to create it again.

I created the identity with Windows PowerShell; does the name in the command need to match the node name, like this?

./identity.exe create storagenode-eneloop

Not necessarily, but it's much more convenient to use the same name. The main point is that it should be different from the current one if you create it on the same machine and did not move the previous identity.

The NAME in identity create NAME is just the name of a folder. On Windows it would be %APPDATA%\Storj\Identity\NAME


The node is working now, what a great feeling!

The CLI shows it is online, and the dashboard is also working. No ingress or egress yet, but I think that will come later.

Thank you @Alexey and @Vadim for your help

edit: I have ingress now, feels great!


(moved this post to a more related topic)
Good afternoon All,

@greener, @kevink, @BrightSilence

Hoping someone can confirm that I have things laid out correctly for a new machine I'm planning to run two more nodes on. These will be nodes 5 and 6, and it's the first time I'll be running more than one node on the same machine. For a little background: all four existing nodes seem to be working properly, so what I have below is based on what I've done in the past.

Planned docker start commands look like this:

Storj Node 5:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28971:28967 \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="XXX" \
    -e EMAIL="YYY" \
    -e ADDRESS="ZZZ:28971" \
    -e STORAGE="0.5TB" \
    --mount type=bind,source="/mnt/Storj1/storagenode/Identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/Storj1/storagenode",destination=/app/config \
    --name storagenode1 storjlabs/storagenode:latest

Storj Node 6:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28972:28967 \
    -p 127.0.0.1:14003:14002 \
    -e WALLET="XXX" \
    -e EMAIL="YYY" \
    -e ADDRESS="ZZZ:28972" \
    -e STORAGE="0.5TB" \
    --mount type=bind,source="/mnt/Storj2/storagenode/Identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/Storj2/storagenode",destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest

Then in my router, I have the following:

(screenshot of the router's port-forwarding rules)

Things look right?

Then what about my storj-exporter docker commands?
sudo docker run -d --link=storagenode1 --name=storj-exporter1 -p 9651:9651 anclrii/storj-exporter:latest
sudo docker run -d --link=storagenode2 --name=storj-exporter2 -p 9652:9652 anclrii/storj-exporter:latest

And lastly, my prometheus.yml (truncated, with only the target info listed):

static_configs:
  - targets: ['localhost:9090']
- job_name: StorjNode1
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /
  static_configs:
    - targets: ["192.168.1.174:9651"]
      labels:
        instance: "Node1"
- job_name: StorjNode2
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /
  static_configs:
    - targets: ["192.168.1.112:9651"]
      labels:
        instance: "Node2"
- job_name: StorjNode3
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /
  static_configs:
    - targets: ["192.168.1.200:9651"]
      labels:
        instance: "Node3"
- job_name: StorjNode4
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /
  static_configs:
    - targets: ["192.168.1.165:9651"]
      labels:
        instance: "Node4"
- job_name: StorjNode5
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /
  static_configs:
    - targets: ["192.168.1.114:9651"]
      labels:
        instance: "Node5"
- job_name: StorjNode6
  scrape_interval: 30s
  scrape_timeout: 20s
  metrics_path: /
  static_configs:
    - targets: ["192.168.1.114:9652"]
      labels:
        instance: "Node6"

Ports look fine at first glance. You might want to post your code in code blocks though; the forum strips some formatting out otherwise.

Don’t forget to run the new setup command first. Please refer to documentation.storj.io for the details.
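For reference, the one-time setup run described in the docs looks roughly like this; this is a sketch rather than the authoritative command, and the mount paths are placeholders (here borrowed from the Node 5 config above) that should match each node's own identity and storage locations:

```shell
# Hedged sketch of the setup step (run once per node before the first start).
# Paths below are examples; use each node's own identity and config paths.
docker run --rm -e SETUP="true" \
    --mount type=bind,source="/mnt/Storj1/storagenode/Identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/mnt/Storj1/storagenode",destination=/app/config \
    --name storagenode1 storjlabs/storagenode:latest
```

The container exits after writing the initial config into the config mount.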


looks good to me. good luck


This should be something like

sudo docker run -d --link=storagenode1 --name=storj-exporter1 -p 9651:9651 -e STORJ_HOST_ADDRESS=storagenode1 anclrii/storj-exporter:latest
sudo docker run -d --link=storagenode2 --name=storj-exporter2 -p 9652:9651 -e STORJ_HOST_ADDRESS=storagenode2 anclrii/storj-exporter:latest

Notice that in -p 9652:9651 the internal exporter port is always 9651. Also, if the storagenode container has a different name, you might need to override STORJ_HOST_ADDRESS as well.


So then in the prometheus.yml file, would storagenode2 have a target of IP:9651 or IP:9652?

BTW, have you ever considered registering as a content creator with Brave and connecting it to your GitHub page? I ventured over the other day to send a tip your way, but it said you weren't registered. Just thought I'd ask, as I know you list a "donate" option in many places.

It's still 9652 in Prometheus. Basically, the script inside every exporter container listens on port 9651, but we can't expose the same host port for multiple containers. With -p 9651:9651, Docker exposes that internal port on host port 9651; with -p 9652:9651, it exposes host port 9652 instead.
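A quick sanity check of that mapping, assuming curl is available on the host and using the exporter IP from the config above:

```shell
# Each exporter listens on 9651 inside its container; only host ports differ.
curl -s http://192.168.1.114:9651/   # storj-exporter1, mapped -p 9651:9651
curl -s http://192.168.1.114:9652/   # storj-exporter2, mapped -p 9652:9651
```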

I'll check it out, thanks; no idea what it is :)


Oh, I missed that one, sorry.

Personally I use docker-compose, so all containers for one storagenode live in the same configuration file and network. Then I don't need to expose any ports, and in prometheus.yml I can use the container names, because I connect everything to a prometheus network (I start prometheus with docker-compose too):

version: '3.7'

services:
  storagenode:
    image: storjlabs/storagenode:latest
    container_name: storagenode1
    user: "1000:1000"
    restart: unless-stopped
    ports:
      - 14002:14002
      - 28967:28967
    environment:
      - WALLET=
      - EMAIL=
      - ADDRESS=
      - STORAGE=7TB
    volumes:
      - type: bind
        source: /media/STORJ1/STORJ
        target: /app/config
      - type: bind
        source: /media/STORJ1/identity
        target: /app/identity
      - type: bind
        source: /sharedfolders/storjDB/storj1
        target: /app/dbs
      - type: bind
        source: /sharedfolders/storjLogs/storj1
        target: /app/logs
    networks:
      - default
    stop_grace_period: 300s

  storj-exporter:
    image: storj-exporter
    container_name: storj-exporter1
    user: "1000:1000"
    restart: unless-stopped
    environment:
      - STORJ_HOST_ADDRESS=storagenode1
      - STORJ_API_PORT=14002
      - STORJ_EXPORTER_PORT=9651
    networks:
      - default

  storj-log-exporter:
    image: kevinkk525/storj-log-exporter:latest
    container_name: storj-log-exporter1
    user: "1000:1000"
    restart: unless-stopped
    volumes:
      - type: bind
        source: /sharedfolders/storjLogs/storj1
        target: /app/logs
    command: -config /app/config.yml 
    networks:
      - default

networks:
  default:
    external:
      name: prometheus_default

And here is the separate prometheus compose file:

version: '3.7'

services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    user: "1000:1000"
    ports:
      - 9090:9090
    volumes:
      - /sharedfolders/config/prometheus.yml:/etc/prometheus/prometheus.yml 
      - type: bind
        source: /sharedfolders/prometheus
        target: /prometheus
    restart: unless-stopped
    command: --web.enable-admin-api --storage.tsdb.retention.time=720d --storage.tsdb.retention.size=30GB --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus
    networks:
      - default
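One note on bringing these up, since the node file declares its network as external: the prometheus_default network only exists once the prometheus project has started. Assuming the two compose files live in folders named prometheus and node1 (folder names are my guess here; docker-compose derives the network name from the project folder), the startup order would be roughly:

```shell
# Start prometheus first so docker-compose creates the prometheus_default
# network that the storagenode stack then attaches to as an external network.
(cd prometheus && docker-compose up -d)
(cd node1 && docker-compose up -d)
```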