How to add a second node

Hello,

Can you please help me with adding a second node? I am struggling with the correct declaration that will make sure I use unique ports for the second node.

Right now, I am using the following:

docker run --rm -e SETUP="true" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/mnt/WD-WCC7K3NHF837/identity",destination=/app/identity \
    --mount type=bind,source="/mnt/WD-WCC7K3NHF837",destination=/app/config \
    --name WD-WCC7K3NHF837 storjlabs/storagenode:latest

This creates a stub config in my designated folder under /mnt.

Then, I launch it:

docker run -d \
  --restart unless-stopped \
  --stop-timeout 300 \
  --memory="4g" \
  -p "28968:28968/tcp" \
  -p "28968:28968/udp" \
  -p "192.168.1.182:14003:14003" \
  -e CONSOLE_ADDRESS="192.168.1.182:14003" \
  -e SERVER_PRIVATE_ADDRESS="127.0.0.1:7779" \
  -e WALLET="mywallet" \
  -e EMAIL="myemail" \
  -e ADDRESS="mydynamicdns:28968" \
  -e STORAGE="3697GB" \
  --user "1000:1000" \
  --mount type=bind,source="/mnt/WD-WCC7K3NHF837/identity",destination=/app/identity \
  --mount type=bind,source="/mnt/WD-WCC7K3NHF837",destination=/app/config \
  --name "WD-WCC7K3NHF837" \
  storjlabs/storagenode:latest

I added all the new ports, 28968 and 14003. The second node starts correctly and runs in the background. The problem is that I cannot see anything on port 14003. I tried 127.0.0.1:14003 and 192.168.1.182:14003 (my PC's IP), both from inside and outside of that PC.
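For example, neither of these gets any response when run on that PC (assuming curl is available):

curl -I http://127.0.0.1:14003/
curl -I http://192.168.1.182:14003/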

Inspecting NetworkSettings shows some problems. This is my first node, which has been running nicely for weeks:

$ docker inspect c3522791a8b4 --format='{{json .NetworkSettings.Ports}}' | jq
{
  "14002/tcp": [
    {
      "HostIp": "192.168.1.182",
      "HostPort": "14002"
    }
  ],
  "28967/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "28967"
    },
    {
      "HostIp": "::",
      "HostPort": "28967"
    }
  ],
  "28967/udp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "28967"
    },
    {
      "HostIp": "::",
      "HostPort": "28967"
    }
  ]
}

And this is the second node, spawned using command above:

$ docker inspect b5029286a2c0 --format='{{json .NetworkSettings.Ports}}' | jq
{
  "14002/tcp": null,
  "14003/tcp": [
    {
      "HostIp": "192.168.1.182",
      "HostPort": "14003"
    }
  ],
  "28967/tcp": null,
  "28968/tcp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "28968"
    },
    {
      "HostIp": "::",
      "HostPort": "28968"
    }
  ],
  "28968/udp": [
    {
      "HostIp": "0.0.0.0",
      "HostPort": "28968"
    },
    {
      "HostIp": "::",
      "HostPort": "28968"
    }
  ]
}

I don't know why these ports are null, or why there is no output on port 14003 despite the node running:

Storage Node Dashboard ( Node Version: v1.114.6 )

======================

ID     xxxxx
Status ONLINE
Uptime 12m50s

                   Available          Used      Egress       Ingress
     Bandwidth           N/A     183.07 MB     2.85 MB     180.21 MB (since Oct 1)
          Disk       3.70 TB     159.71 MB
Internal 127.0.0.1:7778
External my_dynamic_DNS_address:28968

My last questions, please:

  1. Using the same wallet and e-mail for all nodes is allowed, right?
  2. How much space should I declare for Storj to use, in GB? How many GB should I leave free? I have 4 TB disks, which have 3724 GiB formatted and 3698 GiB free when completely empty. Currently I told Storj to use -e STORAGE="3697GB", but that may be too much?

Thank you!

I just noticed a problem.
docker exec -it WD-WCC7K3NHF837 /app/dashboard.sh
Returns:

ID     xxxxx
Status ONLINE
Uptime 3m6s

                   Available          Used      Egress       Ingress
     Bandwidth           N/A     105.48 MB     1.26 MB     104.22 MB (since Oct 1)
          Disk       3.70 TB     565.61 MB
Internal 127.0.0.1:7778
External mydynamicdns:28968

Why 7778? That belongs to the first node. Shouldn't it be 7779, since I passed -e SERVER_PRIVATE_ADDRESS="127.0.0.1:7779"? Or is this Docker-internal communication that will not interfere with the other node?

You have several issues in your docker run command:

The CONSOLE_ADDRESS and SERVER_PRIVATE_ADDRESS variables will not work, because if you want to use environment variables to change options, you need to use the STORJ_ prefix, or you need to provide them as command line arguments after the image name.
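For example, as environment variables (STORJ_CONSOLE_ADDRESS is used later in this thread; STORJ_SERVER_PRIVATE_ADDRESS is assumed here by the same naming pattern for server.private-address):

-e STORJ_CONSOLE_ADDRESS=":14003" \
-e STORJ_SERVER_PRIVATE_ADDRESS="127.0.0.1:7779" \

or as command line arguments after the image name:

storjlabs/storagenode:latest --console.address=":14003" --server.private-address="127.0.0.1:7779"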

Please note that I removed the IP from the console address above, because it is not available inside the container unless you use --network host (which you are not).
You also do not need to change the private address unless you use the --network host option. You also shouldn't change the container-side ports in your -p 28968:28968 mappings, because that will not work unless you also change the server.address option to :28968 instead of the default :28967 in your config.yaml (and I do not see that you added this option either as an environment variable or as a command line argument).

The dashboard still shows Internal 127.0.0.1:7778 because it has not changed (you used the wrong variable name, see above).
But if you did change it (which is not needed unless you use the --network host option), then you would be forced to pass this private address to almost every command for that node, e.g.

docker exec -it storagenode ./dashboard.sh --server 127.0.0.1:7779
docker exec -it storagenode /app/bin/storagenode exit-satellite --config-dir config --server.private-address 127.0.0.1:7779

and so on.

The correct command should be:
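A sketch of what that would look like, keeping everything else from the command above and changing only the host side of each port mapping:

docker run -d \
  --restart unless-stopped \
  --stop-timeout 300 \
  --memory="4g" \
  -p "28968:28967/tcp" \
  -p "28968:28967/udp" \
  -p "192.168.1.182:14003:14002" \
  -e WALLET="mywallet" \
  -e EMAIL="myemail" \
  -e ADDRESS="mydynamicdns:28968" \
  -e STORAGE="3697GB" \
  --user "1000:1000" \
  --mount type=bind,source="/mnt/WD-WCC7K3NHF837/identity",destination=/app/identity \
  --mount type=bind,source="/mnt/WD-WCC7K3NHF837",destination=/app/config \
  --name "WD-WCC7K3NHF837" \
  storjlabs/storagenode:latest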

So you would change only the left (host) part of each port mapping; the right part is the port inside the container, and without unneeded modifications to config.yaml you do not need to change it.

You must use the same wallet address for all your nodes (a ToS requirement) and may use the same email address for your nodes. The email address is used for notifications about online/offline, suspended/unsuspended, and disqualification.

You may take the result of df -h and specify it as TB/GB/… in STORAGE; this gives you roughly the recommended 10% reserve (because df -h reports in binary units TiB/GiB/…, while the Storj software uses SI (decimal) units TB/GB/…).
However, if you like risk, you may specify 100% of the free space. The node has an emergency threshold of 5 GB of free space, either on the disk or in the allocation, and will notify the satellites that it's full. But if we introduce a bug and this emergency measure doesn't work, your node may stop working due to lack of free space.
For me your STORAGE="3697GB" is pretty safe, because your disk actually has 3698 GiB free, so there would be a reserve of about 273 GB:

(3698 * 1024 * 1024 * 1024 - 3697 * 1000 * 1000 * 1000) / 1e9 = 273.697265152


Why all this stuff?
I'll give you my line for comparison, then you will probably see the mistakes:

docker run -d --restart unless-stopped --stop-timeout 300 \
                -p 28968:28967/tcp \
                -p 28968:28967/udp \
                -p 14003:14002 \
                -e WALLET="$sWallet" \
                -e EMAIL="$sEmail" \
                -e ADDRESS="$sAddress" \
                -e STORAGE="$sSize" \
                --user $(id -u):$(id -g) \
                --mount type=bind,source="$sIDFolder",destination=/app/identity \
                --mount type=bind,source="$sNodeMnt",destination=/app/config \
                --mount type=bind,source="$sDBFolder",destination=/app/dbs \
                --log-opt max-size=20m --log-opt max-file=3 \
                --name "storagenode" storjlabs/storagenode:latest \
                --storage2.monitor.minimum-disk-space="1GiB" \
                --filestore.write-buffer-size="2MiB" \
                --storage2.min-upload-speed=16kB \
                --storage2.min-upload-speed-grace-duration=10s \
                --storage2.database-dir="dbs"

This stuff may be required if they were using the --network host option, because then all ports from the container are exposed directly to the host, so they would have to change every listening port in the node's configuration (either as a command line option, as an environment variable, or as an option in the config.yaml file).
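A sketch of what that would mean for the second node in this thread (STORJ_SERVER_PRIVATE_ADDRESS is assumed by analogy with the other STORJ_ variable names):

docker run -d --restart unless-stopped --stop-timeout 300 \
  --network host \
  -e STORJ_SERVER_ADDRESS=":28968" \
  -e STORJ_SERVER_PRIVATE_ADDRESS="127.0.0.1:7779" \
  -e STORJ_CONSOLE_ADDRESS=":14003" \
  -e WALLET="mywallet" -e EMAIL="myemail" \
  -e ADDRESS="mydynamicdns:28968" -e STORAGE="3697GB" \
  --user "1000:1000" \
  --mount type=bind,source="/mnt/WD-WCC7K3NHF837/identity",destination=/app/identity \
  --mount type=bind,source="/mnt/WD-WCC7K3NHF837",destination=/app/config \
  --name "WD-WCC7K3NHF837" storjlabs/storagenode:latest

No -p mappings are needed in that case, because with host networking the node's listening ports are bound on the host directly.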

However, they also need to use the correct notation for the variable names; CONSOLE_ADDRESS and SERVER_PRIVATE_ADDRESS without the STORJ_ prefix are incorrect, so they have no effect.

If they don't use --network host, then none of this is needed and they may use your command as a template.

I would also recommend installing the compose plugin for Docker and using a docker-compose.yaml file instead.
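A minimal sketch for one node, reusing the ports and paths from this thread (adjust the service name, ports and mounts per node):

services:
  storagenode-WD-WCC7K3NHF837:
    image: storjlabs/storagenode:latest
    restart: unless-stopped
    stop_grace_period: 300s
    user: "1000:1000"
    ports:
      - "28968:28967/tcp"
      - "28968:28967/udp"
      - "192.168.1.182:14003:14002"
    environment:
      WALLET: "mywallet"
      EMAIL: "myemail"
      ADDRESS: "mydynamicdns:28968"
      STORAGE: "3867GB"
    volumes:
      - type: bind
        source: /mnt/WD-WCC7K3NHF837/identity
        target: /app/identity
      - type: bind
        source: /mnt/WD-WCC7K3NHF837
        target: /app/config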

This would allow you to update and start any nodes that are not running as simply as:

docker compose up -d --pull always

You look, but you don’t see …

Your 14003:14003 should be 14003:14002.

Mine doesn't use --network host either; otherwise those port remaps wouldn't work.

Besides, what's the point of also including an IP address?

Furthermore, those CONSOLE_ADDRESS and SERVER_PRIVATE_ADDRESS variables are of no use and should be left out.

Thank you both, your messages are invaluable to me. If you ask me why I used such and such a value, I admit that mostly I don't know what I am doing; I'm learning everything as I go.
I have now modified my starting script to do the following:
Example for the second node:

docker run -d \
--restart unless-stopped \
--stop-timeout 300 \
--memory="4g" \
-p "28968:28968/tcp" \
-p "28968:28968/udp" \
-p "192.168.1.182:14003:14002" \
-e STORJ_SERVER_ADDRESS=":28968" \
-e STORJ_CONSOLE_ADDRESS=":14003" \
-e WALLET="x" \
-e EMAIL="x" \
-e ADDRESS="x" \
-e STORAGE="3867GB" \
--user "1000:1000" \
--mount type=bind,source="/mnt/WD-WCC7K3NHF837/identity",destination=/app/identity \
--mount type=bind,source="/mnt/WD-WCC7K3NHF837",destination=/app/config \
--name "WD-WCC7K3NHF837" storjlabs/storagenode:latest

Every instance of the 28968 and 14003 port numbers will be incremented for subsequent nodes.
With this setup, I have a working dashboard.sh and a working web page on port 14003, great!

Do any other variables need the STORJ_ prefix?

STORAGE has been increased according to your instructions, @Alexey:

df
Filesystem       1K-blocks       Used  Available Use% Mounted on
/dev/sde1       3905108984   27259964 3877849020   1% /mnt/x

3877849020 1K blocks is about 3971 GB (df counts 1024-byte blocks). Therefore I'm giving 3867 GB to Storj, leaving a buffer of roughly 100 GB. Does this sound healthy?
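A quick check of that conversion:

echo "3877849020 * 1024 / 1000 / 1000 / 1000" | bc -l    # ≈ 3970.9 GB available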

@JWvdV, you are using:

-e STORAGE="$sSize" \
--storage2.monitor.minimum-disk-space="1GiB" \

What is $sSize? Do I need to declare it and pass it to Docker, or does it mean the entire space will be used automatically?

What about those:

--filestore.write-buffer-size="2MiB" \
--storage2.min-upload-speed=16kB \
--storage2.min-upload-speed-grace-duration=10s \

How do these options improve the way the node handles things?

Thank you again, guys.

In the next post I will share my starting script; you are welcome to give any feedback.

Just leave this out, why do you include it?


Get rid of this too…


If I don't include 192.168.1.182, what will it default to? 127.0.0.1? Then it might not be accessible from my LAN.

If I don't declare port 14003, will the web page work at all? :sweat_smile:

No, it's calculated beforehand in the script. I actually use 93% of the space.

sSize="$( df --output=size "$sNodeMnt" | awk 'NR==2 { printf ( "%u", ( 0.93 * $1 / 1000000 + 0.5 ) ) }' )GB"
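With the df output shown earlier in this thread (a total size of 3905108984 1K blocks), that formula evaluates to roughly 0.93 × 3905108984 / 1000000 + 0.5 ≈ 3632, i.e. sSize="3632GB".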

90% used to be the recommended allocation.

Lately an option has been added to use all the space of a disk, but I don't actually know how to configure it; I haven't had time to look into it.


I removed both STORJ_ variables and it still works! Thank you.

Second node command:

docker run -d --restart unless-stopped \
--stop-timeout 300 \
--memory="4g" \
-p "28968:28968/tcp" \
-p "28968:28968/udp" \
-p "192.168.1.182:14003:14002" \
-e WALLET="x" \
-e EMAIL="x" \
-e ADDRESS="x:28968" \
-e STORAGE="3867GB" --user "1000:1000" \
--mount type=bind,source="/mnt/WD-WCC7K3NHF837/identity",destination=/app/identity \
--mount type=bind,source="/mnt/WD-WCC7K3NHF837",destination=/app/config \
--name "WD-WCC7K3NHF837" storjlabs/storagenode:latest

It works well.

Now let me try removing the 192.168.1.182 IP from there and see if it's still accessible on the LAN from another computer.

It will default to 0.0.0.0, meaning all interfaces. Since you're probably not connected to two networks, the result is most likely the same, but you won't get into trouble if for one reason or another the assigned IP address is different at some point.
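In other words, the only difference between the two mappings is the host bind address:

-p 192.168.1.182:14003:14002   # dashboard reachable only via 192.168.1.182
-p 14003:14002                 # binds to 0.0.0.0 and ::, i.e. all interfaces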

Sorry to say, but I really doubt whether you have ever read any manual, including the one in the Storj docs. As far as I remember, no manual states this to be necessary.

Thanks! In that case I prefer to keep it bound to the LAN address rather than all interfaces. These pages being accessible through the LAN IP is totally fine for my use case.

I've read extensively about how to set up a node, but never got to any documentation where multiple nodes are explained. If there is such a document, it's totally my fault for missing it.

I have some SMR disks, which perform better when you write a piece in one go. Since most pieces are below 2 MB, the 2 MiB write buffer makes that happen. Besides, there is no unnecessary disk wear when a piece upload is cancelled for any reason.

Furthermore, since some of these nodes have restricted concurrent uploads (managed via config.yaml), I also restrict slow uploaders.


Here's the script I am using to manage all nodes. It's in crontab, starting all nodes after a reboot. It can also be called during normal operation; it will stop and then restart the nodes.

#!/usr/bin/env python3

import os
import subprocess

# Define the common parameters
WALLET = "x"
EMAIL = "x"
ADDRESS = "x"

# Free space of 4TB empty drive:
# df
# Available 3877849020 1k blocks = 3877 GB

# 	node name			mount point					port   port   capacity
NODES = {
    "PK2334PBJ8WX9T":	("/mnt/PK2334PBJ8WX9T",		28967, 14002, "3867GB"),
    "WD-WCC7K3NHF837":  ("/mnt/WD-WCC7K3NHF837",    28968, 14003, "3867GB"),
    #"S301AT7P":			("/mnt/S301AT7P",			28968, 14003, "3697GB"),
    #"S301AT9B":			("/mnt/S301AT9B",			28969, 14004, "3697GB"),
    #"S30198AE":			("/mnt/S30198AE",			28970, 14005, "3697G"),
    #"S30198MA":			("/mnt/S30198MA",			28971, 14006, "3697GB"),
}

# Get the IP address of the eth0 interface
def get_ip_address(interface='eth0'):
    try:
        ip_info = subprocess.check_output(f'ip addr show {interface}', shell=True).decode()
        for line in ip_info.splitlines():
            if 'inet ' in line:
                return line.split()[1].split('/')[0]
    except subprocess.CalledProcessError:
        return None

IP_ADDRESS = get_ip_address()

# Get UID and GID
UID = subprocess.check_output('id -u', shell=True).decode().strip()
GID = subprocess.check_output('id -g', shell=True).decode().strip()

# Loop through each node and start the Docker container
for NODE_NAME, (MOUNT_LOCATION, NODE_DATA_PORT, NODE_WEB_PORT, STORAGE_CAPACITY) in NODES.items():
    ca_key_path = os.path.join(MOUNT_LOCATION, "identity/ca.key")
    
    if not os.path.isfile(ca_key_path):
        print(f"Error: ca.key file not found in {MOUNT_LOCATION}/identity. Skipping {NODE_NAME}.")
        continue

    # Attempt to remove the container if it exists
    try:
        existing_container = subprocess.check_output(f'docker ps -aq -f name={NODE_NAME}', shell=True).decode().strip()
        
        if existing_container:
            # Check if the container is running
            running_container = subprocess.check_output(f'docker ps -q -f name={NODE_NAME}', shell=True).decode().strip()
            if running_container:
                print(f"Stopping the running container: {NODE_NAME}")
                subprocess.run(f'docker stop -t 300 {NODE_NAME}', shell=True)

            print(f"Removing the container: {NODE_NAME}")
            subprocess.run(f'docker rm {NODE_NAME}', shell=True)
    except subprocess.CalledProcessError:
        pass  # If the container doesn't exist, ignore the error

    # Run the Docker container
    print(f"Starting the Docker container for {NODE_NAME}...")
    command = (
        f'docker run -d --restart unless-stopped --stop-timeout 300 --memory="4g" '
        f'-p "{NODE_DATA_PORT}:{NODE_DATA_PORT}/tcp" '
        f'-p "{NODE_DATA_PORT}:{NODE_DATA_PORT}/udp" '
        f'-p "{IP_ADDRESS}:{NODE_WEB_PORT}:14002" '
        f'-e WALLET="{WALLET}" '
        f'-e EMAIL="{EMAIL}" '
        f'-e ADDRESS="{ADDRESS}:{NODE_DATA_PORT}" '
        f'-e STORAGE="{STORAGE_CAPACITY}" '
        f'--user "{UID}:{GID}" '
        f'--mount type=bind,source="{MOUNT_LOCATION}/identity",destination=/app/identity '
        f'--mount type=bind,source="{MOUNT_LOCATION}",destination=/app/config '
        f'--name "{NODE_NAME}" storjlabs/storagenode:latest'
    )
    # Print the constructed command
    print(f"Running command: {command}")
    
    # Execute the command
    subprocess.run(command, shell=True)

Essentially, it’s doing the following, for each node declared:

docker stop -t 300 (nodename)
docker rm (nodename)
docker run -d (all parameters)

It does that in a loop. Checks are in place to see whether the identity/ca.key file is present, meaning the volume is correctly mounted; it will skip a node if any problem is found. My mount points are the serial numbers of the drives. For example, /mnt/WD-WCC7K3NHF837 is my Western Digital drive; if it doesn't mount or work, I will instantly know which drive it is.
You are welcome to scrutinize. :sweat_smile:
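For reference, the crontab entry that launches it after a reboot could look roughly like this (the script path here is hypothetical):

@reboot /usr/bin/python3 /home/me/start_storagenodes.py >> /home/me/storagenodes-start.log 2>&1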

Yes, I've seen it mentioned in the posts too and saved it somewhere, but didn't get to it yet. Maybe I should :smiley:

I rest my case, since you don't seem to understand it. There is no difference between having it with or without that IP address, unless it weren't IPv4 or you were on multiple LANs.

I may be understanding this wrong, please bear with me. But I have multiple virtual networks on this computer through libvirt, so declaring 192.168.1.x prevents this address from being accessible anywhere outside of my normal LAN.

Might be, but there is also the Docker documentation. Besides, it's really much easier to help you if there is some basic knowledge of networking.

Since you're running the containers with --restart unless-stopped, they will also restart after a reboot, so you also need to stop the running instances first.

I’m just wondering, why not a shell script?