Having an issue with port forwarding (YAML related?)

I was wondering if anyone could help me…or at least point me in the correct direction.
We are working toward becoming a storage node operator with 12 nodes!

I have one ProxMox Hypervisor running 12 instances (headless) of Ubuntu 22.04 with Docker.
Each node has 5TB of statically mounted storage (via fstab)… which leaves 4TB usable per node, for a grand total of 48TB of usable ZRAID2 storage.

I am having some trouble getting the correct YAML settings:

Each of my nodes has an internal IP:
10.xxx.254.101
10.xxx.254.102
10.xxx.254.103
10.xxx.254.104
10.xxx.254.105
etc…

Since we have one static IP, I want each node to advertise on a consecutive port:
10.xxx.254.101:28967 -> 22101 (external-facing port on the SAME external IP)
10.xxx.254.102:28967 -> 22102 (external-facing port on the SAME external IP)
10.xxx.254.103:28967 -> 22103 (external-facing port on the SAME external IP)
10.xxx.254.104:28967 -> 22104 (external-facing port on the SAME external IP)
10.xxx.254.105:28967 -> 22105 (external-facing port on the SAME external IP)

I'm not sure how to set up port forwarding for this setup. This is my latest try:
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:22101/tcp \
  -p 28967:22101/udp \
  -p 14002:14002 \
  -e WALLET="0xXXXXXXXXXXXXXXXXXXX" \
  -e EMAIL="xxx@xxxxxxx.com" \
  -e ADDRESS="67.133.XXX.XXX:22101" \
  -e STORAGE="4TB" \
  --user $(id -u):$(id -g) \
  --mount type=bind,source="/home/ladmin/.local/share/storj/identity/storagenode",destination=/app/identity \
  --mount type=bind,source="/mnt/jank/store",destination=/app/config \
  --name storagenode storjlabs/storagenode:latest

When I try to verify things… I get an error.

root@storj01:/home/ladmin# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25428f8fe746 storjlabs/storagenode:latest "/entrypoint" 13 seconds ago Up 11 seconds 0.0.0.0:14002->14002/tcp, :::14002->14002/tcp, 28967/tcp, 0.0.0.0:28967->22101/tcp, 0.0.0.0:28967->22101/udp, :::28967->22101/tcp, :::28967->22101/udp storagenode

root@storj01:/home/ladmin# docker exec -it storagenode /app/dashboard.sh
2023-03-11T20:03:33.349Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2023-03-11T20:03:33.349Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2023-03-11T20:03:33.351Z INFO Identity loaded. {"Process": "storagenode", "Node ID": "1dshfkjsdhlkslkdjflsdkj;llk"}
Error: rpc: dial tcp 127.0.0.1:7778: connect: connection refused

Let me know if anyone can help.
Thank you!

You need to change the first number, not the second: with -p the format is <external port>:<container port>, so your external port goes first and the container port stays 28967.

-p 22101:28967/tcp
-p 22101:28967/udp

1 Like

Also, why do you use 12 VMs? A single Docker host can run multiple nodes; you just need unique host ports for each node and its dashboard.
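For example, a second node on the same Docker host only needs different host-side ports, a different ADDRESS, name, identity and data paths. A rough sketch (the wallet, paths and public IP below are placeholders, not taken from your setup):

docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 22102:28967/tcp \
  -p 22102:28967/udp \
  -p 14003:14002 \
  -e WALLET="0xYOURWALLET" \
  -e EMAIL="storj@example.com" \
  -e ADDRESS="your.public.ip:22102" \
  -e STORAGE="4TB" \
  --mount type=bind,source="/mnt/storj/identity/storagenode2",destination=/app/identity \
  --mount type=bind,source="/mnt/storj/storagenode2",destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest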

Also, please do not publish your dashboard to the internet without any protection; anyone would have access to it. Use one of these methods instead: How to remote access the web dashboard - Storj Node Operator Docs or [Tech Preview] Multinode Dashboard Binaries
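One safe way to reach the dashboard without exposing it is an SSH tunnel; a minimal sketch, assuming you can SSH into a machine on that LAN (user, host and node IP are placeholders):

# forward local port 14002 to the first node's dashboard, then open http://localhost:14002
ssh -L 14002:10.xxx.254.101:14002 user@your.public.ip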

Starting multiple new nodes at once is not advisable: each new node must be vetted. An unvetted node can receive only 5% of customer uploads until it is vetted, and to be vetted on one satellite it must pass 100 audits from that satellite. For a single node in the same /24 subnet of public IPs this takes at least a month (often longer).
We filter nodes by the /24 subnet of their public IPs: all nodes behind the same /24 subnet are treated as one big node for uploads, and as separate nodes for downloads, repair egress traffic, audits and online checks, because we want to be as decentralized as possible.
With multiple unvetted nodes in the same /24 subnet, vetting can take roughly as many times longer as there are such nodes (so with 12 nodes vetting at once, expect something on the order of a year).

So it's better to start the next node only when the previous one is almost full, or at least vetted.

2 Likes

Alexey: most of my reasoning for doing it this way is that it fits systems we are already using on site. We cannot afford to buy new gear or tear down existing processes for this project. I wish the directions had been clearer up front.

That said, I am still having an issue with my nodes not staying up… The logs tell me Docker can't read the info.db file, and I am not sure how to adjust permissions in Docker.
drwxr-xr-x 2 root root 0 Mar 20 15:06 .
drwxr-xr-x 2 root root 0 Mar 20 15:05 ..
drwxr-xr-x 2 root root 0 Mar 20 14:58 blobs
drwxr-xr-x 2 root root 0 Mar 20 14:58 garbage
-rwxr-xr-x 1 root root 0 Mar 20 2023 info.db
-rwxr-xr-x 1 root root 32 Mar 4 18:21 storage-dir-verification
drwxr-xr-x 2 root root 0 Mar 20 14:58 temp
drwxr-xr-x 2 root root 0 Mar 20 14:58 trash

2023-03-20 14:53:47,112 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-03-20 14:53:47,112 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-03-20 14:53:47,112 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Error: Error starting master database on storagenode: database: info opening file "config/storage/info.db" failed: database is locked
storj.io/storj/storagenode/storagenodedb.(*DB).openDatabase:331
storj.io/storj/storagenode/storagenodedb.(*DB).openExistingDatabase:308
storj.io/storj/storagenode/storagenodedb.(*DB).openDatabases:283
storj.io/storj/storagenode/storagenodedb.OpenExisting:250

The database is locked, meaning something else has the database open and it is busy. I hope you generated a unique identity for each node and did not clone them; otherwise they will be disqualified pretty quickly, because to the satellite the clones look like one and the same node, just missing each other's pieces.
You should also give each node its own disk; they cannot be combined.
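If you want to see what is holding the lock, a quick check could look like this (the path is based on your mount from earlier; note that lsof will not show locks held by other SMB clients):

# is an old container still running against the same data directory?
docker ps -a
# which local processes have the database open?
sudo lsof /mnt/jank/store/storage/info.db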

For the permissions issue: if you finished the Docker post-install setup, you can run it without sudo. So if you used the --user $(id -u):$(id -g) option in your docker run command, you should change the owner of the data location and identity location to your user, i.e.

sudo chown $(id -u):$(id -g) -R /mnt/storj/storagenode1

where /mnt/storj/storagenode1 is the data (and preferably also the identity) location for your node.
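Applied to the paths from your first post, that would look like:

sudo chown $(id -u):$(id -g) -R /mnt/jank/store
sudo chown $(id -u):$(id -g) -R /home/ladmin/.local/share/storj/identity/storagenode

Note that on a CIFS mount, ownership is normally fixed by the uid=/gid= mount options, so chown may have no effect there.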
Please show your docker run command (you may mask the private information).
Please also note that storagenode is incompatible with any network filesystem such as SMB/NFS/SSHFS/etc.; the only working network storage protocol is iSCSI.
Since you use VMs, it would probably be easier to create a virtual disk for each node and attach it to the corresponding VM.
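If you are not sure what is backing a mount point, you can check the filesystem type like this (path taken from your setup):

findmnt -T /mnt/jank
# or: df -T /mnt/jank

Anything like cifs, nfs or fuse.sshfs in the FSTYPE column means a network filesystem.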

Alexey,
I did generate 12 separate identities, one for each node, so that should not be the problem. I am still having the issue where the node restarts constantly (every 23 seconds or so).
I am using fstab (SMB/CIFS) to mount my storage statically. Each VM/node gets a separate 5TB share (4TB usable) mounted at /mnt/jank (the same mountpoint in each VM/node).

Here is my Docker RUN:
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 22101:28967/tcp \
  -p 22101:28967/udp \
  -p 14002:14002 \
  -e WALLET="0xEEE" \
  -e EMAIL="storj@example.com" \
  -e ADDRESS="xx.xx.xx.xx:22101" \
  -e STORAGE="4TB" \
  --user $(id -u):$(id -g) \
  --mount type=bind,source="/home/ladmin/.local/share/storj/identity/storagenode",destination=/app/identity \
  --mount type=bind,source="/mnt/jank/store",destination=/app/config \
  --name storagenode storjlabs/storagenode:latest

Thank you!

SMB/CIFS, like any other network filesystem (NFS, SSHFS, etc.), is not supported and will likely not work at all, or will stop working later for one reason or another; the only working network storage protocol is iSCSI.

So you should either connect your drives directly to the VM, use virtual disks, or use iSCSI.
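Since the hypervisor is Proxmox, a rough sketch of the virtual-disk approach from the host shell could be (the VM ID, storage pool name and disk size are placeholders; adjust to your environment):

# on the Proxmox host: allocate a new 5000 GB virtual disk and attach it as scsi1 to VM 101
qm set 101 --scsi1 local-zfs:5000

# inside the VM: the new disk shows up as a block device (e.g. /dev/sdb); format and mount it
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/jank

Then add it to /etc/fstab so it mounts at boot, the same way you do now, just without SMB/CIFS in the path.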

@Alexey
I set up my NAS nodes before adding --user $(id -u):$(id -g). Can I use chown now too, to get rid of sudo su?

It depends on the NAS. For example, Docker on Synology doesn't support rootless usage.

You may try to add your user to the docker group and log in again, then try to execute docker ps without sudo. If that works, you can change the owner to $(id -u):$(id -g) and use the --user $(id -u):$(id -g) option.
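On a regular Linux host that would be the standard Docker post-install step; on Synology DSM it may not work or may not persist across updates, so treat this as a sketch for the generic case:

sudo usermod -aG docker $USER
# log out and back in so the new group membership takes effect, then:
docker ps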

@Alexey
Yes, it's a Synology NAS. I have only one user, which is also the admin, but I don't see Docker listed in App Permissions. I think they allow setting user permissions only for the official apps, not for third parties.
I don't know if I have to run some command in the terminal to allow Docker for my user, but that might be overwritten by a DSM update. I tried docker ps -a without sudo su and got this:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?all=1": dial unix /var/run/docker.sock: connect: permission denied

So I'm stuck running docker with sudo su.

Yep. In that case you should not use the --user $(id -u):$(id -g) option in your docker run command.