New node, internet-connected via vpn, same machine, new hdd > setup questions

I want to setup a second node. Setup would be:

  • macOS on an M1 Mac with Docker Desktop; finally seems to run stably.
  • 1 TB CMR HDD ready to run, directly connected
  • VPN server available, OpenVPN *.ovpn, too
  • DDNS service and URL available, verified with an L2TP VPN server running on the same NAS
  • NAS, where VPN server is located, acts as a new internet access point
  • Port forwarded on the VPN server host’s router and enabled on the NAS, too: 5xxx TCP+UDP. Instead of the Storj standard port 28967, 5xxx will be used. Router configuration: TCP+UDP: 5xxx-5xxx > 5xxx-5xxx
  • OpenVPN client to be run within a “slim” ubuntu instance in a docker container
  • Storagenode#2 will be linked to the VPN-container’s network

Sounds a bit crazy, but apart from a used HDD, everything else is available at no extra cost. :wink:

Before setting up the OpenVPN client within the ubuntu docker container, I need to find out the right run command for ubuntu (incl. correct mount(s) + port to be opened). I’ve received the following run command from a friend, but he’s currently offline for a while and cannot help:

docker run -d -it \
-p 28967:28967 \
--mount type=bind,source="/Users/bivvo/vpn",destination="/mnt/vpn" \
--privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 \
--name=ubuntu ubuntu

… where source is the folder containing the *.ovpn file, and the port is, in this example, the Storj standard port.
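For completeness, the way I’d then start the client inside that container would be something like this (just a sketch: “client.ovpn” is a placeholder for my actual profile name, and openvpn still has to be installed first, since the stock ubuntu image doesn’t ship it):

```shell
# enter the running container (named "ubuntu" in the run command above)
docker exec -it ubuntu bash

# inside the container: install the client, then start it with the mounted profile;
# "client.ovpn" is a placeholder for the actual *.ovpn file name
apt update && apt install -y openvpn
openvpn --config /mnt/vpn/client.ovpn --daemon
```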

MY QUESTION #1:
→ shouldn’t 5xxx:5xxx be used here in the ubuntu run command instead of 28967:28967? (confused)

For the 2nd docker node, which should be linked to the vpn connection of the ubuntu container, the run command could look like this:

docker run -d --restart unless-stopped --stop-timeout 300 \
-p 5xxx:28967/tcp \
-p 5xxx:28967/udp \
-p 6xxx:14002 \
-e WALLET="" \
-e EMAIL="abc@mail.com" \
-e ADDRESS="ddns:5xxx" \
-e STORAGE="0.9TB" \
--log-opt max-size=100m \
--log-opt max-file=3 \
--network=container:ubuntu \
--privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 \
--mount type=bind,source="/mnt/disk/storj/identity/storagenode",destination=/app/identity \
--mount type=bind,source="/mnt/disk/storj",destination=/app/config \
--mount type=bind,source="/mnt/ssd/storj",destination=/app/dbs \
--name storagenode storjlabs/storagenode:latest \
--operator.wallet-features=zksync

… where the source values still need to be adapted, and ADDRESS will hold the full DDNS address plus the new port 5xxx. The dashboard needs to be accessible from “outside” via DDNS-URL:6xxx, as my monitoring script needs access to the JSON behind it.

MY QUESTION #2:
→ as soon as question #1 is solved and the VPN / network is running fine: does that look correct to you, or am I missing something?

If someone is willing to help quickly, e.g. via Discord, please DM me.

Only if you change the port/ports in the config.yaml.
I think it’s an internal port in the local Docker network.
Docker has its own little virtual network for its containers, with all sorts of internal IPs and ports, but I’m not 100% sure.

which is why it looks like that

-p 5xxx:28967/tcp \

Personally I don’t change the config.yaml if I can avoid it; everything can be done via the docker run command anyway… so it just adds more confusion and troubleshooting.
but some people use it…

My docker run looks something like this. The filestore / pieces.write parameters make it so that files are written in full rather than in smaller pieces, to reduce IO and to avoid Storj files causing file fragmentation:

# --log-opt max-size=1m   (commented out)
docker run -d --restart unless-stopped --stop-timeout 300 \
-e RUN_PARAMS="--filestore.write-buffer-size 4096kiB --pieces.write-prealloc-size 4096kiB" \
-p 192.168.1.100:28967:28967/tcp -p 192.168.1.100:28967:28967/udp \
-p 192.168.1.100:14002:14002 \
-e WALLET="0x111111111111111111111" \
-e EMAIL="your@email.com" -e ADDRESS="global.ip.inet:28967" \
-e STORAGE="4TB" \
--mount type=bind,source="/sn3/id-sn3",destination=/app/identity \
--mount type=bind,source="/sn3/storj",destination=/app/config \
--name sn3 storjlabs/storagenode:latest


Thank you @SGC. If I understand correctly, I should keep the Storj standard port inside Docker. With that in mind, I think the correct usage would be:

docker run -d -it \
-p 5xxx:28967 \
--mount type=bind,source="/Users/bivvo/vpn",destination="/mnt/vpn" \
--privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 \
--name=ubuntu ubuntu

and:

docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 14002:14002 \
-e WALLET="" \
-e EMAIL="abc@mail.com" \
-e ADDRESS="ddns:5xxx" \
-e STORAGE="0.9TB" \
--log-opt max-size=100m \
--log-opt max-file=3 \
--network=container:ubuntu \
--privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 \
--mount type=bind,source="/mnt/disk/storj/identity/storagenode",destination=/app/identity \
--mount type=bind,source="/mnt/disk/storj",destination=/app/config \
--mount type=bind,source="/mnt/ssd/storj",destination=/app/dbs \
--name storagenode storjlabs/storagenode:latest \
--operator.wallet-features=zksync

What does it mean for the dashboard, which I want to expose to the public? I’m not sure what to configure in the 2nd run command here. Should it be something like public.ddns.com:6xxx:14002?

What do the following parameters do?

RUN_PARAMS="--filestore.write-buffer-size 4096kiB --pieces.write-prealloc-size 4096kiB"

These make it so the storagenode writes files in full rather than in fragmented little pieces.

I don’t think you can bind it to an IP that doesn’t exist on the local system/storagenode host; you set it up normally and then route the port to the LAN storagenode IP in your router.

Basically the dashboard is accessible on the LAN, and then you allow the internet that same access… I can’t recommend that, though; it’s not a very safe practice.

Is that advisable in general?
Can this be added as a parameter for an existing node?

Yeah, while reading your comment I realised that the monitoring script runs on the same machine, so there’s no need to expose the dashboard to the internet. :wink:


So far I haven’t had any problems with it, nor am I aware of others having problems using it, but it is of course a fairly recent thing some of the SNOs started doing, after it was discovered that storagenode operations could scatter a single file across as many as 9 different places, increasing read latency.

Basic file fragmentation… so to avoid that, somebody came up with those parameters for the binary, and I got somebody to help me adapt them so they would work in the docker run command,
because I like to keep my setup there and the rest of the stuff default.

Yes; if memory serves, it’s actually a feature of the node software itself, which is why it looks a bit odd in the docker run command. If your system is stable, I would certainly use it…

I’ve had my fair share of crashes with it on without seeing any detrimental effects, but I run ZFS with a SLOG on enterprise PLP SSDs… so I would be less prone to damage.

But so far it seems to do its job: it makes sure a file is written as one big piece rather than in many small ones.

Update

I got the VPN up and running in the ubuntu container. Two questions; I hope you or someone can help:

  1. OpenVPN blocks the command line in the container CLI. How do I run OpenVPN in the background?
  2. How do I make it connect and reconnect automatically once the connection drops or after a Docker restart?

cc @SGC

You could use screen:
apt install screen
screen -S examplename (start a new named session)
screen -r examplename (reattach to it)
and
screen -ls (list sessions)
and/or screen -d to force-detach a screen that is still attached somewhere else.
Oh yeah, and Ctrl-a then d to exit a screen without closing it.

I’m sure you will figure it out; I just basically listed the commands I use with screen… not sure one needs all the other stuff lol…
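Putting it together for the vpn case, something like this should work (just a sketch; the session name and the .ovpn path are placeholders):

```shell
# -dmS starts a detached screen session named "vpn" running the given command
screen -dmS vpn openvpn --config /mnt/vpn/client.ovpn

screen -ls       # should list "vpn" as Detached
screen -r vpn    # reattach; Ctrl-a then d detaches again without killing it
```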

--restart unless-stopped

that should keep it attempting to reconnect… basically forever, though it does seem to slow down after a while… usually it does reconnect by itself, it just takes like 30 minutes to an hour before it does…
I dunno… it seems to work… but there are a ton of things that can keep the network from coming back immediately, and even some things that will keep it from ever coming back up by itself before the underlying problems are solved…

But that part of the docker run command, which you are already using, should keep restarting the Docker “instance” / storagenode until the connection is restored.

For the container? If the container is “rebooting”, I don’t think the OpenVPN connection is automatically re-established; just ubuntu will be started.

Another thought and question: if the node is tunnelled via VPN to another internet access point, I would assume that having the same port 28967 for both container nodes on the same machine won’t cause trouble. Hmm?

Same for the dashboard: 14002 + localhost access → not sure whether that will work or cause trouble on the same machine. Currently I am able to open localhost:14002 for the dashboard of the first node; I’m not sure what happens when I run the second run command to launch the second node for the first time. I want to avoid “crashing” the first node. That should normally be prevented by --network=container:ubuntu; at least that would be my expectation.

--restart unless-stopped
will keep restarting the docker storagenode container until a connection is established.

You can add it as a cron job, I think… crontab -e
then you add like @reboot /example/openvpn.sh

or something like that.
you can find an example and further information here.
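A minimal sketch of that, assuming the script lives at /example/openvpn.sh as in the @reboot line above (the .ovpn path is a placeholder):

```shell
# create a tiny helper script that brings the tunnel up in the background
mkdir -p /example
cat > /example/openvpn.sh <<'EOF'
#!/bin/sh
# --daemon sends openvpn to the background once it has started
openvpn --config /mnt/vpn/client.ovpn --daemon
EOF
chmod +x /example/openvpn.sh

# then register it with: crontab -e   and add the line:
#   @reboot /example/openvpn.sh
```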

You will want to use unique ports for each node running on the same machine/LAN IP.

Like what you did here… you can only run one storagenode on either port.
Then the port would be 5xxx for that node, and you could run the other node on the default, or whatever.

It’s an address, and each node needs its own address to avoid confusion.
The same IP can be used, but then the ports need to be different for each node.

The 28967 part of -p 5xxx:28967/tcp \ doesn’t need to be changed, because that is internal to Docker, so you can basically ignore it… the first part is the port on the LAN network you would access it from… like the example of the dashboard at 6xxx: you would access that at ip:6xxx.

If you changed it to 6xxa, then the dashboard for that node would be on ip:6xxa, of course using port numbers instead of letters :smiley:

Not sure I explained that very well, but it should make sense.

everything else aside from that would be the routing.
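As a sketch, two nodes side by side would look like this; the wallet, email, and mount flags are left out here so the port part stays visible, and the names and port numbers are just examples:

```shell
# node 1: default host ports; the left side of -p is the host/LAN port,
# the right side is the port inside the container and stays the same
docker run -d --name storagenode1 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  storjlabs/storagenode:latest

# node 2: unique host ports, identical container-internal ports
docker run -d --name storagenode2 \
  -p 5001:28967/tcp -p 5001:28967/udp -p 6001:14002 \
  storjlabs/storagenode:latest
# dashboards: node 1 at ip:14002, node 2 at ip:6001
```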


Thank you! It very much helps me avoid mistakes.

I’ll try it like this: forward ports 5xxx + 6xxx through the VPN and the ubuntu Docker container to the storagenode container, and run the node command with the 5xxx:28967 and 6xxx:14002 parameters. My feeling is that should work.

While writing this: which port should the vpn run command have? Hmm… 5xxx (is 6xxx missing…?) @SGC

Meanwhile I noticed that the VPN has no “pure” internet access, but seems to be limited to the ports mentioned. So I am not able to run e.g. apt update within the ubuntu container. :smiley: I’ll need to change that this evening, and then I’ll try to start the new node as described. Crossing fingers :metal:t2:

Well, 6xxx would be your dashboard for that node… so you wouldn’t want that accessible from the internet, I don’t think… so 5xxx, I guess…
I’m not familiar with your VPN, so I cannot really say much about that… but 5xxx TCP and UDP is what the storagenode will use to communicate with the internet / LAN.


This is a hack, not a solution. Docker allows you to run your container in the background (see the Docker run reference in the Docker documentation). Instead of using the flags -it, you should provide, after the image name, a command to run when your container starts:

docker run -d <YOUR_FLAGS> ubuntu <COMMAND_TO_START_OPENVPN>

About the port forwarding: your architecture explanations are quite confusing. It would be easier to help you with a diagram of what you want to achieve.
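Filled in for the setup in this thread, that could look roughly like this (a sketch: “client.ovpn” is a placeholder, and installing openvpn on every start is wasteful; building a small image with openvpn preinstalled would be cleaner):

```shell
docker run -d \
  --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  --mount type=bind,source="/Users/bivvo/vpn",destination="/mnt/vpn" \
  --name=ubuntu ubuntu \
  sh -c "apt update && apt install -y openvpn && \
         exec openvpn --config /mnt/vpn/client.ovpn"
```

Running openvpn in the foreground here is deliberate: it becomes the container’s main process, so the container stops when the tunnel process dies, and a --restart policy can bring it back up.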


basically:

  • I want to run a second node on the same machine, without the hassle of sharing the traffic
  • therefore I want to route the internet access of that 2nd node through a VPN to another internet access point

my solution currently looks like:

  • a new docker container running ubuntu
  • within ubuntu, connect to the external vpn
  • a second new docker container running the second node
  • … where network is routed through the vpn container

Of course I want to avoid issues breaking my existing node, which runs with Docker on the same machine.

I have described my internet connection issue on the “vpn container” here.

Meanwhile I’ve tested the VPN on my iPhone with the OpenVPN app, and it works well. What is strange is that the “Ubuntu VPN container” has internet access when the VPN is disconnected, but no access when it is connected. That’s weird. My current run command for the VPN container:

docker run -d -it \
-p 5xxx:28968/tcp \
-p 5xxx:28968/udp \
--mount type=bind,source="/Users/bivvo/vpn",destination="/mnt/vpn" \
--privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 \
--name=ubuntu ubuntu

It’s done: the second node is up and running:

  • on the same machine within docker
  • tunnelled via vpn through a second ubuntu container

thank you @SGC @Carlotronics @TheMightyGreek


It’s much better to use systemd …

something like this:

# /etc/systemd/system/storj-vpn.service
[Unit]
Description=OpenVPN tunnel for the storj node
ConditionPathExists=|/usr/bin
After=network.target

[Service]
User=storj
ExecStart=/full/path/to/the/openvpn/command

# Restart every >2 seconds to avoid a StartLimitInterval failure
RestartSec=3
Restart=always

[Install]
WantedBy=multi-user.target

And then test it:

systemctl enable storj-vpn
systemctl start storj-vpn

Using systemd ensures that the network is up before starting the VPN connection… You can also configure the service to check for Docker and the storj container’s status.


thx. how?

I’ve put the DDNS ping into crontab, as I found it too complicated with systemd :wink:

i don’t think one can use systemd if one boots over zfs…
anyways i’ve had problems with that in the past when trying to do certain configurations.
think i had to use grub instead.
took me so long to figure out because everybody was always just assuming it was using systemd
