Docker on Diskstation NAS - proper way to run storagenode

I’ve seen a new parameter in the docs [--user $(id -u):$(id -g)] and, as I’m not a Linux user, I wonder what it means and what the proper and most secure way is to run a storagenode on a Synology Diskstation. I use the NAS only for the storagenode and I intend to install a few more machines in different locations.
My way to install and run a Docker storagenode, until now, was:
-make an admin account with a name… “xjohnx”, in the administrators group;
-deactivate the default admin and guest accounts;
-logged in as xjohnx, install Docker and activate SSH;
-with PuTTY, from a Windows PC, log in as xjohnx;
-sudo su or sudo -i to root;
-run the storagenode commands for Docker, without the [--user $(id -u):$(id -g)];
-exit, exit and relax… my node is up.
-all the stop, run, rm storagenode commands I run with PuTTY from the PC.

  1. Now I wonder: what does the --user parameter bring that’s new, and what should I change in my method?
  2. On new machines, can I make a new user in the Users group, with limited rights, log in with PuTTY as that user and run the docker commands without sudo su or sudo -i?
  3. Does the Terminal window of the Docker app in DSM do the same things as PuTTY?

All my Docker commands, run in PuTTY as the admin “xjohnx”:

sudo su
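# raise the UDP receive buffer (recommended for QUIC); the sysctl.conf line keeps the setting across reboots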
echo "net.core.rmem_max=2500000" >> /etc/sysctl.conf
sysctl -w net.core.rmem_max=2500000
docker pull storjlabs/storagenode:latest
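# one-time setup run: creates config.yaml in the config folder (run only once per identity)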
docker run --rm -e SETUP="true" \
    --mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
    --mount type=bind,source="/volume1/Storj/",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
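# the actual long-running node container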
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 14002:14002 \
    -p 5999:5999 \
    -e WALLET="xxxxxxxxxxxxxx" \
    -e EMAIL="xxxxxxxxxxxxxxx" \
    -e ADDRESS="xxxxxxxxxxxx:28967" \
    -e STORAGE="xxTB" \
    --mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
    --mount type=bind,source="/volume1/Storj/",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest \
    --operator.wallet-features=zksync \
    --log.level=error \
    --debug.addr=":5999"
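# watchtower keeps the storagenode image updated automatically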
docker pull storjlabs/watchtower
docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode watchtower --stop-timeout 300s
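
For reference (not part of my original steps), a quick way to check that the node actually came up, using the same container names as above:

docker ps
docker logs --tail 20 storagenode

docker ps should list storagenode and watchtower as “Up”, and with --log.level=error the log should stay mostly empty.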

To run a second node on the same NAS, on a second HDD, on the same IP, no DDNS:
-where should I put the new name storagenode2 in these commands?
-what ports should I change?
-should I run a second watchtower?
-what must I change in these commands?
-if I use the --user parameter, how will I run these? As sudo -i? Or logged in as the user, with no sudo -i?

docker pull storjlabs/watchtower
docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode watchtower --stop-timeout 300s

You have to change the left (host) side of the ports, the volume paths and the --name parameter.
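
For example, assuming the second HDD is volume2 and the identity/data for the second node live under /volume2/Storj2/ (the paths, wallet and address are placeholders, adjust to your setup), the second node could look roughly like this:

docker run --rm -e SETUP="true" \
    --mount type=bind,source="/volume2/Storj2/Identity/storagenode2/",destination=/app/identity \
    --mount type=bind,source="/volume2/Storj2/",destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967/tcp \
    -p 28968:28967/udp \
    -p 14003:14002 \
    -p 6000:5999 \
    -e WALLET="xxxxxxxxxxxxxx" \
    -e EMAIL="xxxxxxxxxxxxxxx" \
    -e ADDRESS="xxxxxxxxxxxx:28968" \
    -e STORAGE="xxTB" \
    --mount type=bind,source="/volume2/Storj2/Identity/storagenode2/",destination=/app/identity \
    --mount type=bind,source="/volume2/Storj2/",destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest \
    --operator.wallet-features=zksync \
    --log.level=error \
    --debug.addr=":5999"

Note that ADDRESS has to carry the new external port (28968 here), and the router needs a forwarding rule for it.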

I think it’s good to have a separate watchtower for each node as they then can update themselves independently of each other.

A single watchtower can also update multiple nodes; just add the container names separated by spaces, i.e.
... storjlabs/watchtower storagenode storagenode2 watchtower ....
Remember to stop the currently running watchtower if you want to do that.
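
Based on the command above, the combined watchtower could look like this (stop and remove the old one first):

docker stop watchtower
docker rm watchtower
docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode storagenode2 watchtower --stop-timeout 300s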

It shouldn’t matter much whether the nodes are on different hard drives. Also, watchtower now only updates the base image; the node version itself is updated inside the container after startup and follows a rolling update procedure (nodes update in waves, and the update order is tied to the NodeID).

It’s to improve security: the option passes your user ID and group ID to the container, so such a container is supposed to be run without sudo. For that to work, your user should be in the docker group, so you can execute docker run without sudo.
However, I’m not sure this works on a NAS the same way as on Linux. So, if you have to run docker with sudo, the --user option should not be used.
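
On a plain Linux machine the steps would look roughly like this; on DSM group membership is normally handled through the Control Panel instead, and the group and folder names below are only assumptions:

# generic Linux sketch, not Synology-specific
sudo usermod -aG docker xjohnx              # allow xjohnx to use docker without sudo (log out and back in afterwards)
sudo chown -R xjohnx:users /volume1/Storj   # the mounted identity/data folders should be owned by that user
# then, logged in as xjohnx, no sudo:
docker run -d --restart unless-stopped --stop-timeout 300 \
    --user $(id -u):$(id -g) \
    ...   # the rest of the run command stays the same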

It didn’t work when I kept the internal port 28967 on the second node, with a 28968-to-28967 port forward in the router. Maybe it’s a limitation of the router or of the NAS/Docker… After many starts and removals of storagenode2, I managed to make it work with these ports:

-p 28968:28968/tcp \
-p 28968:28968/udp \
-p 14003:14002 \
-p 6000:5999 \

In the router I added a new rule for port 28968, TCP and UDP.
In config.yaml I changed 28967 to 28968:

# public address to listen on
server.address: :28968

Now the dashboards of both nodes look fine. The only suspicious thing I see is in the CLI docker ps -a output: port 28967 is still listed for storagenode2, and I don’t know why or whether it will somehow affect my nodes… Can you enlighten me? Thanks!

This port (28967) is exposed in the image by default: storj/Dockerfile at 0051298eecda49e279f4b0988dd65c49dc6e143f · storj/storj · GitHub
But since you did not add a port mapping for 28967 to actually map it on your host, it will not affect other containers (unless you use --network host).
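
You can see the difference in docker ps: a published port shows a host binding with an arrow, while a port that is only exposed is listed on its own. Roughly (output is illustrative):

docker ps --format "table {{.Names}}\t{{.Ports}}"
# storagenode2   28967/tcp, 0.0.0.0:28968->28968/tcp, 0.0.0.0:28968->28968/udp, ...
# the bare "28967/tcp" entry is only exposed inside Docker; nothing on the host listens on it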