My docker run commands for multinodes on Synology NAS

These are all the useful parameters that one storagenode can use, and the modifications that I use for the second node. Feel free to use them or not; it's up to you. I put them here for easy reference.
These work for Synology NAS and I run them in sudo su mode.
There is also the --user $(id -u):$(id -g) parameter, but I started without it. It seems it doesn't work on Synology NAS.

OLD CONFIG:

See UPDATED section for the recent version; also read the entire thread to get the picture.

NODE 1, MACHINE 1:
Run only once at install of first node:

echo "net.core.rmem_max=2500000" >> /etc/sysctl.conf
sysctl -w net.core.rmem_max=2500000
docker pull storjlabs/storagenode:latest
docker run --rm -e SETUP="true" \
	--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume1/Storj/",destination=/app/config \
	--name storagenode storjlabs/storagenode:latest

Run at installation, or after stopping and removing the node following a parameter change:

docker run -d --restart unless-stopped --stop-timeout 300 \
	-p 28967:28967/tcp \
	-p 28967:28967/udp \
	-p 14002:14002 \
	-p 5999:5999 \
	-e WALLET="...WALLET..." \
	-e EMAIL="...EMAIL..." \
	-e ADDRESS="...WAN_IP...:28967" \
	-e STORAGE="14TB" \
	--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume1/Storj/",destination=/app/config \
	--log-opt max-size=10m \
	--log-opt max-file=5 \
	--name storagenode storjlabs/storagenode:latest \
	--server.address=":28967" \
	--console.address=":14002" \
	--debug.addr=":5999" \
	--log.level=error \
	--filestore.write-buffer-size 4MiB \
	--pieces.write-prealloc-size 4MiB \
	--storage2.piece-scan-on-startup=true \
	--operator.wallet-features=zksync

NODE 2, MACHINE 1:
Run only once at install of the second node:

docker run --rm -e SETUP="true" \
	--mount type=bind,source="/volume2/Storj2/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume2/Storj2/",destination=/app/config \
	--name storagenode2 storjlabs/storagenode:latest

Run at installation, or after stopping and removing the node following a parameter change:

docker run -d --restart unless-stopped --stop-timeout 300 \
	-p 28968:28968/tcp \
	-p 28968:28968/udp \
	-p 14003:14003 \
	-p 6000:6000 \
	-e WALLET="...WALLET..." \
	-e EMAIL="...EMAIL..." \
	-e ADDRESS="...WAN_IP...:28968" \
	-e STORAGE="14TB" \
	--mount type=bind,source="/volume2/Storj2/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume2/Storj2/",destination=/app/config \
	--log-opt max-size=10m \
	--log-opt max-file=5 \
	--name storagenode2 storjlabs/storagenode:latest \
	--server.address=":28968" \
	--console.address=":14003" \
	--debug.addr=":6000" \
	--log.level=error \
	--filestore.write-buffer-size 4MiB \
	--pieces.write-prealloc-size 4MiB \
	--storage2.piece-scan-on-startup=true \
	--operator.wallet-features=zksync

Installing the WATCHTOWER - in this form, it will keep logs small and update all the containers, not only Storj’s:

docker pull storjlabs/watchtower
docker run -d --restart=always --log-opt max-size=10m --log-opt max-file=5 --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower --stop-timeout 300s --notifications-level error

Useful commands:

docker ps -a

docker stop -t 300 storagenode

docker rm storagenode

docker start storagenode


docker stop -t 300 storagenode2

docker rm storagenode2

docker start storagenode2


docker stop -t 300 watchtower

docker rm watchtower

docker start watchtower

Other useful commands:

	#check logs:
docker logs storagenode

docker logs watchtower

	#see the last 20 log entries:
docker logs --tail 20 storagenode

	#CLI dashboard:
docker exec -it storagenode /app/dashboard.sh

	#see all commands:
docker exec -it storagenode /app/storagenode help

	#execute commands:
docker exec -it storagenode /app/storagenode <<command>>

====================

UPDATED:

Docker run commands for 2 nodes, 2 drives (Exos), on the same Synology DS220+, DSM 7.x, 18GB RAM, using network host mode, and databases moved to USB flash drive:

Startup scripts in task scheduler (triggered, root, boot-up):

sysctl -w net.core.rmem_max=2500000
sysctl -w net.core.wmem_max=2500000
sysctl -w net.ipv4.tcp_fastopen=3

If databases are moved to USB:

# This startup script works on Synology (triggered, root, boot-up):
mount -o remount,noatime "/volumeUSB1/usbshare"
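To confirm the remount took effect, you can read the kernel's mount table; the path is the one used above, and the echo fallback is just so the line is safe to paste on a machine without that share:

```shell
# Look for the noatime flag on the USB share in /proc/mounts:
grep "usbshare" /proc/mounts || echo "usbshare not mounted on this machine"
```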

Pre-Setup - only one time:

sudo su

echo "net.core.rmem_max=2500000" >> /etc/sysctl.conf
sysctl -w net.core.rmem_max=2500000
echo "net.core.wmem_max=2500000" >> /etc/sysctl.conf
sysctl -w net.core.wmem_max=2500000
echo "net.ipv4.tcp_fastopen=3" >> /etc/sysctl.conf
sysctl -w net.ipv4.tcp_fastopen=3
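To verify the settings are active, you can read them back from /proc (these paths are standard on Linux):

```shell
# Each value should match what was set above:
cat /proc/sys/net/core/rmem_max      # 2500000 after the setup above
cat /proc/sys/net/core/wmem_max      # 2500000 after the setup above
cat /proc/sys/net/ipv4/tcp_fastopen  # 3 after the setup above
```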

Setup - run only once, before you start the node, to set up the directories, etc.:

sudo su

docker pull storjlabs/storagenode:latest

docker run --rm -e SETUP="true" \
	--mount type=bind,source="/volume1/Storj1/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume1/Storj1/",destination=/app/config \
	--name storagenode1 storjlabs/storagenode:latest

docker run --rm -e SETUP="true" \
	--mount type=bind,source="/volume2/Storj2/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume2/Storj2/",destination=/app/config \
	--name storagenode2 storjlabs/storagenode:latest

Node 1:

sudo su

docker run -d --restart unless-stopped \
	--stop-timeout 300 \
	--network host \
	-e WALLET="xxx" \
	-e EMAIL="xxx" \
	-e ADDRESS="xxx:28961" \
	-e STORAGE="xxTB" \
	--mount type=bind,source="/volume1/Storj1/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume1/Storj1/",destination=/app/config \
	--mount type=bind,source="/volumeUSB1/usbshare/storjdbs1/",destination=/app/dbs \
	--log-driver json-file \
	--log-opt max-size=10m \
	--log-opt max-file=3 \
	--name storagenode1 storjlabs/storagenode:latest \
	--server.address=":28961" \
	--console.address=":14011" \
	--server.private-address="127.0.0.1:14021" \
	--debug.addr=":6001" \
	--storage2.database-dir=dbs \
	--log.level=info \
	--log.custom-level=piecestore=FATAL,collector=WARN \
	--pieces.enable-lazy-filewalker=false \
	--storage2.piece-scan-on-startup=false

Node 2:

sudo su

docker run -d --restart unless-stopped \
	--stop-timeout 300 \
	--network host \
	-e WALLET="xxx" \
	-e EMAIL="xxx" \
	-e ADDRESS="xxx:28962" \
	-e STORAGE="xxTB" \
	--mount type=bind,source="/volume2/Storj2/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume2/Storj2/",destination=/app/config \
	--mount type=bind,source="/volumeUSB1/usbshare/storjdbs2/",destination=/app/dbs \
	--log-driver json-file \
	--log-opt max-size=10m \
	--log-opt max-file=3 \
	--name storagenode2 storjlabs/storagenode:latest \
	--server.address=":28962" \
	--console.address=":14012" \
	--server.private-address="127.0.0.1:14022" \
	--debug.addr=":6002" \
	--storage2.database-dir=dbs \
	--log.level=info \
	--log.custom-level=piecestore=FATAL,collector=WARN \
	--pieces.enable-lazy-filewalker=false \
	--storage2.piece-scan-on-startup=false

Log files:

sudo su
docker logs storagenode1 2>&1
docker logs storagenode2 2>&1
docker logs watchtower 2>&1

docker logs storagenode1 2>&1 | grep "retain"
docker logs storagenode1 2>&1 | grep "pieces:trash"

# SL satellite:
docker logs storagenode1 2>&1 | grep "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"
# AP1 satellite:
docker logs storagenode1 2>&1 | grep "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"
# US1 satellite:
docker logs storagenode1 2>&1 | grep "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"
# EU1 satellite:
docker logs storagenode1 2>&1 | grep "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"
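The four greps above can be combined into one loop; the satellite IDs are the same ones listed above, and storagenode1 is the container name used in this post:

```shell
# Print a match count per satellite ID:
for sat in 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE \
           121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 \
           12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S \
           12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs; do
  echo "$sat: $(docker logs storagenode1 2>&1 | grep -c "$sat")"
done
```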

Path to logs:

# Path to log files:
sudo su
docker ps -a	# get the instance ID, the beginning of the container ID
ls -l		# list directory and file details

# Synology:
sudo su
cd /volume1/@docker/containers/
/<containerID>/<containerID>-json.log

# Ubuntu:
sudo su
cd /var/lib/docker/containers/
/<containerID>/<containerID>-json.log

Help manuals:

sudo su
docker exec -it storagenode1 ./storagenode setup --help
docker logs --help

Graceful Exit:

sudo su

# NODE 1:
docker exec -it storagenode1 /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14021
docker exec -it storagenode1 /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14021

# NODE 2:
docker exec -it storagenode2 /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14022
docker exec -it storagenode2 /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14022

Forget satellites:

sudo su

# Forget untrusted or exited satellites:
docker exec -it storagenode1 /app/storagenode forget-satellite \
--force \
12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB \
12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo \
--config-dir /app/config \
--identity-dir /app/identity \
--server.private-address 127.0.0.1:14021

docker exec -it storagenode2 /app/storagenode forget-satellite \
--force \
12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB \
12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo \
--config-dir /app/config \
--identity-dir /app/identity \
--server.private-address 127.0.0.1:14022

# check status:
docker exec -it storagenode1 /app/storagenode forget-satellite-status \
--config-dir /app/config \
--identity-dir /app/identity \
--server.private-address 127.0.0.1:14021

docker exec -it storagenode2 /app/storagenode forget-satellite-status \
--config-dir /app/config \
--identity-dir /app/identity \
--server.private-address 127.0.0.1:14022

# after Success status, wait 2 minutes and restart the node:
docker stop -t 300 storagenode1
docker restart -t 300 storagenode1

docker stop -t 300 storagenode2
docker restart -t 300 storagenode2

If you change parameters in config.yaml, you only need to restart the node.
If you change parameters in run command, you have to recreate the container:

sudo su

docker stop -t 300 storagenode1
docker rm storagenode1
docker run...

docker stop -t 300 storagenode2
docker rm storagenode2
docker run...
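The stop/remove/run cycle above can be wrapped in a small helper; recreate_node is a hypothetical function name, and you still have to paste your full docker run line into it:

```shell
# Hypothetical helper: stop and remove one node so it can be re-created.
recreate_node() {
  NODE="$1"
  [ -n "$NODE" ] || { echo "usage: recreate_node <container-name>" >&2; return 1; }
  docker stop -t 300 "$NODE" && docker rm "$NODE"
  # Paste your full run command here, ending with --name "$NODE":
  # docker run -d --restart unless-stopped ... --name "$NODE" storjlabs/storagenode:latest ...
}
```

Then call it as `recreate_node storagenode1` (and likewise for storagenode2).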

At the first step, add these lines too:

echo "net.ipv4.tcp_fastopen=3" >> /etc/sysctl.conf
sysctl -w net.ipv4.tcp_fastopen=3

The --sysctl net.ipv4.tcp_fastopen=3 flag in docker run doesn't seem to work on Synology, for now; there is no tcp_fastopen file in the container. Still, run those first lines for the kernel setup, just to be sure you did everything you can. You will get a "state unknown" for TCP fastopen in the logs (info mode).
You can check tcp fastopen state with these:

cat /proc/sys/net/ipv4/tcp_fastopen
docker exec -it storagenode cat /proc/sys/net/ipv4/tcp_fastopen

Also, in Synology DSM Task Scheduler, you must set these up, to run as root at every boot:

sysctl -w net.core.rmem_max=2500000
sysctl -w net.ipv4.tcp_fastopen=3

After all these mods: stop the node, remove it, restart the Synology, then docker run the node again.

If you plan to run storagenodes in Docker on Synology DiskStations, note that only the plus ("+") models support Docker. These are the basic settings that I can recommend for Synology machines running only storagenodes:

Storage settings:

RAID type: Basic
Filesystem: ext4
Record file access time: Never
Low capacity notification: 5%
Data Scrubbing schedule: Enable only for RAID with 2 or more disks
RAID Resync speed limits: lower impact on system performance
Fast Repair: Enable
Enable write cache: Yes, if on a UPS.
Bad sector warning: Enable.

DSM settings:

HDD hibernation OFF.
Memory compression OFF.
Firewall OFF.
DDOS protection OFF.
Meltdown and Spectre protection OFF.
Activate SSH and maybe change the default port.
Sync the time with your computer and activate autosync with a server.
Install Docker.
You can let ON autoupdate for DSM and apps, but deactivate it for Docker.
Activate the email warnings.
Schedule SMART test once a month.

Hardware choices:

Install as much non-OEM RAM as you can; see the link below.
If the NAS supports it, you can install an NVMe drive for cache, databases, logs, etc. This is optional, but it can improve the performance of the node.
https://forum.storj.io/t/synology-memory-upgrade-guide/20743

More useful commands after sudo su:

docker exec storagenode ./storagenode run --help
docker exec storagenode ./storagenode setup --help
netstat -tulpn
dmidecode
dockerd --help

Graceful Exit

  1. Machine 1, Node 1:
    SSH to machine, run sudo su, then:
docker exec -it storagenode /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity
y

and enter the satellite names, separated by spaces, then press ENTER.
To check the status, run:

docker exec -it storagenode /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity
  2. Machine 1, Node 2:
    SSH to machine, run sudo su, then:
docker exec -it storagenode2 /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity
y

and enter the satellite names, separated by spaces, then press ENTER.
To check the status, run:

docker exec -it storagenode2 /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity

Looks like these options are ignored.
I copied them from your post and used them for my nodes. Today I checked the log sizes with this script:

NAMES=$(docker ps --format '{{.Names}}');
for i in $NAMES; do
  echo ""; echo $i;
  LOG=$(docker inspect -f '{{.LogPath}}' "$i" 2> /dev/null); 
  ls -lh $LOG*; 
done

And here is output:

storj2
-rw-r--r-- 1 root root 49M Aug  1 13:48 /volume3/@docker/containers/943c3f00c7b5c6b13b9a1e3c69abd7012ca8f20e357d79540467b7371dff2f23/log.db

storj4
-rw-r--r-- 1 root root 2.9M Aug  1 13:48 /volume3/@docker/containers/e7518275df2228a4c91e037bd2d542e09e683a5e81f7a5985da016e8a65ca177/log.db
-rw-r--r-- 1 root root 6.8M Aug  1 13:07 /volume3/@docker/containers/e7518275df2228a4c91e037bd2d542e09e683a5e81f7a5985da016e8a65ca177/log.db.1.xz
-rw-r--r-- 1 root root  13K Aug  1 13:48 /volume3/@docker/containers/e7518275df2228a4c91e037bd2d542e09e683a5e81f7a5985da016e8a65ca177/log.db-journal

As you can see, some logs are 50MB, so max-size is ignored…

I think that's because Synology uses the "db" log driver by default, and I can't find any documentation for it. Perhaps I should switch to the json-file logging driver.

Here is Synology Docker daemon config:

cat  /var/packages/ContainerManager/etc/dockerd.json | jq
{
  "data-root": "/var/packages/ContainerManager/var/docker",
  "log-driver": "db",
  "registry-mirrors": [],
  "storage-driver": "btrfs"
}
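To see which driver a given container actually ended up with, docker inspect can read it back; storj2 is the container name from the output above, and the `command -v` guard is only so the line is safe to paste on a machine without Docker:

```shell
# Print the log driver of an existing container, e.g. "db" or "json-file":
command -v docker >/dev/null \
  && docker inspect -f '{{.HostConfig.LogConfig.Type}}' storj2 \
  || echo "docker not available"
```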

I'm very new to Linux systems. I suspected that too, that those parameters are ignored, but I didn't know how to check. Maybe if they are put before the container name it would make a difference?

I found the right settings for logging on Synology:

--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=5 \

These are the new commands with TCP fastopen enabled and the log level set to minimum (fatal).
The other log options, like max size and number of files, don't work on Synology, as SlavikCA says.

MACHINE 1, NODE 1:

docker run -d --restart unless-stopped --stop-timeout 300 \
	--network host \
	-e WALLET="...WALLET..." \
	-e EMAIL="...EMAIL..." \
	-e ADDRESS="...WAN_IP...:28981" \
	-e STORAGE="xxTB" \
	--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume1/Storj/",destination=/app/config \
	--name storagenode storjlabs/storagenode:latest \
	--server.address=":28981" \
	--console.address=":14011" \
	--server.private-address="127.0.0.1:14012" \
	--log.level=fatal \
	--filestore.write-buffer-size 4MiB \
	--pieces.write-prealloc-size 4MiB \
	--storage2.piece-scan-on-startup=true

MACHINE 1, NODE 2:

docker run -d --restart unless-stopped --stop-timeout 300 \
	--network host \
	-e WALLET="...WALLET..." \
	-e EMAIL="...EMAIL..." \
	-e ADDRESS="...WAN_IP...:28982" \
	-e STORAGE="xxTB" \
	--mount type=bind,source="/volume2/Storj2/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume2/Storj2/",destination=/app/config \
	--name storagenode2 storjlabs/storagenode:latest \
	--server.address=":28982" \
	--console.address=":14013" \
	--server.private-address="127.0.0.1:14014" \
	--log.level=fatal \
	--filestore.write-buffer-size 4MiB \
	--pieces.write-prealloc-size 4MiB \
	--storage2.piece-scan-on-startup=true

In DSM, you should set the sysctl script above as a task to run at boot.

For the Graceful Exit and CLI dashboard, you must specify the new port for server private address:

NODE 1:

docker exec -it storagenode /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14012

docker exec -it storagenode /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14012

docker exec -it storagenode /app/dashboard.sh --server.private-address 127.0.0.1:14012

NODE 2:

docker exec -it storagenode2 /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14014

docker exec -it storagenode2 /app/storagenode exit-status --config-dir /app/config --identity-dir /app/identity --server.private-address 127.0.0.1:14014

docker exec -it storagenode2 /app/dashboard.sh --server.private-address 127.0.0.1:14014


Would you like to make the first post a wiki and update it?

@Alexey
It would be best! Thanks!


Done. Now you can edit it multiple times.
