Synology NAS DSM updates

Hi guys,
For SYN users, is it safe and straightforward to update DSM on Synology running SNO?

As long as Docker works you will be ok. I’m on this version.

Yeah, but the DSM update itself, won’t it damage the Docker containers or anything?
Is it just update and go?

Yes. I’ve had 3 nodes update with no problem.


I would stop the nodes as a precaution with the 300s time out. But if you used that setting in the run command you likely don’t have to. I have the same update still waiting to be installed. Haven’t gotten around to it, but I installed several updates without issues before.
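For reference, a clean stop with the longer timeout can be done manually as well (container name storagenode assumed):

```shell
# Give the node up to 300 seconds to shut down cleanly before Docker kills it
sudo docker stop -t 300 storagenode
```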

This is my run command:
sudo docker run -d --restart always -p 28969:28967 \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
    -e EMAIL="xxxxxxxxxxxx@gmail.com" \
    -e ADDRESS="xxxxxxxxxxxxxxxxxxxx:28969" \
    -e BANDWIDTH="50TB" \
    -e STORAGE="7.4TB" \
    --mount type=bind,source=/volume1/Storj/Dados,destination=/app/identity \
    --mount type=bind,source=/volume1/Storj/Storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:beta

You’re missing --stop-timeout 300. Please refer to the documentation to update your command.
-e BANDWIDTH is no longer used.

Full example here.
https://documentation.storj.io/setup/cli/storage-node#running-the-storage-node
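Along those lines, the updated command might look roughly like this (wallet, email, address, and paths are placeholders based on the post above; the linked documentation is authoritative):

```shell
sudo docker run -d --restart always --stop-timeout 300 \
    -p 28969:28967 \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0x..." \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.address.example:28969" \
    -e STORAGE="7.4TB" \
    --mount type=bind,source=/volume1/Storj/Dados,destination=/app/identity \
    --mount type=bind,source=/volume1/Storj/Storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:beta
```

Note the added --stop-timeout 300 and the dropped BANDWIDTH variable.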


Thank you. I'll edit it accordingly.

Update done. No issues whatsoever.
I stopped the original container, removed it, started a new container with the option above and without the BANDWIDTH parameter, and then did the DSM update.
Up and running again; about 10 minutes unavailable to update and restart the Syn :wink:


I just had a problem updating my DS918+ to DSM 6.2.3-25426, and my Storj node stopped working.

I'm almost certain it was my fault, but it's still worth sharing.

Error message:

Start container storagenode failed: {"message":"invalid mount config for type \"bind\": bind source path does not exist: /root/.local/share/storj/identity/storagenode"}

I just forgot about a common behavior of Synology upgrades: the contents of the /root folder are deleted on a DSM upgrade.

See here: https://community.synology.com/enu/forum/17/post/76360

I followed the default installation process, so many others could run into the same problem.

I went straight to my identity backup and was able to recover safely.

You might want to move the identity elsewhere and adjust the docker command accordingly.

As a best practice, I always keep separate folders on my volume: one for the identity and another for the data.

Yes, like others have said, please use a shared folder that is also visible from the Synology interface. Everything outside of those is considered system files and could be removed on update.
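If your identity currently lives under /root, one way to move it into a shared folder might look like this (paths assume the default identity location and a hypothetical shared folder named storj):

```shell
# Copy the identity out of /root into a shared folder DSM won't wipe on update
sudo mkdir -p /volume1/storj/identity
sudo cp -r /root/.local/share/storj/identity/storagenode /volume1/storj/identity/
# Then point the identity bind mount at the new location, e.g.:
#   --mount type=bind,source=/volume1/storj/identity/storagenode,destination=/app/identity
```

Stop and remove the old container first, then recreate it with the adjusted mount.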


The original (Linux) install procedure could include a small note about this … or is it too specific to Synology users? @Alexey maybe can help.

This is specific to Synology users. They have a tendency to just remove anything they don't recognize in folders they don't think end users should be using. Normal Linux installs won't run into this problem. But the instructions already say people should create the identity on another, faster machine, so almost nobody would be generating it on the Synology to begin with. And when you then copy it to the NAS, you would automatically choose one of the shared folders. So I think this issue is kind of fringe and not many users would run into it.

Also, as the instructions say: ALWAYS back up your identity. Luckily you followed that instruction, and it saved your node!


It wasn't "luckily"; I work in IT :wink:

Updated, no issues.


You should point that to something like a /volume1/storj folder, never root.

--mount type=bind,source="/volume1/storj/identity-dir",destination=/app/identity \
--mount type=bind,source="/volume1/storj/storage-dir",destination=/app/config \

Why map port 28969 to 28967? It should just be:
-p 28967:28967 \


You can use any external port as long as you forward it back to 28967 inside the container. This is especially useful if you want to run multiple nodes on different disks. I currently have 3 nodes running that way.
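As a sketch, three nodes on one host could each map a different external port back to 28967 inside their container (names and ports are illustrative, and the omitted options stand in for each node's own identity, storage, and dashboard settings):

```shell
sudo docker run -d ... -p 28967:28967 --name storagenode1 storjlabs/storagenode:beta
sudo docker run -d ... -p 28968:28967 --name storagenode2 storjlabs/storagenode:beta
sudo docker run -d ... -p 28969:28967 --name storagenode3 storjlabs/storagenode:beta
```

Each node's ADDRESS variable would then advertise the matching external port (e.g. host:28968 for the second node).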
