I have just succeeded in installing storagenode v1.19.4 on my QNAP TS-431 NAS. For the benefit of all those in a similar situation, i.e. with a low-end NAS that can’t run Docker, the setup is documented on this page.
First up: if your NAS can run Docker (Container Station) then don’t read any further, as that setup is a lot simpler and easier to maintain.
Prerequisites:
- QNAP NAS with a supported CPU.
- Some experience with the Linux command line.
- SSH access to your NAS.
- The NAS must have internet access to download the required packages.
Follow the Storj documentation for the generation of an identity: generate it on your PC and then transfer it to the NAS. When you get to the step “CLI install”, stop and come back here.
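The transfer can be done with scp. A minimal sketch, assuming a Linux PC with the identity in its default location, and the sn1 dir chosen below as the destination (user, IP and paths are placeholders to adjust):
# On the PC, after creating and authorizing the identity per the Storj docs.
# Source path is the default identity location on Linux; adjust to your setup.
scp ~/.local/share/storj/identity/storagenode/* admin@your_nas_ip:/share/CACHEDEV1_DATA/sn/sn1/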
Go to https://qnapclub.eu/en/howto/1 to install the QNAPClub repository.
In Appcenter open the new repo and install Entware-std.
(Note: at the time of this writing there is a problem with the store, preventing download and installation via Appcenter. If this is still the case you need to download the package to your PC (click the app icon, then download) and install it manually from there: in Appcenter, in the top right corner, click the + between the refresh and gear icons.)
Open an SSH session.
Update Entware: opkg update
Optional, but recommended:
- Install the nano editor (unless you are comfortable using vi):
opkg install nano
- Install logrotate (a sample config for later is sketched below):
opkg install logrotate
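Logrotate will come in handy for the node’s log file later on. A minimal sketch of a config, assuming the log name and location used further down in this guide; Entware’s logrotate typically reads configs from /opt/etc/logrotate.d/ and needs a cron job to run it periodically:
# /opt/etc/logrotate.d/storj -- rotate the node log (path is an assumption,
# match it to your config). copytruncate is used because storagenode keeps
# the log file open while running.
/share/CACHEDEV1_DATA/sn/sn1/storj.log {
    size 50M
    rotate 5
    compress
    copytruncate
    missingok
}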
Decide on a directory structure. To be able to run more than one node on the same NAS I have settled on the following:
/share/CACHEDEV1_DATA/sn/       <- installation base dir
/share/CACHEDEV1_DATA/sn/bin/   <- executables
/share/CACHEDEV1_DATA/sn/sn1/   <- config, id & logs for node1
/share/CACHEDEV1_DATA/storj/    <- data for node1
Note: /share/CACHEDEV1_DATA/ happens to be the path of my NAS’ system volume, i.e. where you install apps and where you find Public, Web, etc. This may be different on yours, so find out first!
The `df` command will help you:
[~] # df -h
Filesystem Size Used Available Use% Mounted on
none 60.0M 47.3M 12.7M 79% /
devtmpfs 233.0M 12.0k 233.0M 0% /dev
tmpfs 64.0M 408.0k 63.6M 1% /tmp
tmpfs 239.6M 8.0k 239.6M 0% /dev/shm
tmpfs 16.0M 1.1M 14.9M 7% /share
/dev/md9 509.5M 283.6M 225.8M 56% /mnt/HDA_ROOT
cgroup_root 239.6M 0 239.6M 0% /sys/fs/cgroup
/dev/mapper/cachedev1 3.5T 2.9T 666.1G 82% /share/CACHEDEV1_DATA
/dev/mapper/cachedev4 3.5T 3.5T 0 100% /share/CACHEDEV4_DATA
/dev/mapper/cachedev5 3.6T 3.6T 336.0k 100% /share/CACHEDEV5_DATA
/dev/mapper/cachedev7 3.6T 3.5T 43.0G 99% /share/CACHEDEV7_DATA
/dev/sde1 1.8T 1.6T 186.5G 90% /share/external/DEV3303_1
/dev/md13 433.0M 329.7M 103.2M 76% /mnt/ext
Now go ahead and create the directories you have decided on, then change into the bin dir.
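With the layout above, that is:
mkdir -p /share/CACHEDEV1_DATA/sn/bin /share/CACHEDEV1_DATA/sn/sn1 /share/CACHEDEV1_DATA/storj
cd /share/CACHEDEV1_DATA/sn/bin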
Go to the download page and get the URL for the latest version for your CPU architecture.
What type is my CPU?
[~] # uname -m
armv7l
This indicates a 32-bit ARM CPU. I think we can safely assume that if you have any other type you will be able to run Docker instead of this…
The file you want is most likely storagenode_linux_arm.zip. Copy the URL (right-click …).
Switch back to your SSH session and (in your bin dir) download the file with
wget pasted_url_to_storagenode_linux_arm.zip
Once the download is finished you unpack the archive:
unzip storagenode_linux_arm.zip
You can now delete the zip file.
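Putting these steps together for v1.19.4 (the GitHub release URL is an assumption on my part; verify it against the download page):
# Download, unpack and clean up (check the URL first!):
wget https://github.com/storj/storj/releases/download/v1.19.4/storagenode_linux_arm.zip
unzip storagenode_linux_arm.zip
rm storagenode_linux_arm.zip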
For multi-node setup (optional)
Create a link to the executable for each node you want to run:
ln -s storagenode storagenode1
Repeat for storagenode2, etc.
The contents of your bin dir will then look something like this:
[/share/CACHEDEV1_DATA/sn/bin] # ll
drwxr-xr-x 2 admin administ 4.0k Jan 12 20:44 ./
drwxr-xr-x 4 storj storj 4.0k Jan 11 16:55 ../
-rwxr-xr-x 1 admin administ 24.1M Jan 5 04:13 storagenode*
lrwxrwxrwx 1 admin administ 11 Jan 12 20:44 storagenode1 -> storagenode*
Next, run setup to create the necessary files and dirs. Adjust the dirs in the example:
./storagenode setup --config-dir </path/to/config/dir> --identity-dir </path/to/identity/dir>
Don’t type the brackets <> and use absolute paths, starting with /.
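For node 1 in the layout above, with the identity files copied into the sn1 dir, that becomes:
./storagenode setup --config-dir /share/CACHEDEV1_DATA/sn/sn1 --identity-dir /share/CACHEDEV1_DATA/sn/sn1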
As per the official documentation, edit the config.yaml.
Make sure to change this line
console.address: 127.0.0.1:14002
into this:
console.address: :14002
or you will not be able to access the dashboard.
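If you prefer to make the change from the shell, a sed one-liner works, assuming the line reads exactly as shown above:
sed -i 's|^console.address: 127.0.0.1:14002|console.address: :14002|' /share/CACHEDEV1_DATA/sn/sn1/config.yaml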
Create a start/stop script, one for each node:
nano storj1.sh
I put mine in the sn dir, but you can put it wherever you want.
Copy & paste this into your file:
#!/bin/sh
#
# This shell script takes care of starting or stopping the storj daemon.
#
# See how we were called.
case "$1" in
    start)
        # Start daemon.
        /sbin/daemon_mgr storagenode1 start "/share/CACHEDEV1_DATA/sn/bin/storagenode1 run --config-dir /share/CACHEDEV1_DATA/sn/sn1 --identity-dir /share/CACHEDEV1_DATA/sn/sn1 &" >/dev/null 2>&1
        ;;
    stop)
        # Stop daemon.
        /sbin/daemon_mgr storagenode1 stop "/share/CACHEDEV1_DATA/sn/bin/storagenode1 &" >/dev/null 2>&1
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac
Adjust the paths in the start and stop lines to suit your setup. The 4 instances of storagenode1 have to match whichever node you are running, so if you didn’t create the link(s) for a multi-node setup this should read storagenode. Save and exit.
Make sure the script has execute permissions!
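For example:
chmod +x /share/CACHEDEV1_DATA/sn/storj1.sh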
Link the script(s) into the Entware init system dir (note the argument order: first the existing script, then the link to create):
ln -s /share/CACHEDEV1_DATA/sn/storj1.sh /opt/etc/init.d/S90storj1
(S90storj2 pointing at storj2.sh for the 2nd node, etc.)
Check that the link command was successful:
[/share/CACHEDEV1_DATA/sn] # ll /opt/etc/init.d/
drwxr-xr-x 2 admin administ 4.0k Jan 11 17:09 ./
drwxr-xr-x 7 admin administ 4.0k Jan 9 16:57 ../
-rwxr-xr-x 2 admin administ 594 Jan 11 16:53 S90storj1* -> /share/CACHEDEV1_DATA/sn/storj1.sh
-rw-r--r-- 1 admin administ 2.8k Oct 27 00:59 rc.func
-rwxr-xr-x 1 admin administ 966 Oct 27 00:59 rc.unslung*
How it works:
The script you created is used to start and stop or restart the storagenode daemon, e.g.:
/opt/etc/init.d/S90storj1 start
When the NAS boots it starts up all pre-configured daemons in /etc/rcS.d/ in numerical order. One of these is Entware, which in turn starts everything in its init.d dir with file names starting with an S followed by a number that sets the order (hence the S90 prefix above).
Why can’t we just put our start scripts into /etc/rcS.d/? Because almost the entire root file system is re-created at every boot, obliterating any changes made to it. You could mount the hidden file system from which it is copied and make changes there, but the next time QTS is updated that, too, is wiped.
The script uses QNAP’s daemon_mgr to monitor the storagenode process. If it stops for any reason (other than through daemon_mgr) it will be restarted. This means that if it fails for a good reason, daemon_mgr will restart it continuously…
For the first run we don’t use the script, so we can see any error messages. Open a 2nd SSH window and change into the dir where the log file will be, sn1 in my case. Type these commands:
touch storj.log   <- must match the log file name you set in the config
tail -f storj.log   <- ditto
Nothing will be displayed here for now. Switch back to your other SSH window and start your node:
./storagenode1 run --config-dir </path/to/config/dir> --identity-dir </path/to/identity/dir>
If you get error messages here, you need to fix them before continuing. If your node starts, nothing will be displayed and you don’t get your prompt back. Check the other window for activity in the log file.
If all is well go back to your command window and stop the node by pressing Ctrl-C. You will get your prompt back.
Start the node normally:
/opt/etc/init.d/S90storj1 start
Again, there will be no output here (even if there are errors) and the prompt returns immediately, but you can see the node starting up in the log window. (Cancel the log watch with Ctrl-C.)
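To double-check that the node is up, look for the process and open the web dashboard from your PC (port 14002, as configured earlier; your_nas_ip is a placeholder):
ps | grep storagenode
# then browse to http://your_nas_ip:14002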
To do:
- Log rotation (done)
- Automatic updates (I may need help with this)
Comments, questions, corrections, etc. are welcome.
My sincere thanks go to users Mousetick and OneCD in the official QNAP forum. Without their help this would not have been possible for me to do.
Link to the topic: https://forum.qnap.com/viewtopic.php?f=160&t=158732 (registration required)
Kind regards,
Peter.