A guide for adding a second node on a new HDD on the same Linux host

I read through many of the posts that ask about issues with second nodes, but I wanted to verify each step and ensure it was clear to me and others that want step by step guidance. Please correct any mistakes or vagueness I may have.

  1. Choose an additional Internet port (28968 in this example) and add it to the port forwarding rules on your firewall/router
  2. Use existing wallet address
  3. Request new authentication token (using same email address)
  4. Install new HDD on system
  5. Create a new fstab entry (see "How do I setup static mount via /etc/fstab for Linux?" in the Storj Docs)
  6. Create new identity - [identity create storagenode2] (where ‘2’ represents a new unique node name)
  7. Authorize new identity - [ identity authorize storagenode2 auth_token ]
  8. Verify the new identity:
    grep -c BEGIN ~/.local/share/storj/identity/storagenode2/ca.cert [expect response '2']
    grep -c BEGIN ~/.local/share/storj/identity/storagenode2/identity.cert [expect response '3']
  9. No need to install Docker - it already exists
  10. Run new storage node (include new port ranges, paths and parameters - see example below)
  11. Stop the watchtower: docker stop watchtower
  12. Remove the watchtower: docker rm watchtower
  13. Run watchtower on all nodes [docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode storagenode2 watchtower --stop-timeout 300s]
  14. Add new uptime port monitor for node 2 at https://uptimerobot.com/
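The grep checks in step 8 above can be sandboxed to see what the counts actually mean; here is a minimal sketch using fake placeholder certs (not real Storj identity files) to show that `grep -c BEGIN` simply counts the PEM blocks in each file:

```sh
# Sketch of the step-8 check: grep -c BEGIN counts the PEM certificate
# blocks in each file. The certs below are fakes for illustration only.
ID_DIR=$(mktemp -d)

# A signed ca.cert contains 2 PEM certificate blocks
for i in 1 2; do
  printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n'
done > "$ID_DIR/ca.cert"

# A signed identity.cert contains 3 PEM certificate blocks
for i in 1 2 3; do
  printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n'
done > "$ID_DIR/identity.cert"

CA_COUNT=$(grep -c BEGIN "$ID_DIR/ca.cert")        # expect 2
ID_COUNT=$(grep -c BEGIN "$ID_DIR/identity.cert")  # expect 3
echo "ca.cert: $CA_COUNT  identity.cert: $ID_COUNT"
rm -rf "$ID_DIR"
```

If either count differs on a real identity, the identity was not authorized correctly and should be fixed before starting the node.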

To start the new node:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28968:28967 \
-p 14003:14002 \
-e EMAIL="user@example.com" \
-e ADDRESS="domain.ddns.net:28968" \
-e STORAGE="13TB" \
--mount type=bind,source="identity-dir",destination=/app/identity \
--mount type=bind,source="storage-dir",destination=/app/config \
--name storagenode2 storjlabs/storagenode:beta
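Since only the ports, paths, and container name change between nodes, it can help to parameterize the run command per node. A hedged sketch; the directory paths, email, and domain below are placeholders, not values from any official doc:

```sh
# Sketch: build the docker run command for node N so the per-node
# values are edited in one place. All values here are placeholders.
NODE=2
EXT_PORT=28968    # forwarded Internet port for this node
DASH_PORT=14003   # local dashboard port for this node
IDENTITY_DIR="/home/user/.local/share/storj/identity/storagenode${NODE}"
STORAGE_DIR="/mnt/sn${NODE}"

cmd="sudo docker run -d --restart unless-stopped --stop-timeout 300 \
 -p ${EXT_PORT}:28967 -p ${DASH_PORT}:14002 \
 -e EMAIL=\"user@example.com\" \
 -e ADDRESS=\"domain.ddns.net:${EXT_PORT}\" \
 -e STORAGE=\"13TB\" \
 --mount type=bind,source=\"${IDENTITY_DIR}\",destination=/app/identity \
 --mount type=bind,source=\"${STORAGE_DIR}\",destination=/app/config \
 --name storagenode${NODE} storjlabs/storagenode:beta"

echo "$cmd"   # review the command first, then run it with: eval "$cmd"
```

Printing the command before running it makes it easy to spot a port or path that was not bumped for the new node.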

Port forwarding on my firewall (screenshot)

More detailed instructions are in the standard Storj Installation Steps which this guide is based upon.

Thanks for helping me out!

H/T @kevink


I wouldn’t do that. Instead, remove the old watchtower and run a single watchtower again that will update both nodes:
docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode storagenode2 watchtower --stop-timeout 300s
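The switch to a single watchtower boils down to three commands; here is a dry-run sketch that only prints them (nothing is stopped or removed until you run them yourself):

```sh
# Sketch: consolidate to one watchtower that updates both nodes.
# Printed rather than executed so the steps can be reviewed first.
STEPS='docker stop watchtower
docker rm watchtower
docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode storagenode2 watchtower --stop-timeout 300s'
printf '%s\n' "$STEPS"
```

Note that the container names to watch (storagenode storagenode2) are passed as arguments at the end, so a third node later just means adding its name and recreating the watchtower.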

(btw: --interval is not needed anymore, it is ignored by watchtower now)


Thanks! - I will edit the original post and add it to my working notes!

Not that I don't see that this is practical, but then I think of my 9 HDDs that I don't feel can keep up with serving one storagenode... I'm not sure that putting two nodes on one drive does you any good at all; at best it doubles your vetting time because it's two nodes. On the upside, you can really evaluate performance across different configurations.

The context here is that I have one 10 TB HDD dedicated to a single node. I am nearly out of space and I want to add another HDD (14 TB) and the best practice is to create a new storagenode for a single HDD. I also plan to add a 3rd HDD and another storagenode to the same computer. As long as my RAM, SATA interfaces and CPU can handle this, I will keep adding HDDs to the same computer. Since the drives only consume about 6 watts, this is the most efficient way to grow the size of the storage farm.


@kevink - I ran the docker ps -a command and see that two watchtower instances are running. Looks like an older version. I assume they are both safe to delete when I update the rig for two nodes.

For multiple nodes, wouldn't a single watchtower update all nodes at the same time? Storj wants randomization when nodes update, to avoid all nodes going down at once. So having a separate watchtower for every node makes sense.

Sorry, not sure how I misunderstood the title when looking at it now.
I doubt your CPU or RAM will be any real limitation. From what I can tell, my main issue with keeping up with incoming data is that my disks can never keep up; even though the MB/s is quite low, the disks have trouble keeping pace. Of course, adding more nodes will distribute the incoming network traffic between them, so that should help take load off the existing one or ones.

Anything in the Docker part of the storagenode is basically irrelevant, as long as you have the right image pulled and the right run commands for the containers.
You can run docker rm on each container and it will be removed from Docker; running its run command again will recreate it from the pulled image.
That's how you manually update: you shut down the node, do a docker rm on the container, then pull a new image and reuse the run command to launch it back online.

Fairly smooth process, actually.

Thanks for this post !

I now have two nodes on one RPi4 so hopefully the performance will be OK still!
Certainly looking at the nmon stats the Pi4 is idling in terms of disk I/O, memory and CPU so fingers crossed :slight_smile:


Yes, you can remove all watchtower instances and start a new one.
Sometimes when watchtower updates itself, it leaves an old instance behind... which can be frustrating, but it rarely gets an update.

Yes and no. STORJ wants randomization in updates to avoid all nodes going down at the same time to prevent a single piece of data from being unavailable if all 80 nodes holding that piece go offline at the same time.
But all nodes behind a single IP are considered one node and do not have multiple instances of a single piece of data and are therefore safe to go offline at the same time.


FYI - Just added a second node following these instructions - took 8 minutes of downtime to install the HDD and power up the rig. Used a WD HC530 14 TB I found on eBay. I have another one coming in tomorrow. This seems to have been just in time as I was down to 10% available space on my first node.


stole this for my SNO Flight Manual
hope you don’t mind… sadly it’s still very much a work in progress…
expect to be adding some more stuff of my own soon.


I just read through your post, I like the idea of aggregated notes - the content indeed seems to be available in the forums and other sites but finding it easily is a bear.

Well, it works for now; when it expands a bit I'll make it into a PDF or something practical. It's meant to keep a good selection of practical options that make an SNO's job easier, so it is a basic manual covering the most common problems one runs into as an SNO: maybe some configuration selections with their pros and cons, dos and don'ts, and little commands that make a world of difference when one needs them.

a manual to keep SNO’s in the cloud xD


Some more tips after trying to preconfigure several HDDs/node in advance:

  1. You can only have one unused ‘single use identity token’ outstanding at a time. If you try to request more you simply get the same token until it has been put in use and recognized by the satellites.

  2. Linux does not like having HDDs defined in fstab that are not functioning at boot; it interferes with the boot process. I use the following in my fstab config per dedicated storagenode, where 'nofail' allows the boot process to continue without the drive:

UUID=xxxxx /mnt/sn8 ext4 defaults,nofail,x-gvfs-hide 0 2

  3. I also like the 'x-gvfs-hide' option to hide the nodes from the desktop (you have to use the command line to access them)

  4. When preconfiguring additional drives, I stop after step 6 (creating the new identities), since this can be done without an identity token. Then, as I need to authorize new identities, the rest of the process is quick (creating the identities can take quite a while).
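The fstab line above can be generated per drive once you have its UUID (from `sudo blkid`); a small sketch, keeping the placeholder 'xxxxx' from the example entry:

```sh
# Sketch: build the fstab entry for a node drive. 'xxxxx' stands in for
# the real UUID that `sudo blkid /dev/sdX1` would report.
UUID="xxxxx"
MOUNTPOINT="/mnt/sn8"
LINE="UUID=${UUID} ${MOUNTPOINT} ext4 defaults,nofail,x-gvfs-hide 0 2"
echo "$LINE"
# Append it with:  echo "$LINE" | sudo tee -a /etc/fstab
# Then mount without rebooting:  sudo mount -a
```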

… and the node being disqualified within an hour if you do not use the existing folder for your storagenode on the drive: the node will start from scratch in the empty mountpoint.

True, if the drive was already active for Storj, but the scenario here is preparing drives in advance of using them for Storj. I found out the hard way that if the device is in fstab and not alive, you can't boot properly.

Then comment it out. When you actually mount the drive, uncomment the line and do

sudo mount -a

If all correct - the drive will be available. Then after any reboot it will be there too.
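The comment/uncomment workflow can be sandboxed against a temp file instead of the real /etc/fstab; a sketch of the toggle:

```sh
# Sketch: a drive prepared in advance sits commented out in fstab so a
# missing disk cannot block boot; once installed, strip the leading '#'.
FSTAB=$(mktemp)   # stand-in for /etc/fstab in this sketch
echo '#UUID=xxxxx /mnt/sn8 ext4 defaults,nofail,x-gvfs-hide 0 2' > "$FSTAB"

# The drive is now physically installed: uncomment its line
UNCOMMENTED=$(sed 's|^#\(UUID=\)|\1|' "$FSTAB")
printf '%s\n' "$UNCOMMENTED"
# In real life, write this back to /etc/fstab and run: sudo mount -a
rm -f "$FSTAB"
```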
