I read through many of the posts that ask about issues with second nodes, but I wanted to verify each step and ensure it was clear to me and to others who want step-by-step guidance. Please correct any mistakes or vagueness on my part.
Choose a new external port (e.g. 28968) and add it to the port-forwarding rules on your firewall/router
Use your existing wallet address
Request a new authentication token (using the same email address)
Create a new identity - [identity create storagenode2] (where "2" represents a new, unique node name)
Authorize the new identity - [identity authorize storagenode2 auth_token]
Verify the new identity - [grep -c BEGIN ~/.local/share/storj/identity/storagenode2/ca.cert] (expect response "2") and [grep -c BEGIN ~/.local/share/storj/identity/storagenode2/identity.cert] (expect response "3")
No need to install Docker - it already exists
Run new storage node (include new port ranges, paths and parameters - see example below)
Stop the watchtower: docker stop watchtower
Remove the watchtower: docker rm watchtower
Run watchtower on all nodes [docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode storagenode2 watchtower --stop-timeout 300s]
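For the "run new storage node" step, the command for the second node might look roughly like this — a sketch only, where the hostname, wallet, email, storage size, and paths are placeholders you'd replace with your own values:

```shell
# Sketch of a run command for a second node (assumed ports/paths; adjust to your setup).
# External port 28968 maps to the container's standard 28967; dashboard moved to 14003.
docker run -d --restart unless-stopped \
    -p 28968:28967/tcp -p 28968:28967/udp \
    -p 127.0.0.1:14003:14002 \
    -e WALLET="0xYourWalletAddress" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.hostname:28968" \
    -e STORAGE="13TB" \
    --mount type=bind,source=/home/user/.local/share/storj/identity/storagenode2,destination=/app/identity \
    --mount type=bind,source=/mnt/storagenode2,destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest
```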
I wouldn't do that. Instead, remove the old watchtower and run a single watchtower again that will update both nodes: docker run -d --restart=always --name watchtower -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storagenode storagenode2 watchtower --stop-timeout 300s
(btw: --interval is not needed anymore, it is ignored by watchtower now)
Not that I don't see how this is practical, but then I think of my 9 HDDs that I don't feel can keep up with serving 1 storagenode... I'm not sure that putting two nodes on one drive does you any good at all. At best it doubles your vetting time, because it's two nodes. On the upside, you can really evaluate performance under different configurations.
The context here is that I have one 10 TB HDD dedicated to a single node. I am nearly out of space and I want to add another HDD (14 TB) and the best practice is to create a new storagenode for a single HDD. I also plan to add a 3rd HDD and another storagenode to the same computer. As long as my RAM, SATA interfaces and CPU can handle this, I will keep adding HDDs to the same computer. Since the drives only consume about 6 watts, this is the most efficient way to grow the size of the storage farm.
@kevink - I ran the docker ps -a command and see that two watchtower instances are running. Looks like an older version. I assume they are both safe to delete when I update the rig for two nodes.
For multiple nodes, wouldn't having a single watchtower update all nodes at the same time? Storj wants randomization in node updates to avoid all nodes going down at once. So having a separate watchtower for every node makes sense.
Sorry, not sure how I misunderstood the title, looking at it now.
I doubt your CPU or RAM will be any real limitation. From what I can tell, my main issue with keeping up with incoming data is that my disks can never keep up... even though the MB/s is quite low, the disks have trouble keeping up. Of course, adding more nodes will distribute the incoming network traffic between them, so that should help take load off the existing one(s).
Anything in the Docker part of the storagenode is basically disposable, as long as you have the right image pulled and the right run command for each container.
You can do a docker rm on each one; it will be removed from Docker, and running its run command again will recreate it from the already-pulled image.
That's how you manually update: you shut down the node, do a docker rm on the container, then pull a new image and reuse the run command to bring it back online.
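The manual update cycle described above looks roughly like this in shell terms (a sketch, assuming the default container name and that your data and identity live on bind mounts outside the container):

```shell
docker stop -t 300 storagenode             # give the node up to 300s to shut down cleanly
docker rm storagenode                      # remove the container; data/identity are on bind mounts, so nothing is lost
docker pull storjlabs/storagenode:latest   # fetch the new image
# ...then re-run your original 'docker run' command to start the updated node
```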
I now have two nodes on one RPi4 so hopefully the performance will be OK still!
Certainly, looking at the nmon stats, the Pi4 is idling in terms of disk I/O, memory and CPU, so fingers crossed.
Yes, you can remove all watchtower instances and start a new one.
Sometimes when watchtower updates itself, it leaves an old instance behind... which can be frustrating, but watchtower itself rarely gets an update.
Yes and no. Storj wants randomization in updates to avoid all nodes going down at the same time, to prevent a single piece of data from becoming unavailable if all 80 nodes holding that piece went offline at once.
But all nodes behind a single IP are considered one node and do not hold multiple instances of a single piece of data, so they are safe to go offline at the same time.
FYI - Just added a second node following these instructions - took 8 minutes of downtime to install the HDD and power up the rig. Used a WD HC530 14 TB I found on eBay. I have another one coming in tomorrow. This seems to have been just in time as I was down to 10% available space on my first node.
stole this for my SNO Flight Manual
hope you don't mind... sadly it's still very much a work in progress...
expect to be adding some more stuff of my own soon.
I just read through your post, I like the idea of aggregated notes - the content indeed seems to be available in the forums and other sites but finding it easily is a bear.
Well, it works for now; when it expands a bit I'll make it into a PDF or something practical. It's meant to collect a good selection of practical options that make an SNO's job easier... so it's a basic manual covering the most common problems one runs into as an SNO: configuration choices with their pros and cons, dos and don'ts, and the little commands that make a world of difference when you need them.
Some more tips after trying to preconfigure several HDDs/node in advance:
You can only have one unused "single-use identity token" outstanding at a time. If you try to request more, you simply get the same token until it has been put to use and recognized by the satellites.
Linux does not like having HDDs defined in fstab at boot that are not present; it interferes with the boot process. I use the following options in my fstab config for each dedicated storagenode drive, where "nofail" allows the boot process to continue without the drive.
I also like the "x-gvfs-hide" option to hide the node mounts from the desktop (you have to use the command line to access them).
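As an illustration, a per-node fstab entry using those options might look like this — the UUID and mountpoint are placeholders, and the filesystem is assumed to be ext4:

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storagenode2  ext4  defaults,nofail,x-gvfs-hide  0  2
```

With "nofail", a missing or dead drive is simply skipped at boot instead of dropping you into an emergency shell.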
When preconfiguring for additional drives, I stop after step 6 (creating the new identities) since this can be done without an identity token. Then as I need to authorize new identities, the rest of the process is quick (creating the identities can take quite a while).
... and the node being disqualified within an hour if you do not use the existing storagenode folder on the drive: the node will start from scratch in the empty mountpoint.
True, if the drive was active for Storj, but the scenario here is preparing drives in advance of using them for Storj. I found out the hard way that if the device is in fstab and not alive, you can't boot properly.