2nd node on 2nd machine


I’m trying to set up a second node on a different computer using the Windows GUI. My understanding is that it should be on a different port, like 28968. Should I then use mystaticIP:28968 in the GUI installer?

Should I forward 28968 to 28967 with my router on the new node’s machine? I tried different combinations without success. The Storj node service always failed to start.



you can do it both ways: forward 28968 to 28967 for the second pc, or forward 28968 to 28968, but then you need to set

```yaml
# public address to listen on
server.address: :28968
```

in the config file.
the default is 28967; the node won’t listen on any other port without this change.

Hello @geosimsimma,
Welcome to the forum!

Yes, you should use the external_address:28968 in the GUI setup, and yes, you should forward 28968 to 28967 on the IP of your second PC. Alternatively, you can change config.yaml to listen on another port and forward 28968 to that port, as noted by @Vadim.
You should also allow inbound traffic to TCP port 28967 (or whichever port you use in the config if you choose the second option) in the Windows firewall.
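To make that second option concrete, here is a sketch of the relevant config.yaml lines on the second machine. The port and address are just the examples from this thread, and `contact.external-address` is the key the external address is normally set under; adjust both to your setup:

```yaml
# public address to listen on (the port the node binds locally)
server.address: :28968

# address the satellites use to reach this node
# (your static IP plus the port forwarded on the router)
contact.external-address: mystaticIP:28968
```

After editing, restart the storagenode service so the changes take effect.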

Please post the last 20 lines from the log so we can see why the service is failing to start; it may not be related to ports at all: https://documentation.storj.io/resources/faq/check-logs

Hi Alexey,

Thanks for the warm welcome. I found the problem by checking the logs. It was a typo in the static IP address. Thank you very much for the fast support!

Happy new year!

i edited the config.yaml until i was advised to set it up in the run command instead…
works pretty well and saves jumping through a few extra hops sometimes…
this way the internal docker storj connection stays on the default port, and the external lan connection coming from the router uses whatever you write as the first port after the node’s ip address in the run command.

```shell
docker run -d --restart unless-stopped --stop-timeout 300 -p
```

not sure which way i prefer, but the run command gets changed the most, so at least this way the changes are made in one location and thus it’s a bit less confusing to troubleshoot…
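For reference, a sketch of how the truncated command above might look once the port mapping is filled in. The image name, `ADDRESS` variable, and ports are the usual storagenode defaults plus the 28968 example from this thread, not something verified against your setup; the command is only built and printed here, not executed:

```shell
# Build the run command as a string so the port mapping is easy to see.
external=28968   # port forwarded on the router / used in the external address
internal=28967   # default port the node listens on inside the container

cmd="docker run -d --restart unless-stopped --stop-timeout 300 \
  -p ${external}:${internal}/tcp \
  -e ADDRESS=mystaticIP:${external} \
  --name storagenode storjlabs/storagenode:latest"

echo "$cmd"
```

The first number in `-p external:internal` is the only one you change per node; the container side stays on the default.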

i think @Alexey recommended this to me

I generally like the binary files since I can use the config directly, make any kind of changes, and just restart the node and be done. No need to remove the container and start it again.
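The day-to-day difference the posts above describe, sketched as commands (service and container names are the common defaults; adjust to yours):

```shell
# Binary install: config lives in config.yaml, so a change is just edit + restart.
#   Linux:   sudo systemctl restart storagenode
#   Windows: Restart-Service storagenode        (PowerShell)

# Docker install: config lives in the run command, so a change means
# recreating the container.
#   docker stop -t 300 storagenode
#   docker rm storagenode
#   docker run -d ... storagenode               # the full run command again
```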

that’s a very good point, i’m still in the process of getting to grips with how my network is going to look, nodes names and such… for now i think i see an advantage for me using the run command method…

but like with most things there isn’t really a single answer… whatever works.
it certainly would be nice if i could simplify the run commands… but i’ve also been changing them so often that i’m not sure…

the docker rm storagenode does kinda pop up a lot :smiley:
maybe that’s my answer… i should try both setups side by side and see which one wins out.

Well, the docker run command is pretty fast by comparison and leaves less room for mistakes, compared to running the binary files, which are faster if you know exactly what you’re doing.

I kinda got tired of having to stop the docker container, then remove the container, then remember my command line for starting the node back up…

i’m still kinda new to the whole linux thing… i finally got the \ to break the lines down now…
thus far my run command has been this half-a-kilometer command line that i mostly just copy-pasted.

ofc that also depends on how everything is set up… sometimes when i end up in only a linux console i’m still kinda lost… yesterday i failed to access a usb flash drive on my server.

i’m sure i could have figured it out, but i’m sure there was a much simpler route than doing a mount and everything.

Yeah, all my linux machines are headless; my main server is command line only, and my windows server is command line only too. As much as I love a GUI, it’s so much more taxing.

i will agree there are certain advantages to the command line… but the same is true for GUI.
kinda like having both, cannot imagine how i would get anything done with only a command line… i mean my server is “headless” but only because it’s too old to run a GUI xD
or at least i’ve tried to install one twice and failed because it’s antique, but the web interface in proxmox works pretty well… not perfect, but it makes many things simpler, and i most often end up in the console anyways…
so not like i’m not learning… just still have training wheels on lol
and i would argue that i get a ton more done with GUIs and multiple monitors… using a single monitor and being stuck in a console sends shivers down my spine lol

I think that is everyone’s first experience with no GUI. Both do have their use cases though, you’re right about that. No GUI for the desktop, but most run a backend web server, so you can still do everything you need through a webpage.

You can use scripts or docker-compose.yaml
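For example, a minimal docker-compose sketch for a storagenode; the image and mount points are the commonly documented ones, while the wallet, email, storage size, and host paths are placeholders you would replace:

```yaml
services:
  storagenode:
    image: storjlabs/storagenode:latest
    restart: unless-stopped
    stop_grace_period: 300s
    ports:
      - "28968:28967/tcp"   # external:internal, per-node external port
    environment:
      - ADDRESS=mystaticIP:28968
      - WALLET=0x0000000000000000000000000000000000000000
      - EMAIL=operator@example.com
      - STORAGE=2TB
    volumes:
      - /mnt/storj/identity:/app/identity
      - /mnt/storj/data:/app/config
```

With this, `docker compose up -d` replaces the long run command and `docker compose down` replaces the stop/rm pair.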

I was going to say the same thing. All my Docker services are orchestrated via docker-compose and stored in my local Gitea repo.

Overall, I like both the Docker and native versions of the storagenode service. The native storagenode-updater provides a more controlled update rollout across all nodes rather than depending on watchtower for Docker nodes. But for me, the Docker storagenode fits nicely alongside other services I host on my servers.

We plan to integrate an updater into the docker images as well. It should update the binary inside the docker container the same way the binary version does.
So, we would have a base image which is auto-updated as soon as it’s started; it should then keep the storagenode and the updater itself up to date.
However, it’s better to run watchtower too: if we update the base image, watchtower will take care of it.
This should eliminate the need to migrate to the binary version for operators who like the docker version, and make it possible to keep storagenode updated the same way, and with the same cadence, as the binary.


This is really positive news. I think having both watchtower for base-image updates and the internal storagenode-updater is a really solid combination. Looking forward to it, thanks!