Second or new node setup completed, but still offline

Not sure where this kind of bug report is supposed to go… so it went here…
Already solved this, so… don't bother, even though I'm sure some will :smiley:

I was setting up a second node using port 28968, and I did include the port in the run command as instructed. However, after some troubleshooting of my second node, which didn't want to go online, I found that I had to change the config.yaml line

# public address to listen on
server.address: :28967

to

# public address to listen on
server.address: :28968

Not sure if that's an oversight or done on purpose… if it's supposed to be like that, maybe the documentation should describe that the port in the run command and/or config.yaml needs to be the same, or changed depending on the setup…

That's hard-coded, hence it shows up in config.yaml. There are two options: one you already pursued, and the second is deleting the config.yaml file and restarting the node. This will create a new config.yaml file incorporating the new data.
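Roughly something like this, assuming the container is named storagenode2 and the storage directory mounted to /app/config is /mnt/storj/node2 (both placeholders, adjust to your setup; renaming the old file keeps a backup just in case):

docker stop -t 300 storagenode2
mv /mnt/storj/node2/config.yaml /mnt/storj/node2/config.yaml.bak
docker start storagenode2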

PS: You can mark your post as the answer too.

I didn't have the option for marking a solution turned on, not that I spotted anyway… I did look for it…

Ah right, I forgot about the option of deleting the config.yaml.

Maybe this happened because I didn't get everything right on the first try… but I think I tried running it with the right port number from the beginning, so I don't see why it would even create the initial config.yaml with the wrong port…

Especially if it can create it correctly when I delete it… but it wasn't a smooth launch, so maybe something went wrong…

I thought this was overridden by whatever was included in the docker run command via the entrypoint script? I recently set up a second node on port 28968 without any issues. In fact, the config.yaml file still has the entry server.address: :28967. The node has been running flawlessly on port 28968.

It sure didn't work for me. I do run Docker inside a container, but I don't see how that should really affect it…

It sure didn't want to work before I changed it…

I certainly believe you, but I suspect it is something specific to your setup. Changing the config file does involve a node restart, so technically that restart could have affected it. I don’t think it is a universal problem though as lots of people have been running multiple nodes and I haven’t seen this specific problem/solution being brought up before. And, as mentioned, doesn’t align with my experience.

I do have an IP address specified in the run command; maybe that affects it… it's the only related thing I can think of that might do something like this…

Something I didn't need to do, because the container actually only has one IP address, but I basically copied my run command from my main node, which runs on the host with many IP addresses, so there I had to specify the IP address I wanted it to use, because I wanted it to utilize a dedicated NIC.

I can confirm that with docker setups you don’t have to make any changes at all to the config.yaml. So I recommend not touching it at all. It doesn’t matter whether you run it inside a VM or another container (although… Why would you?). Everything you need to change for the second node is in the run command. Make sure you never use the same external port or the same identity as your old node though. That’s a great way to get them both disqualified.
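For reference, the run command for a second node usually looks something like this (paths, names and the DDNS address are placeholders, and the exact flags may differ slightly from the current documentation):

docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967 \
  -p 127.0.0.1:14003:14002 \
  -e WALLET="0x..." \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.address:28968" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/mnt/storj/node2/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/node2,destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest

Only the host side of the port mappings, the external address, the paths and the container name change; inside the container the node still listens on 28967.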

Hehe, yeah, the port or identity thing would be bad, I guess… though I would say that's a new-software thing… I suspect one day there will be safeguards against that… nevertheless, good advice…

I couldn't get it to work without changing the config.yaml to the correct port number…
I suspect it has to do with the run parameters I'm using… I have thought about removing the specified IP address in the run command to check if that was what caused the inability to get online.

It is kind of odd though, because the run command defines both the IP and the port number, but the config.yaml only defines the port number, so that would mean I get the IP from the run command and the port from the config.yaml… which is weird… but yeah, I haven't had the time or the inclination to dig into it… it works now… and I've got too much other stuff to deal with.

I run Docker inside a container so that I can keep separate statistics in my Proxmox "hypervisor", no other reason really… it just looks better and everything is monitored in one interface.

For now I'm just verifying that it works and troubleshooting; I still have some errors popping up on the server, even though everything seems to work… the new storagenode runs fine…
Something to do with Docker overlayfs, which is messed up.

Alas, I digress…

It does, you’re confusing the contact.external-address with server.address. The first one specifies the external address that’s also set in the run command. This is used by the satellite to find your node. The second specifies what address the node listens to. You can specify an IP to listen to a specific interface, but it’s usually omitted so it listens to all interfaces. Since you had to change the port for that last address I’m almost certain you specified -p 28968:28968 instead of -p 28968:28967 in the run command. Next time when asking for help it would really help to post that run command as this error would have been spotted instantly by a lot of people on this forum.
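To spell out the difference (host port on the left of the colon, container port on the right):

-p 28968:28968 forwards host port 28968 to container port 28968, but the node inside the container still listens on :28967 by default, so nothing answers unless config.yaml is edited.
-p 28968:28967 forwards host port 28968 to the container's default :28967, so config.yaml can stay untouched.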

I didn't know exactly what the port numbers meant and didn't bother looking it up… I just changed all of them from 28967 to 28968 and configured every network setting to 28968.

It's maybe a bit counterintuitive that the storagenode port is changed in the config.yaml… but I suppose that's a Docker thing, and the other port is related to the Docker image… so I suppose that makes sense, now that I think about it…

It didn't take me more than 15–20 minutes to figure out; I've used the config.yaml before, so it was an obvious place to look… going to the forum for help is like cheating, lol, I do that when I give up…

I can see it is also defined like that in the documentation… I think it's weird, and it also makes it much more likely that people would confuse the external ports with the internal ports and thus manage to get their nodes DQed if they ran on the same port…

I've been thinking that maybe I shouldn't really have used Docker… it was just what was listed as the recommended method when I installed it… but for my setup, it seems like a bit of an awkward way to set things up.

Of course I'm only reaching that conclusion now that I'm basically done getting everything to work like I wanted, lol… but hey, I've gotten pretty decent at tinkering with Docker… so that's a plus.

You should be using docker for now, since there is currently no other way to update automatically.

But you should either know what the ports mean or follow the available guides. Otherwise you're just shooting in the dark, and there are a few too many ways to screw that up.

You shouldn't touch the config.yaml if you use the docker version. The container does not change any ports in the config.yaml:

The environment variables you specify with -e options are just passed as runtime parameters to storagenode run.
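For example, something like this (the flag names are based on the config keys, so double-check them against your config.yaml):

-e ADDRESS="your.ddns.address:28968" becomes --contact.external-address=your.ddns.address:28968
-e STORAGE="2TB" becomes --storage.allocated-disk-space=2TB
-e WALLET="0x..." becomes --operator.wallet=0x...

Command-line parameters take precedence over the values in config.yaml.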

Well, I don't want my second node to run on 28967, because it's stupid not to run it on a different port even if it's inside Docker, or wherever; I want it to run on the same port all the way through, just for simplicity's sake.

So it's fine as it is…

@Alexey
Why shouldn't I change stuff in the config.yaml? It's where I change all the other things I do when I set up a node… like setting the log level to debug.
So give me a good reason why…
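For example, the kind of line I mean (the exact key name is from memory, so treat it as an assumption):

# the minimum level at which log entries are written
log.level: debug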

Yes, the port had to be set to 28968 in the config.yaml for a run command with only port 28968 being used.

You shouldn’t change anything for the ports in the config file. All basic setup can and should be done through the run command. Sure this alternative works, but by using alternative setups, you’re on your own with bug hunting should issues arise. Help given on the forum won’t match your setup any longer. Recovering your config by simply deleting the config.yaml and letting the node recreate it will no longer work. Other instructions and documentation will assume your setup is as normally instructed too.

It’s generally just a bad idea to deviate from standard setups, ESPECIALLY when…

It's a simple port number, relax… I'm sure it will be fine.
It's not like it's something advanced I've changed, and I would still argue that it's stupid to run multiple services that aren't supposed to overlap on the same port; it just makes keeping track of where stuff is going even more difficult.
I might change the ports further though… so that I have a system where each IP, container name, storagenode name and port reflect the same number, like say putting the main storagenode on an IP ending in 67… of course then I would have to call it storagenode67…
which also seems kinda stupid.
Of course those will exist inside Docker, so that would most likely be one container for all storagenodes and thus only one IP… so it's just about what's easy to remember and keep track of when I get back to this stuff after a year or whatever… still a work in progress.
Nothing worse than an advanced setup made even more advanced by confusing configurations.

You asked, I answered… very relaxed, I can assure you.