Node status offline - last contact 17701289h 59m ago

Dear All,
I recently restarted the Linux server, and after that my node shows this:

STATUS Offline
LAST CONTACT 17701289h 58m ago
VERSION v1.3.3

Please note that it is running in an LXC container.
I have checked the port, and it seems that 28967 is closed, although I have not changed anything on my router.
Could it be offline because of the closed port?
If so, could you please give me any hints on where to continue troubleshooting?
Thank you in advance.

this is in the log:

Has the IP of the server or the LXC container changed?

The IP is the same, I can ping it, and the node dashboard loads fine; it just says status offline :frowning:
All my other LXC containers work except Storj (zoneminder, openvpn, pihole, transmission, …).

What OS are you using? Proxmox?

I use Ubuntu Linux 18.04 as the base system.

Not sure if it counts, but shouldn't it be listening on TCP over IPv4, not IPv6?
root@storj:~# netstat -ano | grep tcp
tcp 0 0* LISTEN off (0.00/0/0)
tcp 0 0* LISTEN off (0.00/0/0)
tcp6 0 0 :::14002 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::22 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::28967 :::* LISTEN off (0.00/0/0)

I just found that Proxmox often tends to apply network configuration changes only after a reboot… so something I changed a week or multiple weeks ago will suddenly break everything.
Mainly because I end up setting a lot of the interface configurations manually, and I'm kinda new to Linux… so that usually goes real smooth… lol

Depends on what your ISP / home network is using and such… but yeah, IPv4 is the most commonly used, I think… though both should work…

Also, it kinda depends on what your router is forwarding to it…
I mean, you would have opened up a port to send traffic to it, and there you would have defined which port and protocol it should use.

That is, if you aren’t using the DDNS thing…
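To check whether the port is actually reachable, rather than relying on the dashboard, a quick probe can be run first inside the container and then from another machine on the LAN (port 28967 and the loopback host below are assumptions to adjust; bash's built-in /dev/tcp redirection avoids needing nc installed):

```shell
#!/bin/sh
# Probe a TCP port the way the router would. Run with HOST=127.0.0.1 inside
# the container, then with HOST=<container IP> from another LAN machine,
# to see where the connection stops.
HOST=127.0.0.1   # replace with the container's IP when testing from outside
PORT=28967

if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
    STATE=open
else
    STATE=closed
fi
echo "port $PORT on $HOST is $STATE"
```

If the port is open inside the container but closed from the LAN, the problem is between the container and the host (bridge/firewall); if it is open on the LAN but closed from outside, it is the router's forwarding rule.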

It was working before, so we need to figure out what changed…

Well, going through the configuration documentation from Storj can sometimes help… but I fear it might not in this case…

Well, if it happened after a reboot and not before, then we can exclude the external network.
That leaves your firewalls and VM network setup… maybe some routing, depending on your setup.
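Since the node also has to dial out to the satellites, it is worth checking outbound connectivity and DNS from inside the container too, not just the inbound port. A minimal sketch (the probe targets below are generic examples, not anything Storj-specific):

```shell
#!/bin/sh
# Inside the container: check outbound TCP and DNS, since the node must
# reach the satellites even when inbound 28967 is the suspected problem.
if timeout 3 bash -c "exec 3<>/dev/tcp/1.1.1.1/53" 2>/dev/null; then
    OUTBOUND=ok
else
    OUTBOUND=blocked
fi
echo "outbound TCP: $OUTBOUND"

# DNS resolution (hostname here is just an example target):
if getent hosts example.com >/dev/null 2>&1; then
    DNS=ok
else
    DNS=broken
fi
echo "DNS lookups: $DNS"
```

If either check fails after a reboot, the container's network config (bridge, gateway, resolv.conf) is the first place to look before blaming the router.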

Checking that the VM has internet is of course a good start… and like you already said, you can ping the VM… so either it has the wrong IP and you are pinging the wrong one,
or there are network configurations that hadn't taken effect before the reboot… maybe if you added more network cards to the VM, something like that could mess it up. I use the static IP of a dedicated NIC which I route my VM through, and then I use that same static IP address in the storagenode docker run command.

That way there isn't any room for them to be misaligned… then it's down to the ports being open.
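For reference, pinning the node to one static host IP in the port mappings looks roughly like this. Note this mirrors the Docker-based setup described above, not the original poster's LXC setup, and every value (the 192.0.2.10 host IP, paths, address, wallet) is a placeholder to replace with your own:

```shell
# Sketch only: docker run with port mappings pinned to one static host IP
# (192.0.2.10), so traffic can't arrive on an unexpected interface.
docker run -d --restart unless-stopped \
  -p 192.0.2.10:28967:28967 \
  -p 127.0.0.1:14002:14002 \
  -e WALLET="0xYOURWALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.example:28967" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```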

Ohh dear… long story short:
the UniFi USG was the issue.

I have a UniFi USG router/firewall, and it is managed from server “B” (the UniFi controller).
I restarted server “A”, which also had a UniFi controller → server “A”'s controller took over and all my devices became unavailable…

Now I have stopped server “A”'s UniFi service and voilà, it works, my node is up and running :slight_smile:
Thanks all
