Storagenode does not start (network connectivity problem)

It looks like my current Internet provider (AT&T) is blocking NTP in both directions, allegedly to prevent reflective DDoS attacks.

I had to reboot my system (because the Storj storagenode Docker container was leaking memory and ended up allocating over 32 GB of RAM!), but now the node does not want to come up anymore because it fails its pre-flight check (log image attached).

Does anybody have any idea how to solve this problem?

Thank you

You could try HTP over HTTPS:
http://www.vervest.org/htp/
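For context, HTP (HTTP Time Protocol) derives the time from the `Date` header that every HTTP/HTTPS response carries, so it gets through firewalls that block UDP port 123. A minimal sketch of the idea in shell (the canned response is made up, and GNU `date -d` is assumed; the real htpdate tool from the link above does this properly):

```shell
# HTP in a nutshell: every HTTP response carries a Date header, and htpdate
# derives the correct time from it. Sketch with a canned response:
hdr='HTTP/1.1 200 OK
Date: Tue, 21 Apr 2020 12:34:56 GMT
Content-Type: text/html'

# Extract the Date header value:
remote=$(printf '%s\n' "$hdr" | awk -F': ' 'tolower($1)=="date" {print $2}')
echo "$remote"

# Convert it to epoch seconds (GNU date assumed; BSD date differs):
date -u -d "$remote" +%s
```

In real use the response would come from any reachable HTTPS server rather than a canned string; the resolution is coarser than NTP (about a second), but that is plenty to pass a clock sanity check.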

Thank you for the pointer, BUT how would I convince the storagenode to do the same for its sanity check? It looks like the node is making NTP requests, isn’t it?

Thanks

This looks like a connectivity issue. Please check that your firewall or provider isn’t blocking outgoing traffic on other ports as well, not only NTP.
Please use this checklist to troubleshoot:
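To work through such a checklist quickly, outbound TCP ports can be probed from the shell. A small sketch using bash’s built-in `/dev/tcp` (the host and port below are placeholders; in real troubleshooting you would probe your satellite and node addresses):

```shell
# Probe an outbound TCP connection with a 3-second deadline.
# Uses bash's /dev/tcp pseudo-device, so no nc/telnet is required.
probe() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 blocked"
  fi
}

# Example against a local port that is almost certainly closed;
# substitute a satellite address and port when troubleshooting for real.
probe 127.0.0.1 1
```

A result of “blocked” on ports that should be reachable points at the firewall, the provider, or (as it turned out here) the wrong network interface.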

Thanks, Alexey, for your reply; actually (of course) it was my bad.

The default route got switched when I had to restart the node, and Docker was trying to use a NIC that does not face (nor have access to) the outside.

My NTP problem with AT&T persists, but the problem was indeed connectivity related (the node could not get the list of trusted satellites). Switching the NIC priority solved the issue.
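For anyone hitting the same thing: the kernel picks the default route with the lowest metric, so “NIC priority” comes down to route metrics. A sketch of how that selection works, using canned `ip route show default` output (the interface names and addresses are made up):

```shell
# Two default routes, as `ip route show default` might print them on a
# dual-NIC box; the kernel prefers the one with the lowest metric.
routes='default via 192.168.1.1 dev eth0 metric 100
default via 10.0.0.1 dev eth1 metric 200'

# Pick the winning route (lowest metric is field 7) and extract its NIC:
winner=$(printf '%s\n' "$routes" | sort -k7 -n | head -n1)
default_if=$(printf '%s\n' "$winner" | awk '{for (i=1; i<NF; i++) if ($i=="dev") print $(i+1)}')
echo "$default_if"
```

On a real system something like `ip route replace default via <gateway> dev <nic> metric 50` would promote the internet-facing NIC, though on a Synology the DSM network UI is the safer place to reorder interfaces.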

Please mind that the restart of the system was due to the node eating up to 32 GB of RAM. It was impossible to kill or forcefully remove, and I had to restart the whole machine to bring the system back to sanity (currently running storagenode v1.1.1 inside Docker).

Thanks for helping!

Such huge RAM usage indicates slow storage. Are you using any kind of network storage?
SMB and NFS are not compatible with the storagenode; the only supported network protocol is iSCSI. But even that is slower than a locally connected drive.
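One mitigation, whatever the root cause, is to cap the container’s memory so a runaway backlog can’t take down the whole NAS. A sketch of the relevant flags only (the path, container name, and 4 GB figure are examples, and the real storagenode run command needs additional flags and environment variables not shown here):

```shell
# Hypothetical, abbreviated run command: --memory caps the container, so the
# kernel OOM-kills just the container instead of starving the whole box.
docker run -d --name storagenode \
  --memory 4g --memory-swap 4g \
  --mount type=bind,source=/volume1/storj,destination=/app/config \
  storjlabs/storagenode:latest
```

Whether 4 GB is the right ceiling depends on the node size and how far the disk can fall behind.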

The node is running in Docker on a Synology NAS. Access to the disk pool is direct, as the dedicated directory is bind-mounted. The underlying filesystem is Btrfs.

I had no problems whatsoever in the past months. The problem only started yesterday, and I only realized it today, when the NAS was brought to its knees and all the other services became unresponsive.

The Docker container was showing higher-than-usual RAM usage (roughly 2.5 GB), but when I killed it (I should say tried to!) it freed more than 20 GB, hence my suspicion of a memory leak. Usually the node does not consume more than 1.9 GB.

After trying to kill the node I had to restart the whole NAS, because Docker could not kill the container in any way, nor stop it (even though the processes inside were dead/zombied).
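For reference, the usual escalation when a container refuses to die (the container name is an example):

```shell
docker stop -t 30 storagenode   # graceful stop, 30 s before SIGKILL
docker kill storagenode         # send SIGKILL directly
docker rm -f storagenode        # force-remove, even if still "running"
```

If even `rm -f` hangs, the processes are typically stuck in uninterruptible I/O (state `D` in `ps`), which no signal can clear; only a reboot helps, which matches what happened here.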

How is this an NTP or Storj problem? Your clock is incorrect; fix that. And how would “both directions” be a problem? Are you saying Storj wants YOU to run an NTP daemon?

The clock was correct (BUT the NTP server was not reachable), and the problem (if you read the previous answers/posts) was related to network connectivity rather than NTP. The node could not reach the satellites to confirm time correctness.

Removed NTP from the title
