Node at 0% online

Running Storj via Docker
~/storj_success_rate $ for item in $(curl -sL http://localhost:14002/api/sno | jq '.satellites[].id' -r); do curl -s http://localhost:14002/api/sno/satellite/$item | jq '{id: .id, auditHistory: [.auditHistory.windows[] | select(.totalCount != .onlineCount)]}'; done
"id": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo",
"id": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE",
"id": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6",
jq: error (at <stdin>:1): Cannot iterate over null (null)
"id": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs",
"auditHistory": [
  "windowStart": "2022-05-29T12:00:00Z",
  "totalCount": 1,
  "onlineCount": 0
"id": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB",
"auditHistory": [
  "windowStart": "2022-05-31T12:00:00Z",
  "totalCount": 1,
  "onlineCount": 0
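The `Cannot iterate over null` error above comes from a satellite whose `auditHistory.windows` is still null (no audits received yet). Adding `?` to the jq iteration skips null instead of erroring. A minimal sketch against a made-up payload; in practice the JSON comes from `/api/sno/satellite/<id>`:

```shell
# Sample satellite payload, made up for illustration; in real use this JSON
# would come from: curl -s http://localhost:14002/api/sno/satellite/<id>
payload='{"id":"sat1","auditHistory":{"windows":[
  {"windowStart":"2022-05-29T12:00:00Z","totalCount":1,"onlineCount":0},
  {"windowStart":"2022-05-30T00:00:00Z","totalCount":2,"onlineCount":2}]}}'

# "[]?" iterates the windows but yields nothing (instead of an error) when
# .auditHistory.windows is null, e.g. on a satellite with no audits yet.
bad=$(echo "$payload" |
  jq -c '{id: .id, offline: [.auditHistory.windows[]? | select(.totalCount != .onlineCount)]}')
echo "$bad"
```

Only the 2022-05-29 window survives the filter, since its totalCount and onlineCount differ.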
pi@Arobot:~/storj_success_rate $ sudo ./
========== AUDIT ==============
Critically failed: 0
Critical Fail Rate: 0.000%
Recoverable failed: 0
Recoverable Fail Rate: 0.000%
Successful: 6
Success Rate: 100.000%
========== DOWNLOAD ===========
Failed: 10
Fail Rate: 4.484%
Canceled: 45
Cancel Rate: 20.179%
Successful: 168
Success Rate: 75.336%
========== UPLOAD =============
Rejected: 0
Acceptance Rate: 100.000%
---------- accepted -----------
Failed: 21
Fail Rate: 0.049%
Canceled: 3469
Cancel Rate: 8.071%
Successful: 39491
Success Rate: 91.880%
========== REPAIR DOWNLOAD ====

The first command shows two satellites that were at 0% online,
but the second one says 5 successful audits and none failed.

@Alexey could you help ?

Hello @1qazx ,
Welcome to the forum!

You need to fix an issue with connectivity. Something is blocking requests to your node: either a firewall, your router, or your ISP.
You should not block incoming traffic to the node's port, and you should not block any outgoing traffic from your node, on any port, to any port or host.

Is your node online?


Yes, it's online, and all other satellites say 100% online.

Also, port forwarding is configured.

Also, today I checked ufw and it was blocking something from its own MAC address (so I have disabled it to test). Could this cause the issue? And why would only one satellite be affected?

Two satellites are affected. They requested an audit from your node, but your node did not respond.


I did notice that, but my dashboard only shows eu1 at 0% online.
All others are at 100%.

If you disabled ufw (which is not a good idea; it's better to configure it properly, allowing access to 28967 TCP and 28967 UDP from the internet, and 14002 TCP from the local network and localhost), then you need only wait for the next audits from the satellites.
To have audits, your node should have data from the customers of these satellites, but your node seems too young to have a noticeable amount of data.
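That advice maps onto ufw roughly like this. This is a sketch, not a definitive ruleset, and the 192.168.0.0/16 LAN range is an assumption you should replace with your actual subnet:

```shell
# Storage node traffic from anywhere (TCP, plus UDP for QUIC)
sudo ufw allow 28967/tcp
sudo ufw allow 28967/udp
# Dashboard only from the local network (assumed subnet; replace with yours)
sudo ufw allow from 192.168.0.0/16 to any port 14002 proto tcp
```

Loopback traffic is normally allowed by ufw out of the box, so dashboard access from localhost usually needs no extra rule.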


That is how I configured ufw, and it was still blocking something from its own MAC address.

Also, yes, the node is young, only just over a week old. I also read here that if the node's age is under 30 days, any small downtime could make the online score go to 0%. Is this correct?

My ufw config is as below:

To                  Action      From
--                  ------      ----
28967/tcp           ALLOW       Anywhere
28967/udp           ALLOW       Anywhere
5900                ALLOW
28967/tcp (v6)      ALLOW       Anywhere (v6)
28967/udp (v6)      ALLOW       Anywhere (v6)

14002/tcp and 14002/udp are also allowed now; may that have been my issue?

Yes, that is correct. We have a safety check in place: the first downtime suspension requires 30 days of history. So your current 0% will not have any negative effect, but of course you should continue investigating in order to avoid a downtime suspension down the road.


I believe the first command is showing the satellite's point of view. Every few hours your storage node will ask the satellite for an update, so the data you see there is potentially a few hours old. I am also not sure how accurate it is for the currently ongoing 12-hour window; the final result might change at the end of that window.
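As a rough illustration of the window math (a simplification; the real satellite scoring aggregates a longer history): each 12-hour window contributes onlineCount/totalCount, and the score averages those ratios. The window data below is made up:

```shell
# Each line: totalCount onlineCount for one 12-hour window (made-up sample).
windows='1 0
1 1
2 2
1 1'

# Average onlineCount/totalCount over the windows, as a percentage.
score=$(printf '%s\n' "$windows" | awk '
  $1 > 0 { sum += $2 / $1; n++ }
  END    { if (n) printf "%.1f", 100 * sum / n }')
echo "estimated online score: ${score}%"
# -> estimated online score: 75.0%
```

One fully missed 1-audit window out of four drags the average down hard, which is why a young node with few audits can swing to 0% from a single miss.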

The second script checks the storage node logs, which makes it kind of useless for your problem: if you are missing an audit, how would it show up in the storage node logs? The second script is more for audit failures. The satellite will tell you about audit failures as well, but with a time delay. It still makes sense to check the storage node logs for failures: a shorter feedback cycle and the option to dive deeper into it :slight_smile:
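For that shorter feedback cycle, audit traffic appears in the node log with Action GET_AUDIT. A hedged sketch on made-up log lines; in real use you would feed it `docker logs storagenode 2>&1` instead (the container name is an assumption, adjust it to your setup):

```shell
# Made-up log lines, roughly in the storagenode log format.
log='2022-06-01T12:00:00Z INFO piecestore downloaded {"Action": "GET_AUDIT"}
2022-06-01T12:05:00Z ERROR piecestore download failed {"Action": "GET_AUDIT"}
2022-06-01T12:10:00Z INFO piecestore downloaded {"Action": "GET"}'

# Count audit successes vs failures; "downloaded" does not match the
# "download failed" lines, so the two greps do not overlap.
ok=$(printf '%s\n' "$log"  | grep GET_AUDIT | grep -c 'downloaded')
bad=$(printf '%s\n' "$log" | grep GET_AUDIT | grep -c 'download failed')
echo "audit successes: $ok, failures: $bad"
```

On this sample it reports one success and one failure; the plain GET line is regular customer traffic and is ignored.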


Thank you, and I do plan to minimise my downtime from now on too.