Suspension and audit

Your node is failing to answer, not the satellite. The satellite is trying to reach your node:

This suggests a problem either with your router (for example, it cannot keep up with the load) or with your ISP.
This graph only confirms it:

This doesn't mean that you will not have an IP. This service is similar to your current DDNS: it gives you a hostname, which you should update with your current public IP. It will not give you anything else. So their slogan just means that you no longer need to use the bare IP ("No IP"); you use their hostname instead.
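For illustration, this is roughly all such a DDNS client does behind the scenes. A minimal sketch: the update URL and its parameters are hypothetical, so use the client or endpoint your provider (No-IP, etc.) actually documents.

```python
# Minimal sketch of a DDNS client: look up the current public IP and push it
# to the provider's update endpoint so the hostname keeps resolving to this
# node. UPDATE_URL and its parameters are hypothetical; use the URL/client
# your DDNS provider documents.
import urllib.request

HOSTNAME = "salmanstorj.ddns.net"            # hostname from this thread
UPDATE_URL = "https://example-ddns/update"   # hypothetical update endpoint

def current_public_ip() -> str:
    # api.ipify.org returns the caller's public IP as plain text
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def update_ddns(ip: str) -> None:
    # Most providers accept the new IP as a query parameter on an
    # authenticated GET request; the parameters here are illustrative only.
    url = f"{UPDATE_URL}?hostname={HOSTNAME}&myip={ip}"
    with urllib.request.urlopen(url) as resp:
        print(resp.status, resp.read().decode())

if __name__ == "__main__":
    update_ddns(current_public_ip())
```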

So does this mean I have to replace my router? I use Rogers, so unless they are having issues in their network, it should be stable.

How can I find out whether the issue is the router or the ISP, so I can take the right steps to fix the problem?

While this issue is being fixed, what do you suggest to keep the node running? I assume that eventually more audits will fail and the node will be taken off the network.

It's pretty interesting to see that it goes down for only 2 minutes every few hours.

Your node doesn't fail audits; it doesn't answer them, which is a big difference. In the case of audit failures your node would be quickly disqualified; in the case of a drop in the online score your node would be suspended and can recover if it stays online for the next 30 days.
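If you want to watch those scores yourself, you can query the node's local dashboard API. A minimal sketch, assuming the dashboard is exposed on the default port 14002 and that /api/sno/satellites returns a per-satellite score list; the exact field names vary between node versions, so adjust to what your node actually returns.

```python
# Print audit / suspension / online scores per satellite from the local
# dashboard API. Assumes the default dashboard port 14002; the field names
# ("audits", "auditScore", "onlineScore", ...) may differ by node version.
import json
import urllib.request

def print_scores(base_url: str = "http://localhost:14002") -> None:
    with urllib.request.urlopen(f"{base_url}/api/sno/satellites") as resp:
        data = json.load(resp)
    for sat in data.get("audits", []):  # assumed per-satellite score list
        print(
            sat.get("satelliteName", "?"),
            "audit:", sat.get("auditScore"),
            "suspension:", sat.get("suspensionScore"),
            "online:", sat.get("onlineScore"),
        )

if __name__ == "__main__":
    print_scores()
```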

If you can connect the Internet directly to the device running the storagenode, you can check the connection: if the router was the problem, you will not have downtime anymore.
Please enable/set up a firewall on your device before this check!
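If you cannot bypass the router, another way to narrow it down is to keep pinging the router and an external host from the storagenode machine and compare when each one stops answering. A rough sketch; the router address is an assumption, so replace it with your router's actual LAN IP.

```python
# Router-vs-ISP check: every minute, ping the router's LAN address and an
# external host. If the router stops answering, the problem is local
# (router/LAN); if the router answers but the external host does not, the
# problem is upstream (ISP). ROUTER_IP is an assumption -- adjust it.
import subprocess
import time
from datetime import datetime

ROUTER_IP = "192.168.1.1"   # assumed LAN address of the ISP router
EXTERNAL_IP = "8.8.8.8"     # any reliable external host

def reachable(host: str) -> bool:
    # Linux-style ping: one packet, 3-second timeout
    return subprocess.run(
        ["ping", "-c", "1", "-W", "3", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

while True:
    router_ok, internet_ok = reachable(ROUTER_IP), reachable(EXTERNAL_IP)
    if not router_ok:
        verdict = "router/LAN problem"
    elif not internet_ok:
        verdict = "ISP/upstream problem"
    else:
        verdict = "ok"
    print(f"{datetime.now().isoformat()} router={router_ok} "
          f"internet={internet_ok} -> {verdict}")
    time.sleep(60)
```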

This is actually connected to the router provided by the ISP. There is no way to bypass it.

I will talk to the ISP and show them the UptimeRobot logs.

Thanks, man, for all the guidance and help. How do you explain this? All the satellites were able to connect except these two. I am trying to confirm this is in fact an ISP issue.

2021-09-08T23:13:09.873Z ERROR contact:service ping satellite failed {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 1, “error”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout”, “errorVerbose”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-09-08T23:13:09.898Z ERROR contact:service ping satellite failed {“Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout”, “errorVerbose”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-09-08T23:13:09.981Z ERROR contact:service ping satellite failed {“Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “attempts”: 1, “error”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout”, “errorVerbose”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-09-08T23:13:10.298Z ERROR contact:service ping satellite failed {“Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “attempts”: 1, “error”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout”, “errorVerbose”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-09-08T23:13:13.083Z INFO piecestore upload started {“Piece ID”: “PGWU43MPWX7I765PRHZ66ZBZIIAQET5CWVLNSMOAAA4R7CMHGKQA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Available Space”: 2980381132544}
2021-09-08T23:13:13.256Z INFO piecestore uploaded {“Piece ID”: “PGWU43MPWX7I765PRHZ66ZBZIIAQET5CWVLNSMOAAA4R7CMHGKQA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Size”: 32256}
2021-09-08T23:13:13.715Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S sending {“count”: 689}
2021-09-08T23:13:13.716Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs sending {“count”: 11}
2021-09-08T23:13:13.716Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE sending {“count”: 2}
2021-09-08T23:13:13.716Z INFO orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo sending {“count”: 2}
2021-09-08T23:13:13.716Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB sending {“count”: 1}
2021-09-08T23:13:13.716Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 sending {“count”: 7}
2021-09-08T23:13:13.848Z INFO piecestore uploaded {“Piece ID”: “3TEGAICPWHHDZZE25H2SKL4URBZKHBXHHNKVUPS2DLQYG7GNRJ5A”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Size”: 35328}
2021-09-08T23:13:13.973Z INFO orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo finished
2021-09-08T23:13:14.040Z INFO orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE finished
2021-09-08T23:13:14.237Z INFO orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB finished
2021-09-08T23:13:14.481Z INFO orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs finished
2021-09-08T23:13:14.481Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2021-09-08T23:13:15.735Z INFO orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S finished
2021-09-08T23:13:22.966Z INFO piecestore upload started {“Piece ID”: “E7EC5DBMA7FTUSLSEJOMDB7AVDAVE2JRYFZBO5QWUXZQZMSIPFKA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Available Space”: 2980381063936}
2021-09-08T23:13:23.161Z INFO piecestore uploaded {“Piece ID”: “E7EC5DBMA7FTUSLSEJOMDB7AVDAVE2JRYFZBO5QWUXZQZMSIPFKA”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Size”: 16896}
2021-09-08T23:13:27.837Z INFO piecestore upload started {“Piece ID”: “MCQQ65JWLLLB43TKBZRYK2NGWIB5XS6RFLSLZ4XGDEZULKF4TK2Q”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Available Space”: 2980381046528}
2021-09-08T23:13:28.227Z INFO piecestore uploaded {“Piece ID”: “MCQQ65JWLLLB43TKBZRYK2NGWIB5XS6RFLSLZ4XGDEZULKF4TK2Q”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Action”: “PUT”, “Size”: 23808}
2021-09-08T23:13:31.388Z ERROR contact:service ping satellite failed {“Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 2, “error”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout”, “errorVerbose”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}
2021-09-08T23:13:31.538Z ERROR contact:service ping satellite failed {“Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “attempts”: 2, “error”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout”, “errorVerbose”: “ping satellite: failed to dial storage node (ID: 12cPj7Fu1bGiNxzjpkZTHedY9HRUuaDYRBFuzJF1Q7WiJeLAvJr) at address salmanstorj.ddns.net:28967: rpc: dial tcp 99.246.231.253:28967: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:141\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57”}

That log shows at least 4 satellites unable to connect, and the initial post shows you had issues with a 5th as well. Additionally, UptimeRobot is clearly having the same issues connecting to you. This isn't a satellite-specific thing; it's a "your connection" thing.

You could try resetting the router, but please make note of any changes to settings you made, specifically the port forwards. If you don’t want to go that far just yet, try just restarting it first. If that resolves the issue for a while, but then it comes back, that’s a pretty good signal that your router can’t keep up.


I replaced my router and these errors went away. Thanks, everyone. My audit score for eu1.storj.io is poor; it's down to 0. Since my node now seems to work fine, I am assuming this score will improve. Do I need to be concerned?


It should recover after 30 days online.
Each downtime event will extend the recovery period to another 30 days online.
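Roughly speaking (this is a simplified model of my own, not the exact satellite formula), the online score behaves like the share of audit contacts your node answered in a trailing 30-day window, which is why each downtime block needs up to 30 more days online to roll out of the window:

```python
# Toy model (an assumption, not the exact satellite formula): the online score
# behaves roughly like the answered share of audit checks in a trailing 30-day
# window. A 12-day outage drops the score to about 0.60 and only fully rolls
# out of the window after another 30 days online.
WINDOW_DAYS = 30
OUTAGE_DAYS = 12

def online_score(offline_days_in_window: float) -> float:
    return (WINDOW_DAYS - offline_days_in_window) / WINDOW_DAYS

for days_online_since in (0, 10, 20, 25, 30):
    # outage days that have not yet rolled out of the trailing window
    still_in_window = max(0, min(OUTAGE_DAYS, WINDOW_DAYS - days_online_since))
    print(days_online_since, "days online ->",
          round(online_score(still_in_window), 2))
```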

I started the node 20 days ago. I still see traffic, so I am not sure whether it is suspended or not. If it is, should I just create a new node?

Please just keep it online for the next 30 days; the online score should grow above 60% and the node will come out of suspension mode.
Suspension means that your node would not get any ingress. If you are seeing ingress traffic, then the node is not suspended anymore.

I am getting both egress and ingress traffic. My score should improve quickly, as my node is not even one month old and has only received a few audits.

I hope all this hassle is worth the effort. For 16 TB on a 1 Gbps connection, if I am not getting at least $100 monthly after the 12-month hold-off period, this node is going offline.

Can someone confirm that?

You can calculate with $2.50 - $4.00 per TB stored.

So even with the maximum potential that's $64 for storage. Don't you get paid for egress as well? Would all the other factors take it to $100?

In that estimate the egress is included. It’s a total of storage + egress per TB stored.
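As a quick sanity check of the numbers, using the $2.50-$4.00 per TB range quoted above (a trivial calculation, only as good as that range):

```python
# Back-of-the-envelope check: at a combined (storage + egress) rate of
# $2.50-$4.00 per TB stored per month, a full 16 TB node lands well
# below $100/month.
stored_tb = 16
for rate_per_tb in (2.50, 4.00):
    print(f"${rate_per_tb:.2f}/TB -> ${stored_tb * rate_per_tb:.2f}/month")
# $2.50/TB -> $40.00/month
# $4.00/TB -> $64.00/month
```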

Your node stores more or less real customer data. How much you can make depends on the customers' usage patterns. You may be lucky and store data that gets frequently downloaded/streamed (like this). But you may also be 'unlucky' and store parts of cold backups that never get downloaded. We recently heard from a really lucky SNO: High egress on small node vs larger ones

I think this is now a fairly low estimate. The average at the moment is around $3.80, but it may actually be more on newer nodes, as newly uploaded data is downloaded more frequently than older data. $3.00-$5.00 seems to be a better estimate at this point.

So this means that in order to make $100+ you need to have 30 TB or more of storage.

And it will take a year for this to materialize.

Basically yes.

I don't know what the current calculations are, but you would probably not fill 8 TB in a year…
@BrightSilence

Filling 8 TB will take roughly 2 years at current network behavior, though of course that might change over the course of those 2 years. At the moment you won't ever fill up 30 TB: once you reach a little over 20 TB, deletes of data will roughly match ingress.
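To make that reasoning concrete, here is a purely illustrative model with made-up numbers chosen to roughly match those two statements (constant ingress, deletes proportional to stored data); real network behavior will differ.

```python
# Purely illustrative fill-up model: constant ingress, deletes proportional to
# the data already stored. The parameters are invented to roughly match
# "8 TB in ~2 years" and "deletes match ingress a little over 20 TB".
INGRESS_TB_PER_MONTH = 0.43   # assumed average ingress
DELETE_FRACTION = 0.0215      # assumed monthly fraction of stored data deleted

stored, month = 0.0, 0
while stored < 8.0 and month < 600:
    stored += INGRESS_TB_PER_MONTH - DELETE_FRACTION * stored
    month += 1

print(f"~{month} months to reach 8 TB")                      # ~24 months
print(f"equilibrium where deletes match ingress: "
      f"{INGRESS_TB_PER_MONTH / DELETE_FRACTION:.1f} TB")    # 20.0 TB
```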

More info as always here: Realistic earnings estimator

I am not sure what the value of these estimates is, as I don't know how long it will take to fill up the amount of space I put into the estimate. I think I need to take a different approach with this setup and forget about it for a year; then we will see. In order to do this successfully I need to set up proper monitoring and alerts.

Has anyone here set up monitoring and alerts for Storj running on QNAP as a manual setup, not through the Storj app? I am looking to set up the following:

  1. A configurable threshold: n errors within a certain time window in the container logs trigger an alert (a rough sketch is at the end of this post).
  2. A node-down alert.
  3. Automatic deployment of new versions. Unfortunately the watchtower Docker container doesn't work on QNAP, so how can I automate this process?
  4. A way to recover quickly in case of disk failure; it's a single disk, not RAID.

Is there anything else I need to set up? I just want to be a passive node operator who looks at the dashboard once in a while.
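For item 1, something like this would be a starting point. It is a rough sketch only: the container name and the alert hook are placeholders to adapt to QNAP.

```python
# Rough sketch for item 1: every few minutes, pull the last N minutes of
# container logs and alert if the ERROR count crosses a threshold. The
# container name and send_alert() are placeholders -- wire the alert up to
# whatever works on QNAP (email, webhook, push service, ...).
import subprocess
import time

CONTAINER = "storagenode"   # assumed docker container name
WINDOW_MINUTES = 10
MAX_ERRORS = 5

def send_alert(message: str) -> None:
    # placeholder: replace with email/webhook/push notification
    print("ALERT:", message)

def error_count() -> int:
    logs = subprocess.run(
        ["docker", "logs", "--since", f"{WINDOW_MINUTES}m", CONTAINER],
        capture_output=True, text=True,
    )
    # the storagenode writes its log to stderr; check both streams to be safe
    combined = logs.stdout + logs.stderr
    return sum(1 for line in combined.splitlines() if "ERROR" in line)

while True:
    n = error_count()
    if n >= MAX_ERRORS:
        send_alert(f"{n} ERROR lines in the last {WINDOW_MINUTES} minutes")
    time.sleep(WINDOW_MINUTES * 60)
```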