118UWp satellite seems to be down

Not sure if this is expected but it looks like 118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW is down?
Can somebody confirm please?
I noticed this as traffic dropped to ~0 on my SN.

I feel that new code is coming on this satellite… :laughing:

I think this satellite is updating right now.

Same here…

Receiving the same errors:

ERROR orders.118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW failed to settle orders {"error": "order: unable to connect to the satellite: rpccompat: connection error: desc = \"transport: error while dialing: dial tcp 78.94.240.189:7777

Same errors since 14:30 CEST
The HDDs are so quiet :zzz: :shushing_face:

Same here, no connection to satellite

Port 28967 reports as open. Logs:
2019-10-05T08:25:02.996-0400 INFO Configuration loaded from: C:\Program Files\Storj\Storage Node\config.yaml

2019-10-05T08:25:03.017-0400 INFO Operator email: jgb211@icloud.com

2019-10-05T08:25:03.017-0400 INFO operator wallet: 0xCAEdd81ae6bd22C4Fb3943c7C29D52CDDC31caA2

2019-10-05T08:25:03.379-0400 INFO version running on version v0.22.0

2019-10-05T08:25:03.451-0400 INFO db.migration.0 Initial setup

2019-10-05T08:25:03.452-0400 INFO db.migration.1 Network Wipe #2

2019-10-05T08:25:03.453-0400 INFO db.migration.2 Add tracking of deletion failures.

2019-10-05T08:25:03.453-0400 INFO db.migration.3 Add vouchersDB for storing and retrieving vouchers.

2019-10-05T08:25:03.453-0400 INFO db.migration.4 Add index on pieceinfo expireation

2019-10-05T08:25:03.454-0400 INFO db.migration.5 Partial Network Wipe - Tardigrade Satellites

2019-10-05T08:25:03.454-0400 INFO db.migration.6 Add creation date.

2019-10-05T08:25:03.454-0400 INFO db.migration.7 Drop certificate table.

2019-10-05T08:25:03.455-0400 INFO db.migration.8 Drop old used serials and remove pieceinfo_deletion_failed index.

2019-10-05T08:25:03.455-0400 INFO db.migration.9 Add order limit table.

2019-10-05T08:25:03.456-0400 INFO db.migration.10 Optimize index usage.

2019-10-05T08:25:03.456-0400 INFO db.migration.11 Create bandwidth_usage_rollup table.

2019-10-05T08:25:03.456-0400 INFO db.migration.12 Clear Tables from Alpha data

2019-10-05T08:25:03.458-0400 INFO db.migration.13 Free Storagenodes from trash data

2019-10-05T08:25:03.458-0400 INFO db.migration.14 Free Storagenodes from orphaned tmp data

2019-10-05T08:25:03.458-0400 INFO db.migration.15 Start piece_expirations table, deprecate pieceinfo table

2019-10-05T08:25:03.459-0400 INFO db.migration.16 Add reputation and storage usage cache tables

2019-10-05T08:25:03.459-0400 INFO db.migration.17 Create piece_space_used table

2019-10-05T08:25:03.460-0400 INFO db.migration.18 Drop vouchers table

2019-10-05T08:25:03.460-0400 INFO db.migration.19 Add disqualified field to reputation

2019-10-05T08:25:03.461-0400 INFO db.migration.20 Empty storage_usage table, rename storage_usage.timestamp to interval_start

2019-10-05T08:25:03.461-0400 INFO db.migration.21 Create satellites table and satellites_exit_progress table

2019-10-05T08:25:03.462-0400 INFO db.migration.22 Vacuum info db

2019-10-05T08:25:03.474-0400 INFO db.migration.23 Split into multiple sqlite databases

2019-10-05T08:25:04.512-0400 INFO db.migration.24 Drop unneeded tables in deprecatedInfoDB

2019-10-05T08:25:04.580-0400 INFO db.migration.25 Remove address from satellites table

2019-10-05T08:25:04.581-0400 INFO db.migration Database Version {"version": 25}

2019-10-05T08:25:04.582-0400 INFO contact:chore Storagenode contact chore starting up

2019-10-05T08:25:04.582-0400 INFO bandwidth Performing bandwidth usage rollups

2019-10-05T08:25:04.582-0400 INFO Node 12asLxvZnzfxreUNCQ8acTBUAC5k5Aour4162tdY8m5hZanhUN2 started

2019-10-05T08:25:04.582-0400 INFO Public server started on [::]:28967

2019-10-05T08:25:04.582-0400 INFO Private server started on 127.0.0.1:7778

2019-10-05T08:25:04.594-0400 INFO piecestore:monitor Remaining Bandwidth {"bytes": 2000000000000}

2019-10-05T08:25:04.646-0400 INFO version running on version v0.22.0

Dashboard has gone from offline to last contact 20 minutes ago, with the following error in the logs:
2019-10-05T09:00:22.866-0400 ERROR contact:chore pingSatellites failed {"error": "rpccompat: connection error: desc = \"transport: error while dialing: dial tcp 78.94.240.189:7777: connectex: No connection could be made because the target machine actively refused it.\"", "errorVerbose": "rpccompat: connection error: desc = \"transport: error while dialing: dial tcp 78.94.240.189:7777: connectex: No connection could be made because the target machine actively refused it.\"\n\tstorj.io/storj/pkg/rpc.Dialer.dial:29\n\tstorj.io/storj/pkg/rpc.Dialer.DialAddressID:101\n\tstorj.io/storj/storagenode/contact.(*Chore).pingSatellites.func1:78\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}


Perhaps it’s related to the

It appears to be down for me too; I could not ping the address.

well, looks like the weekend plans got changed for S-b :upside_down_face:


seems we’re back again :wink:

You may want to remove the email address and wallet identification from your post…

My node still tells me that the connection to this satellite is down. The rest works just fine.

Confirmed. The satellite crashed because of too many open connections. It looks like the repair service is causing it. Developers are looking into it. Meanwhile we might have to restart the satellite a few more times.
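For context on that failure mode: without a cap, every incoming connection gets its own goroutine and file descriptor, which can exhaust the process. A common Go pattern for capping in-flight connections (a generic concurrency sketch, not the actual satellite code or fix) is a buffered-channel semaphore:

```go
package main

import (
	"fmt"
	"sync"
)

// handleAll simulates handling `jobs` incoming requests while capping
// concurrent work at maxConns using a buffered channel as a semaphore.
// Generic illustration only; not taken from the satellite's source.
func handleAll(jobs, maxConns int) int {
	sem := make(chan struct{}, maxConns)
	var (
		wg      sync.WaitGroup
		mu      sync.Mutex
		handled int
	)
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // blocks while maxConns slots are in use
			defer func() { <-sem }() // release the slot when done
			mu.Lock()
			handled++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return handled
}

func main() {
	fmt.Println(handleAll(10, 3)) // prints 10
}
```

All ten requests still complete; they just wait for a slot instead of piling up unboundedly.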


Yeah, confirmed, same here. No traffic from this satellite (which is responsible for 99.9% of my traffic).


Still no traffic… any news?

Traffic is generated by people, not computers. Please keep your node online.


Fixed and deployed. Issue solved.


Everything is back to normal. Thanks guys!