Your Storage Node on the us-central-1 satellite has been disqualified and can no longer host data on the network

Please search for failed audits first

You must have had 40 consecutive failed audits; otherwise the audit score would still be greater.

Not exactly: 40 consecutive failed audits is sufficient to be disqualified, but that's just the fastest way. They don't have to be consecutive. You can also be disqualified by, e.g., failing 100 audits with lots of passing audits in between.


Any failure of more than 4% of audits over time would lead to suspension/disqualification. The higher the percentage that fail the fewer audits are needed. 40 at 100% failures is the fastest way to get there. At 4% failures it would take about 3000 audits.
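The math above can be sketched with a toy model of an exponentially weighted reputation score. The parameters below (lambda = 0.999, weight = 1, disqualification threshold = 0.96) are assumptions chosen so that the model reproduces the ~40-consecutive-failure figure quoted above; the satellite's real configuration may differ.

```python
# Toy model of an exponentially weighted audit reputation score.
# LAMBDA, W and DQ_THRESHOLD are assumptions, not official satellite values.
LAMBDA = 0.999       # forgetting factor for old audit results
W = 1.0              # weight of each new audit result
DQ_THRESHOLD = 0.96  # score below this means disqualification

def failures_until_dq():
    # Start from a long-passing node: "success" evidence converges
    # to W / (1 - LAMBDA) and "failure" evidence to 0.
    alpha = W / (1 - LAMBDA)
    beta = 0.0
    n = 0
    while alpha / (alpha + beta) >= DQ_THRESHOLD:
        alpha *= LAMBDA           # decay the success evidence
        beta = beta * LAMBDA + W  # decay old failures, add the new one
        n += 1
    return n

print(failures_until_dq())
```

Under these assumed parameters the score after n consecutive failures is exactly LAMBDA**n, so the loop exits at roughly 41 failures; at only a 4% failure rate the score instead hovers near the 0.96 threshold, which is why thousands of audits are needed to cross it.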


Hello,

I'm opening this topic because I received this message, and the same by email:
Your node has been disqualified on Tue, 23 Aug 2022 17:21:31 GMT. If you have any questions regarding this please check our Node Operators thread on Storj forum.

But when I check this satellite I see:
Suspension Score: 100 %
Audit Score: 95.69 %
Online Score: 95.82 %

Is it a bug?

Hello @Aniel ,
Welcome to the forum!

see

thanks for the information, I ran docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed -c but it reported no errors.

Maybe it's due to an internet outage from my Internet Service Provider (ISP) that lasted all day on the 23rd.

@Aniel
What date does the command show?

docker logs storagenode 2>&1 | head -n 2

And how many months old is your SN (storage node)?

The node was started 2019-11-04. This node is a Windows GUI install; can you give the command with the correct syntax for Windows?
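The thread doesn't answer the Windows question directly. On a Windows GUI node the log is a plain file (commonly storagenode.log under the install directory, though the path can differ), so instead of docker logs you can count failed audit/repair lines with PowerShell's Select-String, or with a short portable script like this sketch:

```python
import re

def count_failed_audits(log_path):
    """Count failed GET_AUDIT / GET_REPAIR lines in a storagenode log file."""
    pattern = re.compile(r"GET_AUDIT|GET_REPAIR")
    count = 0
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if pattern.search(line) and "failed" in line:
                count += 1
    return count

# Example (the path is the usual Windows GUI default, but verify yours):
# count_failed_audits(r"C:\Program Files\Storj\Storage Node\storagenode.log")
```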

Sorry for the late reply. There are a lot of errors; I don't know why.

How to repair it?

Unfortunately it seems you lost data, since there are lots of file does not exist errors in your logs for GET_REPAIR traffic. The disqualification seems justified to me. I know it sucks, but there is not a lot you can do. You can keep the node running on other satellites if those are fine still. I do recommend checking your file system for errors. But the disqualification on this satellite is unfortunately permanent.


It looks like the problem is with us1.storj.io:7777 only.
I see the same here; very strange:


IMO it shouldn't be possible to be online >=99.5% on all satellites except just one, should it?

Hello @jacek ,
Welcome to the forum!

As you can see, the audit score is dropping on the other satellites too. So unfortunately your node has managed to lose data, and it could be disqualified on the other satellites as well.

But this is what I get:
:~ $ docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep started -c
0
:~ $ docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep downloaded -c
0
:~ $ docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed -c
0

You need logs from before you re-created the container. If you didn't redirect them to a file, you likely don't have the logs with the errors anymore.
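For reference, log redirection is configured with the log.output option in config.yaml. This is a sketch; the path below is a placeholder, and on a docker node it should point somewhere on the mounted volume so the file survives container re-creation:

```yaml
# config.yaml: persist logs to a file instead of the container's stdout.
# The path is a placeholder; keep it on a persistent/mounted volume.
log.output: "/app/config/node.log"
```

Restart the node afterwards for the change to take effect.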

I changed the log settings in config.yaml.
You're right, the failed count = 36.

But I don't know how to repair them. I checked, and all of them look like:
2022-08-29T03:03:32.861Z ERROR piecestore download failed {"Process": "storagenode", "Piece ID": "XXXX", "Satellite ID": "XXXX", "Action": "GET_REPAIR", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:546\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
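Each such line carries a JSON payload after the timestamp and level, so rather than just counting lines you can check how many distinct pieces are actually missing (repeated repair attempts for the same piece would inflate a raw grep count). A sketch, assuming the log format shown above:

```python
import json

def distinct_missing_pieces(lines):
    """Collect distinct Piece IDs from 'file does not exist' download failures."""
    pieces = set()
    for line in lines:
        # The JSON payload starts at the first '{' on the line.
        start = line.find("{")
        if start == -1 or "file does not exist" not in line:
            continue
        try:
            fields = json.loads(line[start:])
        except json.JSONDecodeError:
            continue
        if fields.get("Action") in ("GET_AUDIT", "GET_REPAIR"):
            pieces.add(fields.get("Piece ID"))
    return pieces
```

Feeding the whole log through this gives the number of unique lost pieces, which is a better measure of damage than the raw failure count.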

I checked the hdd and it's "ok":
Filesystem Size Used Avail Use% Mounted on
14T 5.4T 8.2T 40% /home/pi/StorjStorage_12To

I believe this means your node accepted this piece in the past, but now it seems to be missing, as the satellite is attempting to get the piece for repair.

How many GET_REPAIR tries are there going to be for the same piece?

Is there any way to restore this piece?

A repair of the same piece becomes less likely every time it is repaired, because the pieces on reliable nodes stick around while those on less reliable nodes get moved to different nodes. So over time, more of the segment ends up stored on reliable nodes.


August 2019, no problem before this one, and now 0 traffic. It looks like old nodes are being removed, or the system is shutting down.