Your Storage Node on the us-central-1 satellite has been disqualified and can no longer host data on the network

Hello @Aniel,
Welcome to the forum!

see

Thanks for the information. I ran

docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed -c

but it reported no errors.

Maybe it's due to an internet outage from my Internet Service Provider (ISP) that lasted all day on the 23rd.

@Aniel
What date does the command show?

docker logs storagenode 2>&1 | head -n 2

And how many months old is your SN?

The node was started on 2019-11-04. This node is a Windows GUI install; can you give the command with the correct syntax for Windows?
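A Windows GUI node writes its log to a file rather than to docker, so a PowerShell equivalent could look something like this (the path below is the usual default install location; adjust it if your install differs):

(Get-Content "C:\Program Files\Storj\Storage Node\storagenode.log" | Where-Object { $_ -match "GET_AUDIT|GET_REPAIR" -and $_ -match "failed" }).Count

This counts failed audit/repair downloads, the same way the docker/grep pipeline above does.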

Sorry for the late reply. There are a lot of errors; I don't know why.

How to repair it?

Unfortunately it seems you lost data, since there are lots of "file does not exist" errors in your logs for GET_REPAIR traffic. The disqualification seems justified to me. I know it sucks, but there is not a lot you can do. You can keep the node running on other satellites if those are still fine. I do recommend checking your file system for errors. But the disqualification on this satellite is unfortunately permanent.
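For the file system check on a Windows GUI node, a rough sketch would be something like the following from an elevated PowerShell prompt (X: is only a placeholder for the drive holding the node data, and the service name assumes the default install):

Stop-Service storagenode
chkdsk X: /f
Start-Service storagenode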


It looks like the problem is with us1.storj.io:7777 only.
It's the same here - very strange:


IMO it shouldn't be possible to have an online score >=99.5% on all satellites except just one, should it?

Hello @jacek,
Welcome to the forum!

As you can see, the audit score is dropping on the other satellites too. So unfortunately your node has managed to lose data, and it could be disqualified on the other satellites as well.

But I get this output:
:~ $ docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep started -c
0
:~ $ docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep downloaded -c
0
:~ $ docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed -c
0

You need to have logs from before you re-created the container. If you didn't redirect them to a file, you likely don't have the logs with the errors anymore.
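If you want to keep logs across container re-creations, one common approach is to redirect them to a file. For example (the file name here is only an example; in the usual docker setup /app/config maps to your node's config location on the host), set this in config.yaml:

log.output: "/app/config/node.log"

and then restart the container so it takes effect:

docker restart -t 300 storagenode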

I changed the log settings in config.yaml.
That’s right = 36

But I don't know how to repair them. I checked them, but all of them look like this:
2022-08-29T03:03:32.861Z ERROR piecestore download failed {"Process": "storagenode", "Piece ID": "XXXX", "Satellite ID": "XXXX", "Action": "GET_REPAIR", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:546\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

I checked the HDD and it's "ok":
Filesystem  Size  Used  Avail  Use%  Mounted on
14T  5,4T  8,2T  40%  /home/pi/StorjStorage_12To
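Note that df only shows capacity and usage, not whether individual piece files are intact. To get a rough idea of how many distinct pieces those "file does not exist" errors refer to, something like this should work (assuming the log format shown above; point it at your redirected log file instead of docker logs if needed):

docker logs storagenode 2>&1 | grep GET_REPAIR | grep "file does not exist" | grep -o '"Piece ID": "[^"]*"' | sort -u | wc -l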

I believe this means your node accepted this file in the past, but now it seems to be missing as the satellite is attempting to get the piece for repair.

How many GET_REPAIR tries are there going to be for the same piece?

Is there any way to restore this piece?

A repair of the same piece becomes less likely every time it is repaired, because the pieces on reliable nodes stick around and those on less reliable nodes get moved to different nodes. So over time, more of the segment is stored on more reliable nodes.


August 2019, no problem before this one, and now 0 traffic. It feels like old nodes are being removed or the system is shutting down.

This means data loss. Before the change in the audit initial values (which were reset for everyone), your node might have survived because the threshold was lower - 60%.

But in this case we can't do anything. So I must turn it off and create a new one. Deleting a 3-year-old node...


Yeah, I got the same for us1 about a week ago. :frowning:

If your node has not been disqualified on all satellites, you may continue to run it for the remaining satellites.

Hello, thanks all for your help.

This morning, no problem anymore: Audit is back to 100%.
(Maybe the Storj team detected a bug, so they reset all statuses on this node.)
