Your node was disqualified

thank you.

Is there a way to keep the other satellites and renew these 2 satellites, or do I have to start over from scratch with all the satellites?

I am opted in for zkSync

Try to figure out why your node failed audits. If this is a permissions issue, try to fix it. If everything looks right (no errors for GET_AUDIT and GET_REPAIR), then it should just work.
The satellites that disqualified your node will not reinstate it once the audit score has dropped below 60%; they will not trust your node anymore. The remaining satellites will keep working with your node and paying you while the audit score stays above 60%.
However, it's up to you whether to leave it as is or start from scratch.
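If you want a quick way to check, a filter like the one below should surface failed audits and repairs in the current log (a minimal sketch, assuming your container is named storagenode as in the default setup):

docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed

Any "download failed" lines for those two actions are the ones that affect your audit score.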

What's your recommendation?

I’ve been online since March 2021 and other nodes look good.

The disqualification happened because of failed audits, not downtime.
Try searching the current logs for errors: https://support.storj.io/hc/en-us/articles/4403035941780-Why-is-my-node-disqualified-
And the other scores don't look good either: the US2, US1, and EU1 satellites have failed audits as well. So it seems data loss has happened, or maybe there is a hardware issue (the disk partially hanging).

Perhaps it's worth stopping and removing the container, then checking your disk for errors and fixing them. After that you can start your node again. If the data is available, the audit score should recover. If it's not, the score will fluctuate depending on the percentage of loss, but it may never reach 100% again.
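A rough sequence for that, assuming a Linux host, a container named storagenode, and the data disk mounted at /mnt/storj from /dev/sdb1 (replace with your actual mount point and device; the disk must be unmounted before fsck):

docker stop -t 300 storagenode
docker rm storagenode
sudo umount /mnt/storj
sudo fsck -f /dev/sdb1
sudo mount /dev/sdb1 /mnt/storj

Then start the node again with your usual docker run command.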

Hi Alexey,

My name is Samit. One of my nodes is disqualified. Below is the latest log I got from Docker.

Please assist me to resolve this issue.

Hi @samitjaiswal,
It's not possible to recover from disqualification. Can you please check the log file for the same Piece IDs to see if there is any more information to help diagnose the issue?


Any specific command to do that? Or just "docker logs storagenode"?

You can filter logs:

docker logs storagenode 2>&1 | grep "put-here-piece-id" 

Replace put-here-piece-id with the Piece ID from your screenshot.

P.S. For logs it's better to post the text from the log instead of a screenshot of the text :slight_smile:

Actually, below are the latest log entries for the piece. I have never interrupted the process, and I have dedicated hardware for the node, yet one of my satellites was disqualified.
Please assist with this.

$ docker logs storagenode 2>&1 | grep NM7I3AOFX3E7NVG6M5HDTU56GGA34E6UOQRIE3255AUFRRKILAUQ
2022-05-26T18:15:29.597Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "NM7I3AOFX3E7NVG6M5HDTU56GGA34E6UOQRIE3255AUFRRKILAUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-05-26T18:50:09.384Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "NM7I3AOFX3E7NVG6M5HDTU56GGA34E6UOQRIE3255AUFRRKILAUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "error": "use of closed network connection", "errorVerbose": "use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:352\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:620\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}

With another piece I get the issue below.


$ docker logs storagenode 2>&1 | grep KXABXN4YIPUGZDFMIWY63B5FPHTGIQTTLPKULTNMW2UJWSIHGU3Q
2022-05-26T18:08:27.813Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "KXABXN4YIPUGZDFMIWY63B5FPHTGIQTTLPKULTNMW2UJWSIHGU3Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-05-26T18:45:09.939Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "KXABXN4YIPUGZDFMIWY63B5FPHTGIQTTLPKULTNMW2UJWSIHGU3Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->167.235.21.9:56486: write: broken pipe", "errorVerbose": "write tcp 172.17.0.3:28967->167.235.21.9:56486: write: broken pipe\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:352\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:620\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}

As you can see, the node was not able to provide a piece even after 35 minutes. The default timeout is 5 minutes, with three retries for each piece. If the piece is still not provided, the audit is considered failed. Too many failed audits and the node becomes disqualified.

This could be a problem with the disk: either the piece is not readable, or your disk was too busy with something else.
The typical way to run into this problem is to use a foreign filesystem for the OS. For example, using NTFS on Linux or ext4 on Windows. These do not work at normal speed; they sometimes get stuck on reads or writes, or even corrupt data.
The other way is to use network filesystems like NFS or SMB (CIFS); they are not supported for a reason.
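If you are not sure what your storage location actually sits on, you can check it like this (assuming a Linux host and /mnt/storj as the mount point, adjust the path to yours):

df -T /mnt/storj

The Type column should show a native Linux filesystem such as ext4; values like fuseblk (NTFS via ntfs-3g), nfs, or cifs usually explain these kinds of failures.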