Your node was disqualified

Hi,

I got an email saying my node was disqualified, but it was online and working…

I don’t get it.

Any help on bringing it back online? I’ve been running a node for more than a year. I don’t understand…

Hi @hugosbnarciso
What does the log file show? What does the dashboard show? Which satellite is the disqualification for?
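For example (a sketch, assuming the default docker setup with a container named storagenode), something like this will show recent errors:

docker logs --tail 200 storagenode 2>&1 | grep -iE "error|fatal"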

Hi,

Saltlake, I believe.

picture attached

Yes, that’s saltlake. You can check the log file for any errors related to that satellite.

You can also use the node API to share the relevant audit detail:

http://localhost:14002/api/sno/satellite/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE
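If curl and jq are available on that machine, you can also pull just the audit section from the same endpoint (a sketch; adjust the port if you changed the dashboard address):

curl -s http://localhost:14002/api/sno/satellite/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE | jq .audits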

Hi, thanks for the help!

Here’s what’s in that link:

{“id”:“1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”,“storageDaily”:[{“atRestTotal”:5130170658606.723,“intervalStart”:“2022-05-01T00:00:00Z”},{“atRestTotal”:5089126136872.727,“intervalStart”:“2022-05-02T00:00:00Z”},{“atRestTotal”:5260321483232.459,“intervalStart”:“2022-05-03T00:00:00Z”},{“atRestTotal”:6293331467400.902,“intervalStart”:“2022-05-04T00:00:00Z”},{“atRestTotal”:5176217136792.229,“intervalStart”:“2022-05-05T00:00:00Z”},{“atRestTotal”:5073706998539.303,“intervalStart”:“2022-05-06T00:00:00Z”},{“atRestTotal”:5134166938755.259,“intervalStart”:“2022-05-07T00:00:00Z”},{“atRestTotal”:6523587037834.156,“intervalStart”:“2022-05-08T00:00:00Z”}],“bandwidthDaily”:[{“egress”:{“repair”:277628672,“audit”:5632,“usage”:1439641856},“ingress”:{“repair”:4852390912,“usage”:18085888},“delete”:0,“intervalStart”:“2022-05-01T00:00:00Z”},{“egress”:{“repair”:259349760,“audit”:3840,“usage”:1002538752},“ingress”:{“repair”:5116955904,“usage”:34430464},“delete”:0,“intervalStart”:“2022-05-02T00:00:00Z”},{“egress”:{“repair”:283134976,“audit”:5120,“usage”:1262924288},“ingress”:{“repair”:5347770880,“usage”:21976064},“delete”:0,“intervalStart”:“2022-05-03T00:00:00Z”},{“egress”:{“repair”:364004096,“audit”:1280,“usage”:1146039808},“ingress”:{“repair”:7063998720,“usage”:29228032},“delete”:0,“intervalStart”:“2022-05-04T00:00:00Z”},{“egress”:{“repair”:30273792,“audit”:2816,“usage”:39610368},“ingress”:{“repair”:634236416,“usage”:3366912},“delete”:0,“intervalStart”:“2022-05-05T00:00:00Z”},{“egress”:{“repair”:15739392,“audit”:512,“usage”:20578304},“ingress”:{“repair”:480955648,“usage”:3893504},“delete”:0,“intervalStart”:“2022-05-06T00:00:00Z”},{“egress”:{“repair”:15569920,“audit”:1280,“usage”:20127744},“ingress”:{“repair”:296709376,“usage”:2587392},“delete”:0,“intervalStart”:“2022-05-07T00:00:00Z”},{“egress”:{“repair”:14923520,“audit”:1536,“usage”:18087936},“ingress”:{“repair”:335988736,“usage”:2435328},“delete”:0,“intervalStart”:“2022-05-08T00:00:00Z”},{“egress”:{“repair”:5408256,“audit”:256,“usage”:5308416},“ingress”:{“repair”:100804096,“usage”:262144},“delete”:0,“intervalStart”:“2022-05-09T00:00:00Z”}],“storageSummary”:43680627858033.76,“bandwidthSummary”:30566988544,“egressSummary”:6220912128,“ingressSummary”:24346076416,“currentStorageUsed”:240564762496,“audits”:{“auditScore”:0.5987369392383782,“suspensionScore”:1,“onlineScore”:1,“satelliteName”:“saltlake.tardigrade.io:7777”},“auditHistory”:{“score”:1,“windows”:[{“windowStart”:“2022-04-09T00:00:00Z”,“totalCount”:16,“onlineCount”:16},{“windowStart”:“2022-04-09T12:00:00Z”,“totalCount”:27,“onlineCount”:27},{“windowStart”:“2022-04-10T00:00:00Z”,“totalCount”:26,“onlineCount”:26},{“windowStart”:“2022-04-10T12:00:00Z”,“totalCount”:24,“onlineCount”:24},{“windowStart”:“2022-04-11T00:00:00Z”,“totalCount”:24,“onlineCount”:24},{“windowStart”:“2022-04-11T12:00:00Z”,“totalCount”:29,“onlineCount”:29},{“windowStart”:“2022-04-12T00:00:00Z”,“totalCount”:25,“onlineCount”:25},{“windowStart”:“2022-04-12T12:00:00Z”,“totalCount”:27,“onlineCount”:27},{“windowStart”:“2022-04-13T00:00:00Z”,“totalCount”:32,“onlineCount”:32},{“windowStart”:“2022-04-13T12:00:00Z”,“totalCount”:25,“onlineCount”:25},{“windowStart”:“2022-04-14T00:00:00Z”,“totalCount”:31,“onlineCount”:31},{“windowStart”:“2022-04-14T12:00:00Z”,“totalCount”:30,“onlineCount”:30},{“windowStart”:“2022-04-15T00:00:00Z”,“totalCount”:32,“onlineCount”:32},{“windowStart”:“2022-04-15T12:00:00Z”,“totalCount”:20,“onlineCount”:20},{“windowStart”:“2022-04-16T00:00:00Z”,“totalCount”:27,“onlineCount”:27},{“windowStart”:“2022-04-16T12:00:00Z”,“to
talCount”:40,“onlineCount”:40},{“windowStart”:“2022-04-17T00:00:00Z”,“totalCount”:36,“onlineCount”:36},{“windowStart”:“2022-04-17T12:00:00Z”,“totalCount”:21,“onlineCount”:21},{“windowStart”:“2022-04-18T00:00:00Z”,“totalCount”:36,“onlineCount”:36},{“windowStart”:“2022-04-18T12:00:00Z”,“totalCount”:32,“onlineCount”:32},{“windowStart”:“2022-04-19T00:00:00Z”,“totalCount”:58,“onlineCount”:58},{“windowStart”:“2022-04-19T12:00:00Z”,“totalCount”:53,“onlineCount”:53},{“windowStart”:“2022-04-20T00:00:00Z”,“totalCount”:77,“onlineCount”:77},{“windowStart”:“2022-04-20T12:00:00Z”,“totalCount”:58,“onlineCount”:58},{“windowStart”:“2022-04-21T00:00:00Z”,“totalCount”:61,“onlineCount”:61},{“windowStart”:“2022-04-21T12:00:00Z”,“totalCount”:54,“onlineCount”:54},{“windowStart”:“2022-04-22T00:00:00Z”,“totalCount”:58,“onlineCount”:58},{“windowStart”:“2022-04-22T12:00:00Z”,“totalCount”:70,“onlineCount”:70},{“windowStart”:“2022-04-23T00:00:00Z”,“totalCount”:45,“onlineCount”:45},{“windowStart”:“2022-04-23T12:00:00Z”,“totalCount”:60,“onlineCount”:60},{“windowStart”:“2022-04-24T00:00:00Z”,“totalCount”:62,“onlineCount”:62},{“windowStart”:“2022-04-24T12:00:00Z”,“totalCount”:50,“onlineCount”:50},{“windowStart”:“2022-04-25T00:00:00Z”,“totalCount”:38,“onlineCount”:38},{“windowStart”:“2022-04-25T12:00:00Z”,“totalCount”:73,“onlineCount”:73},{“windowStart”:“2022-04-26T00:00:00Z”,“totalCount”:67,“onlineCount”:67},{“windowStart”:“2022-04-26T12:00:00Z”,“totalCount”:72,“onlineCount”:72},{“windowStart”:“2022-04-27T00:00:00Z”,“totalCount”:64,“onlineCount”:64},{“windowStart”:“2022-04-27T12:00:00Z”,“totalCount”:61,“onlineCount”:61},{“windowStart”:“2022-04-28T00:00:00Z”,“totalCount”:75,“onlineCount”:75},{“windowStart”:“2022-04-28T12:00:00Z”,“totalCount”:89,“onlineCount”:89},{“windowStart”:“2022-04-29T00:00:00Z”,“totalCount”:90,“onlineCount”:90},{“windowStart”:“2022-04-29T12:00:00Z”,“totalCount”:76,“onlineCount”:76},{“windowStart”:“2022-04-30T00:00:00Z”,“totalCount”:69,“onlineCount”:69},{“windowStart”:“2022-04-30T12:00:00Z”,“totalCount”:83,“onlineCount”:83},{“windowStart”:“2022-05-01T00:00:00Z”,“totalCount”:102,“onlineCount”:102},{“windowStart”:“2022-05-01T12:00:00Z”,“totalCount”:78,“onlineCount”:78},{“windowStart”:“2022-05-02T00:00:00Z”,“totalCount”:77,“onlineCount”:77},{“windowStart”:“2022-05-02T12:00:00Z”,“totalCount”:69,“onlineCount”:69},{“windowStart”:“2022-05-03T00:00:00Z”,“totalCount”:78,“onlineCount”:78},{“windowStart”:“2022-05-03T12:00:00Z”,“totalCount”:100,“onlineCount”:100},{“windowStart”:“2022-05-04T00:00:00Z”,“totalCount”:85,“onlineCount”:85},{“windowStart”:“2022-05-04T12:00:00Z”,“totalCount”:108,“onlineCount”:108},{“windowStart”:“2022-05-05T00:00:00Z”,“totalCount”:8,“onlineCount”:8},{“windowStart”:“2022-05-05T12:00:00Z”,“totalCount”:2,“onlineCount”:2},{“windowStart”:“2022-05-06T00:00:00Z”,“totalCount”:1,“onlineCount”:1},{“windowStart”:“2022-05-06T12:00:00Z”,“totalCount”:1,“onlineCount”:1},{“windowStart”:“2022-05-07T00:00:00Z”,“totalCount”:1,“onlineCount”:1},{“windowStart”:“2022-05-07T12:00:00Z”,“totalCount”:1,“onlineCount”:1},{“windowStart”:“2022-05-08T00:00:00Z”,“totalCount”:1,“onlineCount”:1},{“windowStart”:“2022-05-08T12:00:00Z”,“totalCount”:1,“onlineCount”:1},{“windowStart”:“2022-05-09T00:00:00Z”,“totalCount”:2,“onlineCount”:1}]},“priceModel”:{“EgressBandwidth”:2000,“RepairBandwidth”:1000,“AuditBandwidth”:1000,“DiskSpace”:150},“nodeJoinedAt”:“2021-03-31T14:43:48.425026Z”}

Something happened around 2022-05-05: the number of audits per window drops through the floor. Since the node was still detected as online, the issue must be data quality, so we need to check the log file for ERROR entries.

This is also confirmed by the recorded audit score:

{"auditScore":0.5987369392383782,"suspensionScore":1,"onlineScore":1
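If you want to see where the audits collapsed yourself (again assuming jq is installed), the most recent audit windows can be listed from the same endpoint:

curl -s http://localhost:14002/api/sno/satellite/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE | jq '.auditHistory.windows[-12:]'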


That’s odd, since the uptime (402h, about 16 days) reaches back well before the 4th and 5th of May…

and there was no issue at all with the server…

Like I say, uptime and being online are not the issue; it’s a data quality failure. The node was returning no data, corrupt data, or partial data.

I see… what are my options here? Start over? What about the other satellites?

H

I would check the log file for ERROR in case you are failing audits for other satellites too. The dashboard will show if the audit score is dropping for the others. Without checking for further data errors, the node could be disqualified on more satellites.
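For example (a sketch, assuming the default container name and that the logs have not been rotated away), you can count failed audit and repair requests per satellite:

docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed | grep -o '"Satellite ID": "[^"]*"' | sort | uniq -c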

The saltlake satellite is big, but you can successfully run quite a profitable node with only the other satellites.

Edited after @deathlessdd noted my mixup of saltlake with us2!

I beg to differ, saltlake is the main profitable sat.


Hi everyone,

Here are the logs; can someone help decipher them?

I used:

docker logs --tail 20 storagenode
2022-05-10T07:56:05.704Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "VQQOYLIJK2ZEEKBNVAE6AYTGZWY7FLBLQR7OCZ63OFREUXZF5UPA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 149504}
2022-05-10T07:56:07.670Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "2JFBJEY6TQET5ARYBQMSFXTDMOBTB53CPI62PVQONIUNWZ2YFCXA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:07.705Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "PQ3WLYHHP37XHHDW6QYMQOTXTMGQGHXTRLQGOTFNKR7MXGQF5G2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:08.027Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "AUHHWFCHCOPXAKWKY4IEMPSEEAGCXLD7JJJWZKIUFEMQFNORNMKQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 2048}
2022-05-10T07:56:08.064Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "37TS3QVG2XQZ3EZMTI2K55TH54BLKZGQVGDR75HHCXCEM6TEZPLA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:08.104Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "L3FGQEP4VCH4RV5BKOCFNKSFGOZRVF5XE4H7YNOIPUHET367JTDA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:09.548Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "RWLVWZGANBEJ4EBVT3T2QEAE4JWGDZ2OJWATGAPZDEHYOFIC7KOQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:09.973Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "4GFCQCR3ZW4SLMIL7PITV6P7A5EX3LKAMT7TNES2TEX4JGUKWADQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:10.199Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "FNFCJZ2PPE72T2FQ4UUG3TQEH5U5NZFO6FHBIK5PD7J5VPPBHVQQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 145408}
2022-05-10T07:56:10.627Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "KRRH6TIHY4AYLZ2H7TAQNL2DA7BN3YMIOF62OBU7K6VUB5MMGT6Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:11.703Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "6UCLXHTMV5VMITGFMTOUAZEHUHLWRVENJHHTXHSK6SE6ESS45BIA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:17.337Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "VZODU5U2XVL6MQ3PXIEBLASWUW2Y2SSD4TEXOTSSRL65B2AJXPYA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:18.138Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "DKHBOW5JSJPA5AF4OAFKFYRKZ2EZGP2XE3LQN5G2LBB64BDQAZGQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:18.182Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "SA7OTAOS2DABOID4X4HVDYC23VNLJQT5JDTQKGB77Z6HJVEKGLOA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:18.224Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "4PPZAECFPEDT5AEZI33OC55AYTCL7K4KXCZVZMK4RQF5ZNK7QWXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:18.262Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "Z4IJQQJ5BBWVNQHSCRJM5S432OP4T54OT3HJPCJVIEXPHJZRXFNQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:18.312Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "52GO3B4WKRSX3E266UAMC3I2RLSF7PQJL7O3OBA2DX23FSXEWNYA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:18.369Z	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "253PMUBYSIWT52RVITEXEBZMKDZ5VOGBTYC7SIZBVOF3Z6UZRCEA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 3644979661824}
2022-05-10T07:56:19.110Z	ERROR	piecestore	failed to add bandwidth usage	{"Process": "storagenode", "error": "bandwidthdb: database is locked", "errorVerbose": "bandwidthdb: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*bandwidthDB).Add:60\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1:723\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:435\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
2022-05-10T07:56:19.111Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "XPZ5SZ5OM24OXLXQDRW2TSEKYWKEYQJ7FWAKMYAM5KXLK72P77WQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}

These messages show piece uploads being started and finished. They are normal informational (INFO) entries.

This one is an indication that your disk is too busy to record the bandwidth usage in the database: the previous operation on this database has not finished yet.
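If you want to see how busy the disk actually is while the node runs (assuming the sysstat package is installed on the host), you can watch the %util column of:

iostat -dx 5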
How is your disk connected?
What’s the filesystem, and is it a RAID?
Is it a network-attached disk?
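On an Ubuntu host, something like this should answer those questions (the mount point below is only an example; use your actual storage path):

lsblk -o NAME,TRAN,FSTYPE,SIZE,MOUNTPOINT
df -hT /mnt/storagenode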

Hey Alexey, thanks for the help,

The disk is connected to an Ubuntu server via USB. Nothing else is connected to that USB controller.

Now I just got the message that I’m disqualified on the Europe satellite…

Thanks

This is an unfortunate case. It looks like significant damage or file loss/corruption on your disk, or the data is not readable.
Please search the log for errors related to GET_AUDIT and GET_REPAIR: https://support.storj.io/hc/en-us/articles/4403035941780-Why-is-my-node-disqualified-

Thanks Alexey, I’ll look into it. Do you think I’ll be able to get the nodes qualified again?
Or is it a loss?

H

No errors related to GET_AUDIT and GET_REPAIR when I do:

docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed

Disqualification is permanent and not reversible: if the audit score is below 60%, it will not recover.

If you re-created the container and have not redirected logs to a file, the previous logs were deleted with the container, so we cannot see the earlier errors…
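For the future, you can redirect the node’s logs to a file so they survive a container re-creation, for example by setting the documented log.output option in config.yaml and restarting the node (the path below is only an example inside the mounted config directory):

log.output: "/app/config/node.log"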

I did not re-create anything… it’s just empty.

This is all very odd. Nothing changed on my system (I wasn’t even in the country when this supposedly happened)…

What are my options here?

A node update by watchtower will re-create the container if the base image has changed.
You can check that with docker logs watchtower
If those logs are empty, you can also use this command:

docker events --since 2022-04-20T00:00:00Z
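If the event stream is too noisy, you can narrow it down to container events for the node (standard docker filters):

docker events --since 2022-04-20T00:00:00Z --until 2022-05-10T00:00:00Z --filter type=container --filter container=storagenode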

There is no solution for disqualification, unfortunately; if the audit score is below 60%, it’s permanent.
You can still run this node with the remaining satellites, if it is not disqualified on all of them.

If it’s disqualified on all satellites, then you can only start over with a newly generated identity, a new authorization token, and clean storage.
If you specify the same wallet for your new node, it could help you reach the Minimum Payout Threshold sooner.
I would also suggest backing up your current identity (even if it’s disqualified on all satellites) in case you change your mind and decide to opt in to zkSync or Polygon to receive your undistributed payout before reaching the Minimum Payout Threshold.