How can I find out why I got DQ?

Hello.

Today I got DQ on one satellite.

How can I find out why? Maybe it is somehow correlated with the migration?

Checked HDD, no errors.

I see the audit score has dropped.

I have the error log level set, but I do not see any related errors.

You can create a ticket and give your node ID so Alexey can check it internally and give you the reason for the disqualification.

Good point, thank you.


You grepped the log for AUDIT (uppercase), right?

That's where I saw my successes and failures when my node was disqualified.
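For example, something like this (a minimal sketch; the log path is an assumption, adjust it to wherever your node writes its log):

# failed vs. completed downloads for the GET_AUDIT action
grep GET_AUDIT /path/to/storagenode.log | grep -c "download failed"
grep GET_AUDIT /path/to/storagenode.log | grep -c "downloaded"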

Now I found three of these: hashstore: file does not exist

2025-08-13T11:45:32+03:00 ERROR piecestore download failed {"Piece ID": "K3XWJRTC6FVLCCBMGCNGWZPM74NQSERRVXRCJNIJLZGS6WUB7FFQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT", "Offset": 943104, "Size": 256, "Remote Address": "82.223.82.227:13164", "error": "hashstore: file does not exist", "errorVerbose": "hashstore: file does not exist\n\tstorj.io/storj/storagenode/hashstore.(*DB).Read:375\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).Reader:299\n\tstorj.io/storj/storagenode/piecestore.(*MigratingBackend).Reader:180\n\tstorj.io/storj/storagenode/piecestore.(*TestingBackend).Reader:105\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:690\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

I also found:

2025-08-28T19:13:57+03:00 ERROR piecestore could not get hash and order limit {"Piece ID": "73Z5VAU7KXUACV6JORBZWBCXFFXO5Z5SBBQI25ZJT54M4YBSM6OA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 4096, "Remote Address": "82.223.82.227:19383", "error": "footer length field too large: 64104 > 512", "errorVerbose": "footer length field too large: 64104 > 512\n\tstorj.io/storj/storagenode/piecestore.(*hashStoreReader).GetPieceHeader:392\n\tstorj.io/storj/storagenode/piecestore.pieceHashAndOrderLimitFromReader:16\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:711\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

2025-08-28T19:13:57+03:00 ERROR piecestore download failed {"Piece ID": "73Z5VAU7KXUACV6JORBZWBCXFFXO5Z5SBBQI25ZJT54M4YBSM6OA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "Offset": 0, "Size": 4096, "Remote Address": "82.223.82.227:19383", "error": "footer length field too large: 64104 > 512", "errorVerbose": "footer length field too large: 64104 > 512\n\tstorj.io/storj/storagenode/piecestore.(*hashStoreReader).GetPieceHeader:392\n\tstorj.io/storj/storagenode/piecestore.pieceHashAndOrderLimitFromReader:16\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:711\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

All the AUDIT problems I found are on US1, but the DQ is on EU1.

That usually means a file was actually lost - I have a few of these on nodes that I know have had hard power outages in the past.

I don't have a single one in my logs for the past month, and my guess would be that the file is actually corrupted. It sounds like the node rejected the piece it read due to a wrong header length.

Could perhaps indicate a faulty filesystem?

I checked the FS and it found no errors. Also, all these rows were about US1, not EU1.

Yes, I agree it's strange. Especially the fact that you don't see any AUDIT errors for EU1 :face_with_raised_eyebrow:

However, it does not rule out that the rust drive could be faulty and producing corrupted files.

So notice how the error is generated by piecestore? Alexey explained that the piecestore will check the hashstore if it can't find the piece in the normal blobs, so it just means missing data.

However, it is weird that your DQ was on EU1, not US1.

This is a lost piece; it was found neither in the piecestore nor in the hashstore (the hashstore check is the last one).

These are corrupted pieces.

I would suggest checking your data:

And here is another one: https://review.dev.storj.tools/c/storj/storj/+/11772

# clone the repository and build the blob-validation tool
git clone git@github.com:storj/storj.git && cd storj
go install ./cmd/is-valid-sj1-blob
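After go install, the binary lands in your Go bin directory (usually $HOME/go/bin). A rough sketch of running it over the piecestore blobs, assuming the tool takes blob file paths as arguments (check its help output first; the storage path is also an assumption):

# validate every .sj1 blob file under the storage directory (paths are assumptions)
find /path/to/storage/blobs -name '*.sj1' -exec "$HOME/go/bin/is-valid-sj1-blob" {} \;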

or using a similar method with Docker:
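A rough sketch of such a Docker run (the image, paths, and find invocation here are assumptions, not the original command; adjust them to your setup):

# build the tool inside a golang container and validate the mounted blobs (read-only)
docker run --rm -v /path/to/storage:/storage:ro golang:1.24 sh -c "git clone https://github.com/storj/storj.git && cd storj && go install ./cmd/is-valid-sj1-blob && find /storage/blobs -name '*.sj1' -exec /go/bin/is-valid-sj1-blob {} \;"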

It's not a good method. The result will be "lost or corrupted pieces" (if the audit score is 0.96 or less), with no further details available on the auditors' side. It also requires manual work from the support and dev teams.
Please do not suggest it next time; I would appreciate that.

The better method would be to check yourself:

  1. Why is my node disqualified? - Storj Docs

All the "lost or corrupted pieces" logs were about US1, but the DQ is on EU1.

I really do not understand how to adapt the Docker check method to the Windows GUI.

Ok, I just started this storagenode-checksum in the node directory; it looks like it's working.


You may have corrupted pieces on EU1 as well, but they are not logged so obviously.
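To check that, you could grep the whole log for the EU1 satellite ID instead of just for AUDIT (12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs is the commonly published EU1 ID, verify it against your dashboard; the log path is an assumption):

# every error involving the EU1 satellite
grep 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs /path/to/storagenode.log | grep ERROR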

Will this app fix anything, or does it only check for errors?

Is it for the piecestore or for the hashstore?

It's only for the piecestore, and it only checks. If something cannot be migrated to the hashstore, it will remain in the piecestore.

I do not think we have something similar for the hashstore, except the recovery of hashtables. But that also recovers only good data; the bad data will remain there and will likely be considered trash (because it will not have an index in the hashtables) and will be removed sooner or later.