You can already derive that. Only audits and GET_REPAIR requests impact the scores, and the only error you will see that impacts the audit score is "file does not exist". Everything else impacts the suspension score.
But that isn't the complete picture: if you deliver bad data, the transfer finishes just fine, yet the failure will still impact your audit score without any error on the node side. And if your node has trouble responding, you may not see any log entries at all, but depending on what happens it can hit your suspension or audit score.
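If you want to see which of the failed transfers in your own log could actually touch the scores, a rough sketch like the one below tallies failed downloads by Action. It assumes the default storagenode log format and a hypothetical ./node.log path, so adjust it for your setup.

```python
# Rough sketch: tally failed downloads by Action to see which ones can
# actually touch the audit/suspension scores (GET_AUDIT / GET_REPAIR)
# versus plain customer GETs.
# Assumes the default storagenode log format and a log file at ./node.log.
import re
from collections import Counter

action_re = re.compile(r'"Action":\s*"([A-Z_]+)"')
counts = Counter()

with open("node.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "download failed" not in line:
            continue
        match = action_re.search(line)
        if match:
            counts[match.group(1)] += 1

for action, count in counts.most_common():
    print(f"{action}: {count}")
```

Only the GET_AUDIT and GET_REPAIR buckets matter for the audit score; plain GET failures are what usually spams the log.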
How can I dig deeper and get to the bottom of that problem?
This has been a real pain point for me roughly every two months, so I'm trying to figure out how to solve it.
Right now I have about 1600+ error messages in the logs, and most of them (about 95%) are "unable to delete piece ..." with "pieces error: filestore error: file does not exist", referring to 2 missing files since the 6th of May on node5.
Are you saying it’s the same 2 pieces over and over again?
If that’s the case the impact may be limited, but that does suggest your node lost data. Have a look at this post to work around repeating errors like that.
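If you want to confirm whether it really is just those 2 pieces, a quick sketch like this (same assumptions as above: default log format, hypothetical ./node.log path) counts the distinct piece IDs behind those delete errors:

```python
# Quick check whether the "file does not exist" delete errors are the
# same few pieces repeating or many different ones.
# Assumes the default storagenode log format and a log file at ./node.log.
import re
from collections import Counter

piece_re = re.compile(r'"Piece ID":\s*"([^"]+)"')
pieces = Counter()

with open("node.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "file does not exist" not in line:
            continue
        match = piece_re.search(line)
        if match:
            pieces[match.group(1)] += 1

print(f"{sum(pieces.values())} errors across {len(pieces)} distinct piece IDs")
for piece_id, count in pieces.most_common(10):
    print(f"{count:6d}  {piece_id}")
```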
Please move/remove this if it's not in the right place; I'm not trying to hijack the thread.
I have been getting “piecestore download failed” spamming my logs for a long time. The reputation score is great.
I don't know if it's affecting my node, but I don't see this error spamming the logs on my other nodes. I can't seem to track down anything in the forums that offers a solution other than "ignore it".
ERROR piecestore download failed {"Process": "storagenode", "Piece ID": "{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}", "Satellite ID": "{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}", "Action": "GET", "Offset": 0, "Size": 8192, "Remote Address": "{xxx.xxx.xxx.xxx}", "error": "hashstore: file does not exist", "errorVerbose": "hashstore: file does not exist\n\tstorj.io/storj/storagenode/hashstore.(*DB).Read:359\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).Reader:298\n\tstorj.io/storj/storagenode/piecestore.(*MigratingBackend).Reader:180\n\tstorj.io/storj/storagenode/piecestore.(*TestingBackend).Reader:105\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:676\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
There is no solution/workaround for a lost piece. If the auditor, the repairer, or a customer needs it, they will ask your node for it (since your node is one of the segment holders); of course, libuplink will use the other remaining pieces to reconstruct the segment. And as long as the segment health is higher than the repair threshold, the satellite will do nothing.
When the segment does eventually require repair, the pointer to your node may be removed in the end.
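As a toy illustration of why a single lost piece usually changes nothing (the numbers below are made up, not the real production Reed-Solomon settings): a segment only needs k of its pieces to be reconstructed, and the satellite only queues a repair once the number of healthy pieces falls to the repair threshold.

```python
# Purely illustrative numbers, not the actual production RS settings.
k = 29                 # pieces needed to reconstruct the segment
repair_threshold = 52  # healthy pieces at/below which a repair is queued
healthy_pieces = 65    # pieces currently held by online nodes

healthy_after_loss = healthy_pieces - 1  # your node lost its one piece

print("still reconstructable:", healthy_after_loss >= k)                 # True
print("repair queued:        ", healthy_after_loss <= repair_threshold)  # False
```

Losing one piece only drops the health count by one; nothing happens until enough other nodes also lose their pieces or go offline that the threshold is reached.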
So that explains it spamming the log every 7 to 10 entries? Wouldn't it only show up in the logs when that piece is actually requested? Sometimes it is the same "Piece ID" repeated only a couple of times, but mostly it's many different piece IDs constantly spamming the log. I would think that many pieces that "do not exist" on my node would have drastically hurt my reputation score over the last year or so of this happening, but it hasn't!
I have also read on these forums that this error may be the result of customers canceling downloads, or of my node losing the race to nodes on faster internet connections (this node is on a basic home internet connection with only 10 Mb/s upload; my other node, which does NOT have these errors, is on a business connection with 500 Mb/s upload). Is there any truth to that? It would explain why it's been happening constantly for so many months in my home lab!
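One way to get a feel for that from your own log: canceled transfers are normally logged as "download canceled" (or with a "context canceled" error), while genuinely missing data shows up as "file does not exist". A rough tally along these lines (again assuming the default log format and a hypothetical ./node.log path) separates the two:

```python
# Rough split of download outcomes: lost races / client cancellations
# versus genuinely missing pieces versus other failures.
# Assumes the default storagenode log format and a log file at ./node.log.
from collections import Counter

reasons = Counter()

with open("node.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "piecestore" not in line:
            continue
        if "download canceled" in line or "context canceled" in line:
            reasons["canceled (lost race / client stopped)"] += 1
        elif "download failed" in line and "file does not exist" in line:
            reasons["failed: file does not exist"] += 1
        elif "download failed" in line:
            reasons["failed: other"] += 1

for reason, count in reasons.most_common():
    print(f"{reason}: {count}")
```

If the canceled bucket dominates, lost races on a slow uplink are the likely explanation; a large "file does not exist" bucket points at missing data instead.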
Is there anything I can run that will scrub my node for missing pieces once and for all? Or should I just live with/ignore it?
I think it only hurts the score if this error happens on audit or GET_REPAIR requests.
Normal downloads do not affect the score.
I am not sure, but I think we have also seen this error in the past when a file had already been deleted (by expiry or something similar) but was still being requested.