ERROR piecestore download failed *.sj1: no such file or directory

You can already derive that. Only audit and get repair requests impact those scores, and the only error you'll see that impacts the audit score is “file does not exist”. Everything else impacts the suspension score.

But that isn't the whole picture: if you deliver bad data, the transfer finishes just fine, yet the failure will impact your audit score without any error on the node side. If your node has trouble responding, you may not see any logs at all, but depending on what happens it can hit your suspension or audit score.
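If you want to check your own logs, something like this minimal sketch works (an assumption on my side: it expects the node log redirected to a plain-text file such as ./storagenode.log, in the format shown in this thread):

```python
# Minimal sketch: count download failures that can actually hurt the scores.
# Assumes logs were saved to ./storagenode.log in the plain-text format above.
from collections import Counter

AUDIT_ACTIONS = ('"Action": "GET_AUDIT"', '"Action": "GET_REPAIR"')

counts = Counter()
with open("storagenode.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "download failed" not in line:
            continue
        if "file does not exist" in line and any(a in line for a in AUDIT_ACTIONS):
            counts["missing file on GET_AUDIT/GET_REPAIR (audit score)"] += 1
        elif any(a in line for a in AUDIT_ACTIONS):
            counts["other failure on GET_AUDIT/GET_REPAIR (suspension score)"] += 1
        else:
            counts["failure on normal GET (no score impact)"] += 1

for label, n in counts.most_common():
    print(f"{n:6d}  {label}")
```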


How can I dig deeper and get to the bottom of this problem?
It becomes a real pain roughly every two months, so I'm trying to figure out how to solve it.

Right now I have 1600+ error messages in the logs, and most of them (about 95%) are
unable to delete piece... "pieces error: filestore error: file does not exist"
which mention the same 2 missing files on node5 since the 6th of May.

The remaining 5% are:
piecestore download failed ... "Action": "GET", "error": "pieces error
piecestore upload failed ... "Action": "PUT", "error": "unexpected EOF"
pieces:trash emptying trash failed ... "pieces error: filestore error: context canceled
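To see whether the 1600+ errors really trace back to just those 2 pieces, I'm counting them with a small script (a rough sketch; it assumes the log is saved as a plain-text file, e.g. ./storagenode.log):

```python
# Rough sketch: tally error categories and list the piece IDs behind
# "file does not exist" errors, to see how many distinct pieces are involved.
import re
from collections import Counter

piece_re = re.compile(r'"Piece ID":\s*"([A-Z2-7]+)"')

missing = Counter()   # piece IDs behind "file does not exist"
kinds = Counter()     # coarse error categories

with open("storagenode.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "ERROR" not in line:
            continue
        if "file does not exist" in line:
            kinds["file does not exist"] += 1
            m = piece_re.search(line)
            if m:
                missing[m.group(1)] += 1
        elif "unexpected EOF" in line:
            kinds["unexpected EOF"] += 1
        elif "context canceled" in line:
            kinds["context canceled"] += 1

print(kinds)
print("distinct missing pieces:", len(missing))
for piece, n in missing.most_common(10):
    print(f"{n:5d}  {piece}")
```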

I'm wary of moving from Windows to Debian because of my lack of skills in that environment.

Are you saying it’s the same 2 pieces over and over again?
If that's the case, the impact may be limited, but it does suggest your node lost data. Have a look at this post for a way to work around repeating errors like that.

Yep, the same recurring errors for the 2 missing pieces (the operation repeats every hour).
Thanks, I'll take a look.

Update:
It seems I already did that cleanup on this node (node5) about 2 months ago.

Thank you for the information, I will try it again.

It seems I had already replaced this one before, but it appeared again:

2022-05-06T07:42:06.121Z	ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
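For anyone else hitting this, the check I run before recreating the placeholder looks roughly like this. It's only a sketch: it assumes the usual blobs layout (lowercase piece ID, two-character subfolder, .sj1 extension), the storage path is hypothetical, and the placeholder trick at the end is the one described in the linked post, so verify it there before touching anything in the storage directory:

```python
# Hedged sketch: locate (and optionally stub out) a piece the collector keeps
# failing to delete. Layout assumption: blobs/<satellite folder>/<first 2 chars
# of lowercase piece ID>/<remaining chars>.sj1
from pathlib import Path

STORAGE_DIR = Path("/mnt/storj/storagenode/storage")   # assumption: adjust to your node
PIECE_ID = "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ"

name = PIECE_ID.lower()
relative = Path(name[:2]) / f"{name[2:]}.sj1"

for satellite_dir in (STORAGE_DIR / "blobs").iterdir():
    candidate = satellite_dir / relative
    if candidate.exists():
        print("found:", candidate)
        break
else:
    print("piece not found under any satellite folder:", relative)
    # Workaround described in the linked forum post (verify there first):
    # create an empty placeholder so the collector's delete call succeeds, e.g.
    # (STORAGE_DIR / "blobs" / "<satellite folder>" / relative).touch()
```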

Looks like the node (node5) has stopped tripping over those pieces:

2022-05-11T08:34:06.924Z	INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
2022-05-11T08:34:07.003Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ"}
2022-05-11T08:34:07.023Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "Z5E4GCMMNCJMLYP6DYPA26HE76NRT73MA4UMS5UO5ZEUQSHKG77A"}
2022-05-11T08:34:07.140Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QFMSS774W32NS3FTCXZW3SCF73BK5JYWIHBZRMQBYPDOCIAG7EKA"}
2022-05-11T08:34:07.232Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QM5DV2W54DIMANQFOKHJPN325VLRIEVATXHTE7I2NAWQKIR5MRMA"}
2022-05-11T08:34:07.305Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ZBHWCE5OATNEYNSLJS7X2KADDLYMZVKBHHQYGAVXTKCGPZN5FOTA"}
2022-05-11T08:34:07.308Z	INFO	collector	collect	{"Process": "storagenode", "count": 5}
2022-05-11T08:34:07.775Z	INFO	orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6	sending	{"Process": "storagenode", "count": 416}
2022-05-11T08:34:07.775Z	INFO	orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB	sending	{"Process": "storagenode", "count": 113}
2022-05-11T08:34:07.776Z	INFO	orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo	sending	{"Process": "storagenode", "count": 6}
2022-05-11T08:34:07.776Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	sending	{"Process": "storagenode", "count": 227}
2022-05-11T08:34:07.776Z	INFO	orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S	sending	{"Process": "storagenode", "count": 703}
2022-05-11T08:34:07.776Z	INFO	orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs	sending	{"Process": "storagenode", "count": 205}
2022-05-11T08:34:07.969Z	INFO	orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.241Z	INFO	orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.242Z	INFO	orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.627Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "AZDHBMYICMKC27JFIAXGGD672UDALU3UEFT72PK3SYVTGJMAASDA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR", "Size": 2319360}
2022-05-11T08:34:08.997Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	finished	{"Process": "storagenode"}
2022-05-11T08:34:09.949Z	INFO	orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6	finished	{"Process": "storagenode"}
2022-05-11T08:34:10.836Z	INFO	orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S	finished	{"Process": "storagenode"}

Please move/remove this if it's not in the right place; I'm not trying to hijack the thread.
I have been getting “piecestore download failed” spamming my logs for a long time. The reputation score is great.

I don't know if it's affecting my node, but I don't see my other nodes spamming the logs with this error. I can't seem to track down anything in the forums that offers a solution other than “ignore it”.

“ERROR piecestore download failed {“Process”: “storagenode”, “Piece ID”: “{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}”, “Satellite ID”: “{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}”, “Action”: “GET”, “Offset”: 0, “Size”: 8192, “Remote Address”: “{xxx.xxx.xxx.xxx}”, “error”: “hashstore: file does not exist”, “errorVerbose”: “hashstore: file does not exist\n\tstorj.io/storj/storagenode/hashstore.(*DB).Read:359\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).Reader:298\n\tstorj.io/storj/storagenode/piecestore.(*MigratingBackend).Reader:180\n\tstorj.io/storj/storagenode/piecestore.(*TestingBackend).Reader:105\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:676\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:302\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35”}”

There is no solution/workaround for a lost piece. If the auditor, the repairer, or a customer needs it, they will request it from your node (since your node is one of the segment holders); of course, libuplink will use the other remaining pieces to reconstruct the segment. As long as the segment health stays above the repair threshold, the satellite will do nothing.
When the segment eventually requires a repair, the pointer to your node may be removed in the end.


So that explains it spamming the log every 7 to 10 entries? Wouldn't it only show up in the logs when that piece is asked for? Sometimes the same “Piece ID” is repeated only a couple of times, but mostly it's many different piece IDs constantly spamming the log. I would think that so many pieces that “do not exist” on my node would have drastically hurt my reputation score over the last year or so of this happening, but it hasn't!

I have also read on these forums that this error may be the result of customers canceling downloads, or of my node losing the race to nodes on faster internet connections (this node is on a basic home internet connection with only 10 Mb/s upload; my other node, which does NOT have these errors, is on a business connection with 500 Mb/s upload). Is there any truth to any of that? It would explain why it has been happening constantly for so many months at my home lab!

Is there anything I can run that will scrub my node for missing pieces once and for all? Or should I just live with it / ignore it?
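In the meantime, to confirm the scores really aren't moving, I check them via the node's local dashboard API. This is just a sketch: it assumes the default dashboard port 14002 and the /api/sno/satellites endpoint, and the field names may differ between versions, in which case just print the raw JSON:

```python
# Quick sanity check (not official tooling): pull per-satellite scores from the
# node's local dashboard API and print them.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:14002/api/sno/satellites") as resp:
    data = json.load(resp)

for entry in data.get("audits", []):
    name = entry.get("satelliteName", entry.get("satelliteID", "?"))
    print(name,
          "audit:", entry.get("auditScore"),
          "suspension:", entry.get("suspensionScore"),
          "online:", entry.get("onlineScore"))
```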

I think it only matters if this error happens on audit or get_repair requests.
Normal downloads do not affect the score.
I'm not sure, but I think we also saw this error in the past when a file had already been deleted (for example by expiry) but was still being requested.

1 Like

No, that would end with “context canceled” or “download canceled”, not with “file not found”.