ERROR piecestore download failed *.sj1: no such file or directory

You can already derive that from the logs: only audit and repair downloads (GET_AUDIT / GET_REPAIR) impact those scores, and the only error you’ll see that impacts the audit score is "file does not exist". Everything else impacts the suspension score.

That picture isn’t complete, though: if you deliver bad data, the transfer finishes just fine, but the failed audit will still hit your audit score without any error on the node side. And if your node has trouble responding, you may not see any log entries at all, yet depending on what happened it can still hit your suspension or audit score.
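If you want to see how exposed your own node is, filtering failed downloads by Action and error text gives a quick picture: GET_AUDIT and GET_REPAIR failures with "file does not exist" are the audit-score risks, other failures on those actions point at the suspension score. A minimal sketch in Python, assuming the default storagenode log format where the structured fields sit as a JSON object at the end of each line (the log path is a placeholder):

```python
import json
import re
from collections import Counter

LOG_PATH = "storagenode.log"  # placeholder: point this at your node's log file

counts = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        if "download failed" not in line:
            continue
        # the structured fields are the JSON object at the end of the line
        match = re.search(r"\{.*\}", line)
        if not match:
            continue
        fields = json.loads(match.group(0))
        if fields.get("Action") not in ("GET_AUDIT", "GET_REPAIR"):
            continue
        if "file does not exist" in fields.get("error", ""):
            counts["audit-score risk (missing piece)"] += 1
        else:
            counts["suspension-score risk (other failure)"] += 1

for label, count in counts.items():
    print(f"{label}: {count}")
```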


How can I dig deeper and get to the bottom of this problem?
This has been a real pain roughly every two months, so I’m trying to figure out how to solve it.

Right now I have 1600+ error messages in the logs, and most of them (about 95%) are
unable to delete piece... "pieces error: filestore error: file does not exist"
referring to the same 2 missing pieces on node5 since the 6th of May.

The other 5% are:
piecestore download failed ... "Action": "GET", "error": "pieces error
piecestore upload failed ... "Action": "PUT", "error": "unexpected EOF"
pieces:trash emptying trash failed ... "pieces error: filestore error: context canceled
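(A breakdown like this can be pulled out of the log by tallying lines by error text; a rough sketch, assuming the default log file name, with the matched strings adjusted to whatever shows up in your own log:)

```python
from collections import Counter

LOG_PATH = "storagenode.log"  # placeholder: adjust to your node's log file

tally = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        if "ERROR" not in line:
            continue
        # group lines by a few known error texts
        if "file does not exist" in line:
            tally["unable to delete piece / file does not exist"] += 1
        elif "unexpected EOF" in line:
            tally["upload failed / unexpected EOF"] += 1
        elif "context canceled" in line:
            tally["context canceled"] += 1
        else:
            tally["other"] += 1

total = sum(tally.values()) or 1
for label, count in tally.most_common():
    print(f"{count:6d}  ({100 * count / total:4.1f}%)  {label}")
```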

I’m wary of moving from Windows to Debian because of my lack of skills in that environment.

Are you saying it’s the same 2 pieces over and over again?
If that’s the case, the impact may be limited, but it does suggest your node lost data. Have a look at this post to work around repeating errors like that.
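The gist of that workaround, as far as I understand it, is to drop a zero-byte placeholder where the collector expects the missing piece, so its delete can finally succeed and the error stops repeating. A rough sketch, assuming the usual blobs layout (the first two characters of the lowercased Piece ID as the subfolder, the rest as the file name with a .sj1 extension); the storage path and the satellite folder name are placeholders you need to fill in yourself, so double-check against the linked post before creating anything on disk:

```python
import os

# Placeholders - adjust to your node. Note the satellite folder under blobs/
# is NOT the base58 ID from the log; look up the folder name for your satellite.
STORAGE_DIR = r"D:\storagenode\storage"          # hypothetical storage location
SATELLITE_BLOB_DIR = "<satellite-blob-folder>"   # folder under blobs/ for that satellite
PIECE_ID = "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ"  # repeating Piece ID from the error lines

# assumed layout: blobs/<satellite>/<first 2 chars of lowercased piece id>/<rest>.sj1
key = PIECE_ID.lower()
piece_path = os.path.join(STORAGE_DIR, "blobs", SATELLITE_BLOB_DIR, key[:2], key[2:] + ".sj1")

if os.path.exists(piece_path):
    print("piece is already there:", piece_path)
else:
    os.makedirs(os.path.dirname(piece_path), exist_ok=True)
    # zero-byte placeholder; the collector only needs something it can delete
    open(piece_path, "wb").close()
    print("created placeholder:", piece_path)
```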

Yep, the same errors cycling over the same 2 missing pieces (the operation repeats every hour).
Thanks, I’ll look into it.

Update:
It looks like I already did that cleanup on this node (node5) about 2 months ago.

Thank you for the information, I will try to do it again.

It seems I had already replaced this one previously, but it appeared again:

2022-05-06T07:42:06.121Z	ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Looks like the node (node5) has stopped tripping over it:

2022-05-11T08:34:06.924Z	INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
2022-05-11T08:34:07.003Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ"}
2022-05-11T08:34:07.023Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "Z5E4GCMMNCJMLYP6DYPA26HE76NRT73MA4UMS5UO5ZEUQSHKG77A"}
2022-05-11T08:34:07.140Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QFMSS774W32NS3FTCXZW3SCF73BK5JYWIHBZRMQBYPDOCIAG7EKA"}
2022-05-11T08:34:07.232Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QM5DV2W54DIMANQFOKHJPN325VLRIEVATXHTE7I2NAWQKIR5MRMA"}
2022-05-11T08:34:07.305Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ZBHWCE5OATNEYNSLJS7X2KADDLYMZVKBHHQYGAVXTKCGPZN5FOTA"}
2022-05-11T08:34:07.308Z	INFO	collector	collect	{"Process": "storagenode", "count": 5}
2022-05-11T08:34:07.775Z	INFO	orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6	sending	{"Process": "storagenode", "count": 416}
2022-05-11T08:34:07.775Z	INFO	orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB	sending	{"Process": "storagenode", "count": 113}
2022-05-11T08:34:07.776Z	INFO	orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo	sending	{"Process": "storagenode", "count": 6}
2022-05-11T08:34:07.776Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	sending	{"Process": "storagenode", "count": 227}
2022-05-11T08:34:07.776Z	INFO	orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S	sending	{"Process": "storagenode", "count": 703}
2022-05-11T08:34:07.776Z	INFO	orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs	sending	{"Process": "storagenode", "count": 205}
2022-05-11T08:34:07.969Z	INFO	orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.241Z	INFO	orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.242Z	INFO	orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.627Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "AZDHBMYICMKC27JFIAXGGD672UDALU3UEFT72PK3SYVTGJMAASDA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR", "Size": 2319360}
2022-05-11T08:34:08.997Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	finished	{"Process": "storagenode"}
2022-05-11T08:34:09.949Z	INFO	orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6	finished	{"Process": "storagenode"}
2022-05-11T08:34:10.836Z	INFO	orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S	finished	{"Process": "storagenode"}