ERROR piecestore download failed *.sj1: no such file or directory

On your node - no. It will be recovered by the repair service to other nodes when the number of healthy pieces drops to the repair threshold.

You can run chkdsk x: /f first. If you suspect that there are bad blocks, then you can try to run chkdsk x: /f /r
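A minimal sketch of how that could look on this setup, assuming the container is named storagenode5 and the data disk is M: (adjust both to your own names); run it from an elevated PowerShell prompt, with the node stopped first so nothing writes to the volume during the check:

# stop the node gracefully before touching the disk
docker stop -t 300 storagenode5
# basic filesystem check
chkdsk M: /f
# only if you suspect bad blocks (takes much longer)
chkdsk M: /f /r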

I’ve checked with CHKDSK /F and the system didn’t find any problems.

After CHKDSK the node doesn’t start:
docker: Error response from daemon: invalid mount config for type "bind": stat /run/desktop/mnt/host/m/identity/storagenode5: invalid argument.

You need to stop and remove the container and run it back with all your parameters.
If the disk was temporarily dismounted (as in the case of chkdsk), docker stop will not be enough; you need to re-create the container.

It’s after:
docker stop storagenode5
chkdsk M: /F
docker start storagenode5
error

docker stop storagenode5
docker rm storagenode5
docker run -d --restart unless-stopped...
error

The disk M: is visible in Windows and lets me in.

Perhaps the Windows version requires a restart of the Docker engine after the disk check. Weird.
Please try to restart the Docker engine from Docker Desktop.
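If the Docker Desktop UI is not handy, restarting the engine’s Windows service from an elevated PowerShell prompt should have the same effect. A sketch, assuming the default service name com.docker.service that Docker Desktop installs (check with Get-Service *docker* if unsure):

# restart the Docker Desktop backend service (elevated prompt required)
Restart-Service -Name com.docker.service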

I’ve restarted the whole PC and the docker run... command went through.

But there is still no solution for this error. Should I leave it in its current state or do something?

Do I need to create a dummy file for these files to stop this type of error?

You should not do anything for these errors.

They should disappear after a week. If they don’t, you can try to apply this workaround.

Ok!
Thank you for all the support you have provided, Alexey!

I would like to add a small caveat to this. I’ve seen these errors on my node with normal downloads and deletes. They are infrequent, but sometimes a customer deletes something, or a piece has expired and the customer tries to download or delete it after expiration. This leads to the same error, but doesn’t necessarily mean your node lost a piece. The audit and repair processes check for this, so it should never impact your scores. If audits or repairs fail with this error, it’s a different case and your node has certainly lost data.
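One way to check what actually happened to such a piece is to follow its Piece ID through the whole log: if the failed download is accompanied by a delete, or a later “delete expired” line for the same ID, the error was benign. A sketch, assuming the container is named storagenode5 and using a placeholder for the Piece ID:

# paste the Piece ID from the failing log line here (placeholder)
$pieceId = "<Piece ID from the error line>"
# show every log line that mentions this piece
docker logs storagenode5 2>&1 | Select-String $pieceId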

Would it be possible to show the audit & suspension score impact in the same log line that could affect the score?

I was thinking something like .. GET_REPAIR ... failed ... <audit= 0; suspension=1>

Here the suspension score being affected is shown as 1 (and 0 when not affected).

You can already derive that. Only audits and GET_REPAIR impact scores. And the only error you see that impacts the audit score is “file does not exist”. Everything else impacts the suspension score.

But it isn’t complete: if you deliver bad data, the transfer finishes just fine, but the failure will impact your audit score without any error on the node side. If your node has trouble responding, you may not see any logs at all, but depending on what happens it can hit your suspension or audit score.
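In practice you can pull the score-relevant lines out of the log yourself: only GET_AUDIT and GET_REPAIR failures matter, and among those only “file does not exist” hits the audit score. A sketch, assuming the container is named storagenode5:

# failures that can hit the audit score (lost pieces)
docker logs storagenode5 2>&1 | Select-String "GET_AUDIT|GET_REPAIR" | Select-String "failed" | Select-String "file does not exist"
# failures that can hit the suspension score (any other error on audit/repair downloads)
docker logs storagenode5 2>&1 | Select-String "GET_AUDIT|GET_REPAIR" | Select-String "failed" | Select-String -NotMatch "file does not exist"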

How can I dig deeper and get to the bottom of that problem?
This has been a really big pain roughly every two months, so I’m trying to figure out how to solve this case.

For now I have about 1600+ error messages in the logs, and most of them (about 95%) are:
unable to delete piece... "pieces error: filestore error: file does not exist"
They refer to the same 2 missing files since the 6th of May on node5.

The other 5% is:
piecestore download failed ... "Action": "GET", "error": "pieces error
piecestore upload failed ... "Action": "PUT", "error": "unexpected EOF"
pieces:trash emptying trash failed ... "pieces error: filestore error: context canceled
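A quick way to get this kind of breakdown from the container logs is something like this (a sketch; it assumes the container name storagenode5):

# count how often each Piece ID appears in the "file does not exist" errors
docker logs storagenode5 2>&1 |
  Select-String "file does not exist" |
  ForEach-Object { if ($_ -match '"Piece ID": "([^"]+)"') { $Matches[1] } } |
  Group-Object | Sort-Object Count -Descending | Select-Object Count, Name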

I’m just wary of moving from Windows to Debian because of my lack of skills in that environment.

Are you saying it’s the same 2 pieces over and over again?
If that’s the case the impact may be limited, but that does suggest your node lost data. Have a look at this post to work around repeating errors like that.
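Roughly, that workaround boils down to giving the node an empty placeholder file so the repeated delete can finally succeed. A very rough PowerShell sketch, assuming the blob layout described in that post (a satellite folder under blobs, then the lowercased Piece ID split into a two-character subfolder plus the rest of the name with a .sj1 extension); the paths below are placeholders, so verify everything against the linked post and your own error lines before creating anything:

# Piece ID copied from the failing log line, lowercased to match the on-disk names (placeholder)
$pieceId = "<Piece ID from the error line>".ToLower()
# placeholder: replace with your storage location and the satellite folder the error refers to
$blobs   = "M:\storagenode5\storage\blobs\<satellite folder>"
$prefix  = $pieceId.Substring(0, 2)
$rest    = $pieceId.Substring(2) + ".sj1"
$target  = Join-Path (Join-Path $blobs $prefix) $rest
# create an empty placeholder only if the file really is missing
if (-not (Test-Path $target)) { New-Item -ItemType File -Path $target }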

Yep, the same cycling errors querying the 2 missing files (the operation repeats every hour).
Thanks, I’ll take a look.

Update:
It seems that I already did that cleanup on this node (node5) about 2 months ago.

Thank you for the information, I will try to do it again.

It seems that I had already replaced this one previously, but it appeared again:

2022-05-06T07:42:06.121Z	ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Looks like the node (node5) has stopped throwing those errors:

2022-05-11T08:34:06.924Z	INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
2022-05-11T08:34:07.003Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ"}
2022-05-11T08:34:07.023Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "Z5E4GCMMNCJMLYP6DYPA26HE76NRT73MA4UMS5UO5ZEUQSHKG77A"}
2022-05-11T08:34:07.140Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QFMSS774W32NS3FTCXZW3SCF73BK5JYWIHBZRMQBYPDOCIAG7EKA"}
2022-05-11T08:34:07.232Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "QM5DV2W54DIMANQFOKHJPN325VLRIEVATXHTE7I2NAWQKIR5MRMA"}
2022-05-11T08:34:07.305Z	INFO	collector	delete expired	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "ZBHWCE5OATNEYNSLJS7X2KADDLYMZVKBHHQYGAVXTKCGPZN5FOTA"}
2022-05-11T08:34:07.308Z	INFO	collector	collect	{"Process": "storagenode", "count": 5}
2022-05-11T08:34:07.775Z	INFO	orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6	sending	{"Process": "storagenode", "count": 416}
2022-05-11T08:34:07.775Z	INFO	orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB	sending	{"Process": "storagenode", "count": 113}
2022-05-11T08:34:07.776Z	INFO	orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo	sending	{"Process": "storagenode", "count": 6}
2022-05-11T08:34:07.776Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	sending	{"Process": "storagenode", "count": 227}
2022-05-11T08:34:07.776Z	INFO	orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S	sending	{"Process": "storagenode", "count": 703}
2022-05-11T08:34:07.776Z	INFO	orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs	sending	{"Process": "storagenode", "count": 205}
2022-05-11T08:34:07.969Z	INFO	orders.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.241Z	INFO	orders.12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.242Z	INFO	orders.12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs	finished	{"Process": "storagenode"}
2022-05-11T08:34:08.627Z	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "AZDHBMYICMKC27JFIAXGGD672UDALU3UEFT72PK3SYVTGJMAASDA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "PUT_REPAIR", "Size": 2319360}
2022-05-11T08:34:08.997Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	finished	{"Process": "storagenode"}
2022-05-11T08:34:09.949Z	INFO	orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6	finished	{"Process": "storagenode"}
2022-05-11T08:34:10.836Z	INFO	orders.12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S	finished	{"Process": "storagenode"}