ERROR piecestore download failed *.sj1: no such file or directory

Node5 has been running for a couple of days with redirected logs, and I fished out the following:

2022-05-08T03:11:28.900Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "7WSYPKRDZOQXGJHUN7NE2G2JHKSKEJKCC7PNOYCVJZAUKU6AQOQA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "error": "pieces error: stat config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/7w/sypkrdzoqxgjhun7ne2g2jhkskejkcc7pnoycvjzauku6aqoqa.sj1: no such file or directory", "errorVerbose": "pieces error: stat config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/7w/sypkrdzoqxgjhun7ne2g2jhkskejkcc7pnoycvjzauku6aqoqa.sj1: no such file or directory\n\tstorj.io/storj/storagenode/pieces.NewReader:224\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:273\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:542\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

2022-05-08T04:37:00.402Z	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "GPI46HGPFFHFCZBXIZD4KHSSUYWSFWTPQBKGI7HJQSVND6YWAPHQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "error": "pieces error: stat config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/gp/i46hgpffhfczbxizd4khssuywsfwtpqbkgi7hjqsvnd6ywaphq.sj1: no such file or directory", "errorVerbose": "pieces error: stat config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/gp/i46hgpffhfczbxizd4khssuywsfwtpqbkgi7hjqsvnd6ywaphq.sj1: no such file or directory\n\tstorj.io/storj/storagenode/pieces.NewReader:224\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:273\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:542\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

2022-05-08T04:44:14.891Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "S7ZR6BHHXJ4FZCCUQIMDRHARW43G2WSUPG6IDPH4YK5376MRWLKA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "error": "unexpected EOF", "errorVerbose": "unexpected EOF\n\tstorj.io/common/rpc/rpcstatus.Error:82\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:347\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52", "Size": 0}

2022-05-08T21:51:03.798Z	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "LPKGHD2ZB3YMQP6NJCB5NHERRORYUFV7X7TKZ6RW644F6Z3OKFEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}

2022-05-08T21:51:04.135Z	INFO	piecestore	downloaded	{"Process": "storagenode", "Piece ID": "LPKGHD2ZB3YMQP6NJCB5NHERRORYUFV7X7TKZ6RW644F6Z3OKFEQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}

2022-05-09T23:51:40.444Z	ERROR	piecestore	upload failed	{"Process": "storagenode", "Piece ID": "I64EH2FTMA3FJ523TYJTIGTTESXXVNZ5MRWLD47ONKGE6CK7CJJQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "error": "unexpected EOF", "errorVerbose": "unexpected EOF\n\tstorj.io/common/rpc/rpcstatus.Error:82\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:347\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:220\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52", "Size": 794624}

2022-05-10T00:42:06.132Z	ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F4G6TP3QJHYOEWOTEX65EADNA6ZXRWWPMDJ3KPAWA44IW4L7F2GQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

2022-05-10T05:42:06.165Z	ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "Z5E4GCMMNCJMLYP6DYPA26HE76NRT73MA4UMS5UO5ZEUQSHKG77A", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:103\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:299\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

What can I do about them?
The "collector unable to delete piece" error could probably be fixed by creating an empty file in the appropriate blobs folder, but I don't know how it relates to the other errors.

Not a good error. Your node has lost that piece or has no access to the file. If this error occurred during an audit or repair, that audit would fail.
Please check whether this directory and file exist; they should be in your data location.
I would suggest stopping your node, checking this disk for errors, and fixing them.
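For example, assuming the data directory is mounted from M:\storagenode5 on the host (that path is only an example, use your own data location), you could check for the first missing piece from your log in PowerShell:

# hypothetical host path: config/storage/... inside the container corresponds to <data location>\storage\... on the host
dir "M:\storagenode5\storage\blobs\v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa\7w\sypkrdzoqxgjhun7ne2g2jhkskejkcc7pnoycvjzauku6aqoqa.sj1"

The subfolder is the first two characters of the Piece ID in lowercase, and the file name is the rest of the Piece ID plus the .sj1 extension.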

Via CHKDSK?

No such file is present:

(screenshot: 2022-05-10_10-32-44)

Yes, with chkdsk /f

Then this file is lost. The satellite is sure that this piece should exist on your node, so I would expect the audit score to be affected if the satellite tries to audit it.

Is there some procedure to fix this?

CHKDSK X: with /F /R?
Or /F only?

On your node - no. The piece will be recovered by the repair service to other nodes when the number of healthy pieces drops to the repair threshold.

You can run chkdsk x: /f first. If you suspect that there are bad blocks, you can then try chkdsk x: /f /r.

I’ve checked with CHKDSK /F and the system did not find any problems.

After CHKDSK the node does not start:
docker: Error response from daemon: invalid mount config for type "bind": stat /run/desktop/mnt/host/m/identity/storagenode5: invalid argument.

You need to stop and remove the container, then run it back with all your parameters.
If the disk was temporarily dismounted (as in the case of chkdsk), docker stop will not be enough; you need to re-create the container.
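A rough sketch of the whole sequence in PowerShell (the docker run parameters below are placeholders based on the standard setup instructions; substitute your own wallet, email, address, allocated storage and paths):

docker stop -t 300 storagenode5
docker rm storagenode5
# re-create the container with the same parameters you used originally
docker run -d --restart unless-stopped --stop-timeout 300 `
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 `
  -e WALLET="0x..." -e EMAIL="you@example.com" -e ADDRESS="your.external.address:28967" -e STORAGE="2TB" `
  --mount type=bind,source=M:\identity\storagenode5,destination=/app/identity `
  --mount type=bind,source=M:\storagenode5,destination=/app/config `
  --name storagenode5 storjlabs/storagenode:latest

The backtick at the end of each line is the PowerShell line-continuation character.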

It’s after:
docker stop storagenode5
chkdsk M: /F
docker start storagenode5
→ error
docker stop storagenode5
docker rm storagenode5
docker run -d --restart unless-stopped...
→ error

The disk M: is visible in Windows and lets me browse it.

Perhaps the Windows version requires a restart of the Docker engine after the disk check. Weird.
Please try to restart the Docker engine from Docker Desktop.

I’ve restarted the whole PC and the docker run... command went through.

But there is still no solution for this error. Should I leave it in its current state or do something?

Do I need to create a dummy file for these pieces to get rid of this type of error?

You should not do anything about these errors.

They should disappear after a week. If they do not, then you can try to apply this workaround.
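If you do decide to apply it, the idea is to create a zero-byte placeholder at the path the collector is looking for, so its delete can succeed. A sketch in PowerShell for the piece from the collector error above (M:\storagenode5 is again only an example for your data location):

# hypothetical path: satellite blobs folder from the log, first two characters of the Piece ID as the subfolder, the rest + .sj1 as the file name
New-Item -ItemType File -Path "M:\storagenode5\storage\blobs\v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa\z5\e4gcmmncjmlyp6dypa26he76nrt73ma4ums5uo5zeuqshkg77a.sj1"

The collector should then delete the empty file on its next run instead of logging the error.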

Ok!
Thank you for all the support you have provided, Alexey!


I would like to add a small caveat to this. I’ve seen these errors on my node with normal downloads and deletes. They are infrequent, but sometimes a customer deletes something, or a piece has expired and the customer tries to download or delete it after expiration. This leads to the same error but doesn’t necessarily mean your node lost a piece. The audit and repair processes check for this case, so it should never impact your scores. If audits or repairs fail with this error, it’s a different case and your node has certainly lost data.


Would it be possible to show the audit & suspension scores in the same log line for events that could affect them?

I was thinking of something like ... GET_REPAIR ... failed ... <audit=0; suspension=1>

Here the suspension score being affected is shown as 1 (0 when not affected).