ERROR piecestore delete failed

This is a new error. Is it anything to be concerned with?

50-253-191-147-static:~ neo$ docker logs storagenode 2>&1 | grep "file does not exist"
2019-12-29T13:31:34.425Z ERROR piecestore delete failed {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "BTT26LZBTDMZXGK6TLFVJ5NITH4VQPAWNSDLLXZPI2Y6WQ5Z7F6Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:82\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:170\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:262\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Delete:161\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func3:1001\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2019-12-29T20:31:36.509Z ERROR piecestore delete failed {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "SYB62K6K2YZSER4B7UYXSFLUP4RJ5SOZTGVO7XP33AVWDN4I23PA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:82\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:170\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:262\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Delete:161\n\tstorj.io/storj/pkg/pb.DRPCPiecestoreDescription.Method.func3:1001\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

Have you tried searching your logs for the Piece ID?
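For example, something along these lines (assuming a Docker node; substitute one of the Piece IDs from your errors):

docker logs storagenode 2>&1 | grep BTT26LZBTDMZXGK6TLFVJ5NITH4VQPAWNSDLLXZPI2Y6WQ5Z7F6Q

That would show whether the node ever received (PUT), served (GET) or deleted that piece, which helps narrow down why the file is missing now.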

No, the logs have been accumulating since the release date; that is just too long to search through.

Do you mean the release of the current version of the node?

How does the length of the log matter here?

I turned on the preflight checks and the time sync was OK. But I searched for "db error" in the logs and a lot of these came up. It's three different databases it's complaining about. Is this something to worry about? The dashboard says ONLINE.

2020-01-24T00:19:01.306Z ERROR piecestore delete failed {"error": "piecestore: pieces error: v0pieceinfodb error: context canceled", "errorVerbose": "piecestore: pieces error: v0pieceinfodb error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Delete:164\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:280\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).DeletePiece:235\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func4:1091\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "GELZ7ZYXXNSTFZQLUU5BVW7TSZDUN6D2H7O76LZVWIXTMCXVYMHA"}
2020-01-24T00:21:34.425Z ERROR piecestore failed to add order {"error": "ordersdb error: database is locked", "errorVerbose": "ordersdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*ordersDB).Enqueue:52\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).saveOrder:721\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:472\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:269\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:1066\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

2020-01-24T04:58:36.638Z ERROR server gRPC stream error response {"error": "piecestore: serial number is already used: usedserialsdb error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:537\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:484\n\tstorj.io/common/pb._Piecestore_Download_Handler:1446\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696", "errorVerbose": "piecestore: serial number is already used: usedserialsdb error: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:77\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:537\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:484\n\tstorj.io/common/pb._Piecestore_Download_Handler:1446\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:538\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:484\n\tstorj.io/common/pb._Piecestore_Download_Handler:1446\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696"}

These are three different things, unrelated to the preflight checks. The first was mentioned elsewhere; not sure why it happens, but the failed delete will be taken care of by garbage collection, and I believe it is already being looked into. The second is a locked database, a sign of contention on the database, but it can happen shortly after a restart and usually goes away by itself. The third might be an uplink trying to use the same serial for a new transfer. Not sure how that happens, but your node SHOULD reject reuse of the same serial. I don’t think there’s anything to worry about.
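If you want a rough sense of whether the locked-database errors are only a transient burst after a restart, you could count them per day, e.g. (a sketch for a Docker node whose log lines start with the ISO timestamp):

docker logs storagenode 2>&1 | grep "database is locked" | cut -c1-10 | sort | uniq -c

A handful of hits right after a restart is normal; a count that keeps climbing through the day would point to sustained contention on the database.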

I just noticed the same error in my logs. First, the delete failed:

2020-01-24T07:01:04.062Z        INFO     piecestore      upload started  {"Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT"}
2020-01-24T07:01:04.521Z        INFO     piecestore      uploaded        {"Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT"}
2020-01-24T07:04:52.717Z        ERROR    piecestore      delete failed   {"error": "piecestore: pieces error: context canceled; v0pieceinfodb error: context canceled", "errorVerbose": "piecestore: pieces error: context canceled; v0pieceinfodb error: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:283\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).DeletePiece:235\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func4:1091\n\tstorj.io/drpc/drpcserver.(*Server).doHandle:175\n\tstorj.io/drpc/drpcserver.(*Server).HandleRPC:153\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:114\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:147\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ"}

and since then my log has been getting spammed with these errors about once an hour:

2020-02-04T02:51:33.573Z        ERROR    collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:83\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:228\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:210\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:272\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:55\n\tstorj.io/common/sync2.(*Cycle).Run:147\n\tstorj.io/storj/storagenode/collector.(*Service).Run:51\n\tstorj.io/storj/storagenode.(*Peer).Run.func4:506\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-04T03:51:33.719Z        ERROR    collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:83\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:228\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:210\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:272\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:55\n\tstorj.io/common/sync2.(*Cycle).Run:147\n\tstorj.io/storj/storagenode/collector.(*Service).Run:51\n\tstorj.io/storj/storagenode.(*Peer).Run.func4:506\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-02-04T04:51:33.567Z        ERROR    collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:83\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:228\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:210\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:272\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:55\n\tstorj.io/common/sync2.(*Cycle).Run:147\n\tstorj.io/storj/storagenode/collector.(*Service).Run:51\n\tstorj.io/storj/storagenode.(*Peer).Run.func4:506\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

It’s only this one piece 6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ

@Alexey: Any idea how to get rid of this error? Is this a bug? Shouldn’t this piece have been moved to trash and deleted by now?

I created an issue for the engineers to investigate.
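In the meantime, you can check whether the piece file is actually gone from disk or is sitting in the trash. A rough sketch (the path is a placeholder for your storage location; as far as I know the piece ID becomes a lowercase filename whose first two characters are used as a subdirectory, which is why the search pattern skips them):

find /path/to/storagenode/storage/blobs /path/to/storagenode/storage/trash -iname "VHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ*"

If it only shows up under trash/, the piece was probably already moved there by garbage collection; if it shows up nowhere, the file is really gone.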


Do you have such errors in the log @donald.m.motsinger?

I still get these errors once an hour for this one piece, but no further pieces are affected.

Could you please post those lines?
Also, what is the version of storagenode?

It’s the same error as above:

2020-08-10T02:50:36.248Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:97\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-08-10T03:50:39.823Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:97\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-08-10T04:50:35.342Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:97\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-08-10T05:50:35.229Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:97\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-08-10T06:50:53.448Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:97\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-08-10T07:50:35.246Z        ERROR   collector       unable to delete piece  {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "6AVHSSMBOG4JQAIFLK7V76UUDXRC5ZEIQFQMTXD726CBA7BUL4ZQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:97\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:238\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:281\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func1:56\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Node version 1.9.5
Docker version 19.03.12, build 48a66213fe
Ubuntu 16.04.7 LTS

Thanks! I updated the issue.

@donald.m.motsinger Can you please check again in 1.10.1?

Seems to have been solved on 2020-08-12; no such errors since then. So it must have been the update before 1.10.1, as that update was done just 3 days ago.


I have to correct myself: the problem was never solved. grep just didn’t search any further because of the message "Binary file node.log matches". The log contains some lines full of null bytes (^@^@^@^@^@^@^…), and grep stops printing matches on such a file unless the -a option is used.
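In case anyone else runs into this, the search that works for me is along these lines (node.log being my redirected log file; adjust the pattern as needed):

grep -a "unable to delete piece" node.log

(Using grep -ac instead gives a quick count of how often it is still happening.)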

It doesn’t really matter to me anymore as this node got disqualified on that satellite (I might write another post about it).

I’m getting this error for two different pieces once every hour for the past day. It’s only these two pieces, and both are on the europe-west-1 satellite.

I have a few nodes and this is the only one that uses the native Linux binary; the rest are on Docker. I don’t believe that would make a difference but I figure I might as well point that out. My audit and suspension scores on this satellite are both 0.9999995.

Is there any more info I can provide to help diagnose this error further?

2021-02-12T22:00:46-07:00 {} storj03: {"L":"ERROR","T":"2021-02-12T22:00:45.970-0700","N":"collector","M":"unable to delete piece","Satellite ID":"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs","Piece ID":"COI2OR26SELNDVXUJTRGZXUTZAR6IBIJEKKQJZR7M4J6X2AMZSZA","error":"pieces error: filestore error: file does not exist","errorVerbose":"pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2021-02-12T22:00:45-07:00 {} storj03: {"L":"ERROR","T":"2021-02-12T22:00:45.854-0700","N":"collector","M":"unable to delete piece","Satellite ID":"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs","Piece ID":"ACIAHWNTG2GC2SDSL6VNQHPIIJBMLRKCCWKOPIKN5HYNPHKM35HQ","error":"pieces error: filestore error: file does not exist","errorVerbose":"pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:99\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:239\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Delete:220\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:298\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:152\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:81\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:80\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

Your node managed to lose pieces. But you are lucky that it was the garbage collector that found this and not the auditor.
Have you cleaned up the trash manually?

That is a little bit concerning though. I think it’s worth checking your disk for errors.
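For example, on an ext4 data drive that can be unmounted for a short while (a rough sketch; stop the node first and replace the device names with your own):

sudo smartctl -a /dev/sdX      # SMART health report, from the smartmontools package
sudo fsck -f /dev/sdX1         # forced filesystem check on the unmounted partition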

I have not. I migrated this node to a bigger drive a few months ago and put it in an Odroid HC2, so maybe something was lost there. I did the usual rsync with --delete once while the node was running, and then again once the node was stopped.
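Roughly this pattern, with placeholder paths for the old and new mount points (the exact flags are from memory):

rsync -aP --delete /mnt/old-drive/storagenode/ /mnt/new-drive/storagenode/   # first pass, node still running
# stop the node, then run the same command once more for the final pass
rsync -aP --delete /mnt/old-drive/storagenode/ /mnt/new-drive/storagenode/   # final pass, node stopped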

I agree. I haven’t had any GET_AUDIT errors for this node that I know of, although I’ve only been storing logs permanently for the past 2-3 weeks now. I’ll keep an eye on the disk.

Thanks.