Piece uploaded, deleted, and then failed audit

I have noticed that some pieces fail audits. Searching for these pieces in the logs, I can see that they were uploaded, deleted, and then failed an audit with "file does not exist":

Example 1:

% ag JBWD7DKK3U2W7ISOZB43QTSREDG6DYYQ7KLYITUS2QHBZKVWNO2Q
storj.log.4
52323:2022-10-17T05:56:19.314-0700	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "JBWD7DKK3U2W7ISOZB43QTSREDG6DYYQ7KLYITUS2QHBZKVWNO2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 6756607734177}
52325:2022-10-17T05:56:19.364-0700	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "JBWD7DKK3U2W7ISOZB43QTSREDG6DYYQ7KLYITUS2QHBZKVWNO2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 92672}

storj.log.2
101541:2022-10-19T10:55:27.292-0700	INFO	piecedeleter	delete piece sent to trash	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "JBWD7DKK3U2W7ISOZB43QTSREDG6DYYQ7KLYITUS2QHBZKVWNO2Q"}
101735:2022-10-19T10:56:46.214-0700	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "JBWD7DKK3U2W7ISOZB43QTSREDG6DYYQ7KLYITUS2QHBZKVWNO2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
101736:2022-10-19T10:56:46.214-0700	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "JBWD7DKK3U2W7ISOZB43QTSREDG6DYYQ7KLYITUS2QHBZKVWNO2Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:554\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

Example 2:

% ag 3BUKHNIZLHO5EFZP6CZL3IKRUHHV5G4HK5SKYWHNWK4NHGGQ3C2A
storj.log.4
17420:2022-10-17T02:10:02.637-0700	INFO	piecestore	upload started	{"Process": "storagenode", "Piece ID": "3BUKHNIZLHO5EFZP6CZL3IKRUHHV5G4HK5SKYWHNWK4NHGGQ3C2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 6762919243425}
17422:2022-10-17T02:10:03.307-0700	INFO	piecestore	uploaded	{"Process": "storagenode", "Piece ID": "3BUKHNIZLHO5EFZP6CZL3IKRUHHV5G4HK5SKYWHNWK4NHGGQ3C2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 290304}

storj.log.2
53529:2022-10-19T06:46:35.678-0700	INFO	piecedeleter	delete piece sent to trash	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "3BUKHNIZLHO5EFZP6CZL3IKRUHHV5G4HK5SKYWHNWK4NHGGQ3C2A"}
53968:2022-10-19T06:49:00.571-0700	INFO	piecestore	download started	{"Process": "storagenode", "Piece ID": "3BUKHNIZLHO5EFZP6CZL3IKRUHHV5G4HK5SKYWHNWK4NHGGQ3C2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT"}
53969:2022-10-19T06:49:00.571-0700	ERROR	piecestore	download failed	{"Process": "storagenode", "Piece ID": "3BUKHNIZLHO5EFZP6CZL3IKRUHHV5G4HK5SKYWHNWK4NHGGQ3C2A", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_AUDIT", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:554\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:61\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

Notes:
I rotate logs daily, so the number in the log file name indicates how many days ago those events happened (for example, the entries in storj.log.2 are from two days ago).

This does not happen often, but it does happen.
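
To gauge how often, one could cross-reference trash deletions with failed audits across all rotated logs. A minimal sketch, assuming the log format shown above and that the rotated logs sit in the current directory as storj.log*:

# Collect the IDs of pieces that were sent to trash.
grep -h 'delete piece sent to trash' storj.log* \
  | grep -o '"Piece ID": "[^"]*"' | cut -d'"' -f4 | sort -u > trashed.txt
# Collect the IDs of pieces whose GET_AUDIT download failed.
grep -h 'download failed' storj.log* | grep 'GET_AUDIT' \
  | grep -o '"Piece ID": "[^"]*"' | cut -d'"' -f4 | sort -u > failed.txt
# Print the IDs that appear in both sorted lists.
comm -12 trashed.txt failed.txt

This does not check the order of events, so each ID it prints can then be traced with ag, as in the examples above, to confirm the piece was deleted before the audit.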

Is this a server-side auditing bug?

Perhaps. Could you please give me your NodeID?

It’s 1zWMZAUsyxpi1v9J5Me9ukcdKiaCDEpeftiTSdcrXJh7RvHEyf


It seems your audit score is not affected so far; however, I have passed your information to the team.


If your audit score has not been affected, then this might be some testing that Storj is doing.
