Requests to download deleted file

I found a few topics about "file does not exist", but I think my case is different.
Here are a few examples from my log. Unfortunately I don't have all the logs from day 1 to check every event.
The file was available and was downloaded several times without issues:

{"L":"INFO","T":"2021-04-02T08:21:31.849Z","N":"piecestore","M":"downloaded","Piece ID":"HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA","Satellite ID":"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs","Action":"GET"}

Later this file was deleted, and I have the following entry in the log:

{"L":"INFO","T":"2021-04-04T09:14:05.452Z","N":"collector","M":"delete expired","Satellite ID":"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs","Piece ID":"HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA"}

And then there are 30+ events over several days attempting to download this file that was already deleted:

{"L":"ERROR","T":"2021-04-15T11:01:01.408Z","N":"piecestore","M":"download failed","Piece ID":"HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA","Satellite ID":"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs","Action":"GET","error":"file does not exist","errorVerbose":"file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:508\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
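To see how often a single deleted piece is still being requested, you can count the "download failed" entries per piece ID. This is a minimal sketch, assuming the node's logs were first saved to a file (e.g. `docker logs storagenode > node.log 2>&1`, where `storagenode` is an assumed container name); the embedded sample lines are simplified stand-ins, not verbatim node output.

```shell
# Two simplified sample log lines so the snippet is self-contained;
# in practice node.log would come from "docker logs".
cat > node.log <<'EOF'
{"L":"INFO","N":"piecestore","M":"downloaded","Piece ID":"HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA","Action":"GET"}
{"L":"ERROR","N":"piecestore","M":"download failed","Piece ID":"HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA","Action":"GET","error":"file does not exist"}
EOF

# Count failed GET attempts for one specific piece ID.
PIECE=HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA
grep '"M":"download failed"' node.log | grep -c "$PIECE"
```

On a real log this prints how many times clients asked for the already-deleted piece.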

Any guesses as to what is wrong?

So you have no old logs? But is this still an issue in your current logs (the logs above have gaps of several days in them)? I'm assuming you're on Docker? On Linux? On a current build, 1.25 or 1.26? If the error is in your currently running logs, can you post it?

Check your current logs for any failed "collector delete expired" entries with the error "file does not exist". The job runs every hour, so if you aren't seeing an error from the collector every hour, you are good. If you are, it's an annoying issue, but it isn't serious and can be fixed by editing the SQL table for delete expired (at your own risk). It's caused by poor disk performance while delete expired is running: the file gets deleted, but the delete expired process times out and fails to update the SQL table. You will find that it updates the second-to-last column from NULL to a date-time stamp every hour; you can use that to select on.
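The hourly-collector check described above can be sketched as a simple log filter. Again this assumes logs saved to a file (e.g. `docker logs storagenode > node.log 2>&1`, container name assumed); the sample lines below are simplified illustrations, not verbatim collector output.

```shell
# Simplified sample lines standing in for real "docker logs" output.
cat > node.log <<'EOF'
{"L":"INFO","N":"collector","M":"delete expired","Piece ID":"AAAA"}
{"L":"ERROR","N":"collector","M":"delete failed","Piece ID":"BBBB","error":"file does not exist"}
EOF

# A healthy node shows no ERROR-level collector entries; anything printed
# here is a piece the collector failed to clean up.
grep '"N":"collector"' node.log | grep '"L":"ERROR"'
```

If this prints a line roughly once per hourly run, you are hitting the timeout issue described above.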

If your logs are still showing "piecestore download failed" with action GET and error "file does not exist", then you've lost a file and the satellite doesn't know about it; otherwise you wouldn't keep getting requests for it.

Both errors, from my limited knowledge, point at possible disk issues. I doubt the disk itself is failing, but there could be a problem with the interface. Is it on USB? (If it is, post the output from "lsusb -t".) If it's not on USB and is internal, then it's probably worth running fsck to make sure you have no bad bits.

I am using Docker on Linux, and it was recently updated to the latest 1.26 version. Watchtower handles updates and is working fine.
No USB; the disk has a direct SATA connection.

The only part that confuses me: two weeks have passed since the delete expired event, and such errors are still appearing in the logs. For one piece, that error is repeated 30+ times.
I don't think there is an issue with my file storage, since I got successful downloads before the delete expired event. After delete expired, the file is obviously gone, so it cannot be retrieved any more.
Why is the satellite still offering this file for clients to download? I am quite sure the satellite deleted that file earlier, but I don't have logs from the previous month.

Anyway thanks for sharing your thoughts.

Looks like this error is gone now. Something was not updated on the satellite side, so clients kept trying to download the deleted file.
2021-04-15T17:21:13.415Z - the last download attempt
2021-04-04T09:14:05.452Z - "M":"delete expired"
If I am not wrong, an expired file is deleted from the trash after one week. So for more than two weeks someone kept trying to download the deleted file.

{"L":"ERROR","T":"2021-04-15T17:21:13.415Z","N":"piecestore","M":"download failed","Piece ID":"HZXBMW6BFOY5CB7BWJ36FYH4WMQPELH3N44Z6EBZ2IRLXVMK7EXA","Satellite ID":"12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs","Action":"GET","error":"file does not exist","errorVerbose":"file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:508\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

I can still see the error here today.

Hi, this is an issue with expired objects not being cleaned up properly.

Some of the database queries didn't scale as well as we hoped, so the cleanup was disabled. Hopefully it'll be solved with the next satellite release, unless there are other issues.

There are no downsides for the storage nodes besides the confusing errors.


I assume that means these pieces, despite still being in the database, would still be excluded from audits?

Yes, they are excluded from audits.
