Storj lost one piece? Repeated "file does not exist" in GET_REPAIR

I’ve started to see these errors on a regular basis in my GET_REPAIR logs (I have email alerting set up on REPAIR traffic), all about Piece ID B2O25KUSH4TEBUWXNFYETS7YGKQB4H42EUHHFJP2GYHBR4QWM7IQ. The first one was today at 2019-11-08T02:55:53.330Z (actually there is one from a month ago, at 2019-10-07T04:17:38.778Z, about piece IYZ5S2UQRU7ULH7MDWTA33QOPSDKZ5UGD62BUNIGJ7GSZZZKYMGQ, but that was a single occurrence).
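
For reference, the alerting is nothing fancy, just a small cron job roughly along these lines (a sketch of my setup: the container name is the same one used below, the address is a placeholder, and mail comes from bsd-mailx):

#!/bin/sh
# mail any REPAIR lines from the last hour (rough sketch of my alerting job)
LINES=$(docker logs --since 1h storagenode 2>&1 | grep -E '"Action": "(GET|PUT)_REPAIR"')
[ -n "$LINES" ] && printf '%s\n' "$LINES" | mail -s "storagenode REPAIR traffic" me@example.com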

I don’t think my storage is corrupted at the physical level: the host’s uptime is 64+ days and there haven’t been any ungraceful system shutdowns since I installed the Storj node.

Here’s the output of docker logs storagenode 2>&1 | grep "file does not exist":

2019-11-08T02:55:53.330Z        INFO    piecestore      download failed {"Piece ID": "B2O25KUSH4TEBUWXNFYETS7YGKQB4H42EUHHFJP2GYHBR4QWM7IQ", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_REPAIR", "error": "file does not exist"}
2019-11-08T02:55:53.330Z        ERROR   server  gRPC stream error response      {"error": "file does not exist"}
2019-11-08T03:09:40.744Z        INFO    piecestore      download failed {"Piece ID": "B2O25KUSH4TEBUWXNFYETS7YGKQB4H42EUHHFJP2GYHBR4QWM7IQ", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_REPAIR", "error": "file does not exist"}
2019-11-08T03:09:40.745Z        ERROR   server  gRPC stream error response      {"error": "file does not exist"}
... skipped a few more attempts ...
2019-11-08T10:22:59.033Z        INFO    piecestore      download failed {"Piece ID": "B2O25KUSH4TEBUWXNFYETS7YGKQB4H42EUHHFJP2GYHBR4QWM7IQ", "Satellite ID": "118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW", "Action": "GET_REPAIR", "error": "file does not exist"}
2019-11-08T10:22:59.034Z        ERROR   server  gRPC stream error response      {"error": "file does not exist"}
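
Since the errors say the file does not exist, I also wanted to check whether the piece is physically on disk. Something like the find below should answer that (the storage path is from my setup, and the layout is my understanding: the first two characters of the piece ID become a subdirectory under blobs and the rest becomes the file name with a .sj1 extension):

PIECE=B2O25KUSH4TEBUWXNFYETS7YGKQB4H42EUHHFJP2GYHBR4QWM7IQ
# case-insensitive search, so it works regardless of how the blob store cases the name
find /mnt/storj/storage/blobs -iname "${PIECE:2}*"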

Is there anything I can do to help you address this issue?

We currently have a handful of half-deleted segments. The uplink wanted to delete them: it sent a delete command to your storage node, but in the end it was unable to delete the pointer from the satellite. We have identified a few edge cases like that and are working on a solution.
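
If you want to double-check on your side that it really is only a handful of pieces, counting the distinct Piece IDs behind these errors should confirm it; something along these lines (reusing your grep):

docker logs storagenode 2>&1 \
  | grep '"Action": "GET_REPAIR"' \
  | grep 'file does not exist' \
  | grep -o '"Piece ID": "[A-Z2-7]*"' \
  | sort | uniq -c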