ERROR piecedeleter could not send delete piece to trash: file does not exist

2023-01-28T16:28:14.639Z	ERROR	piecedeleter	could not send delete piece to trash	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "BBCAXYPNQLVSM7J6JOD46L7QVBHHUW7RRFPUISABEKA4NY7FQOFQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

I get some errors like this. Is there a problem on my end?

Got that one too: a piece was uploaded, then deleted, and the file was not found. A bit scary.

Searching the forum I found some explanations. I will check all my nodes' databases with sqlite3 just to be sure there are no problems.
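For reference, a quick integrity check can be run against each database file with the sqlite3 command line, for example (the path below is only a placeholder; point it at your own storage location and repeat for each .db file):

```
sqlite3 /path/to/storage/bandwidth.db "PRAGMA integrity_check;"
```

A healthy database prints `ok`.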

Hello @MarcVanWijk,
Welcome to the forum!

Please provide these lines from your logs here, between two new lines with three backticks, like this:

```
log lines here
```

The workaround is known, but if you could provide log lines as described above to @MarcVanWijk, it could help show the problem in the code.

Hi @Alexey, it actually happened twice. Please find below the log lines containing the piece IDs, with 3 lines of context:

[storagenode_logs]grep -hC 3 MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA node.log-202301*
2023-01-11T00:30:17.082Z        INFO    piecestore      download started        {"Process": "storagenode", "Piece ID": "4POOWBEYWLYZVOOGMH5P7CJHI4CVO2JN2MJ7J4AXI5ODI5Z2F2IA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2023-01-11T00:30:17.289Z        INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "4POOWBEYWLYZVOOGMH5P7CJHI4CVO2JN2MJ7J4AXI5ODI5Z2F2IA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2023-01-11T00:30:18.663Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "KX65NJCSU3SBLUF2ZHBCXIJNRNXV2KNJL44SRXYFFAZCMM6QVHZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1872172076996}
2023-01-11T00:30:18.718Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 1872172076996}
2023-01-11T00:30:18.795Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 768}
2023-01-11T00:30:20.276Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "KX65NJCSU3SBLUF2ZHBCXIJNRNXV2KNJL44SRXYFFAZCMM6QVHZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 472064}
2023-01-11T00:30:20.948Z        INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "454TWLXCKEGHFSHKD2EQSFLTKUAABYR7IMYAXO6LXBMMT3EH6GUQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2023-01-11T00:30:21.465Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "IIBGPDB2IAWUYIGIBQ3B2OOA7UV6VMCNTO5MA5PZ7UIQDY35WTRA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1872171603140}
--
2023-01-11T00:45:07.810Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "IW5JAO5Y7MPCASFITQUFCHQJ4JDL3CHDCXRKJVWAFXNLIAY3WZXQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 2319360}
2023-01-11T00:45:12.825Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "CX533UG3UU7TPTUWKFV4UOVTVVGU725UXEL756QMCZZAXRZQKU3Q", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 1872083759556}
2023-01-11T00:45:15.227Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "3WZGOULAYAT2WHZJ6ELYDPANKFOWRMQUQ6HAZPVVHCYQXN5GSB7Q"}
2023-01-11T00:45:17.053Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA"}
2023-01-11T00:45:17.125Z        ERROR   piecedeleter    could not send delete piece to trash    {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-11T00:45:17.125Z        WARN    pieces  failed to migrate v0 piece. Piece may not be recoverable        {"Process": "storagenode"}
2023-01-11T00:45:19.989Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "BKCQKL3L3VT2DZVPIQ4LVIPPBRHV4UZ3VCHW66L2WJ6CNEPZNP4A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 1872083759556}
2023-01-11T00:45:20.150Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "BKCQKL3L3VT2DZVPIQ4LVIPPBRHV4UZ3VCHW66L2WJ6CNEPZNP4A", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 21504}

[storagenode_logs]grep -hC 3 CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA node.log-202301*
2023-01-11T22:00:18.358Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "XRHYBIJL4PJHUL6IEWUEYT4P5FWTFRG66RDHDT5Q25QCWNFFUDUA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 2816}
2023-01-11T22:00:19.388Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "7Z2DE2OACQXJ2JRVTR7LSUFTOU7V7T2VQD4JN6CRO3KL34HUKUNA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 75008}
2023-01-11T22:00:19.712Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "GL6KIDTFU7B43UPUK4XP6BB3QESC6TJELXRKVL6EWMBRGHWVEVXA"}
2023-01-11T22:00:21.530Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 1872642418712}
2023-01-11T22:00:21.637Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 768}
2023-01-11T22:00:25.595Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "UXD73NXRTZU2ABBCX7PXHH5P46NLPF5LX64RJDKB4L4HDQUHVTXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1872642417432}
2023-01-11T22:00:26.350Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "UXD73NXRTZU2ABBCX7PXHH5P46NLPF5LX64RJDKB4L4HDQUHVTXQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 181504}
2023-01-11T22:00:28.578Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "KS7HDSKGKNBGHHM3XZI7CX6EBCQEMNOAOX4G5ERMP6KBP6CGV2LA"}
--
2023-01-11T22:15:18.359Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "GFZMRSKCNDH3N6CDR7ULII7PRWLQN4QONZ5JYY4YM5U4GI4PDIZQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 92416}
2023-01-11T22:15:21.303Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "CLXICBYZNQVOMOUTOEZ4AD2YMWC6JHKFCEJ6HKOC2OXYLGSD6HEA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Available Space": 1872594963736}
2023-01-11T22:15:21.846Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "CLXICBYZNQVOMOUTOEZ4AD2YMWC6JHKFCEJ6HKOC2OXYLGSD6HEA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Size": 1304832}
2023-01-11T22:15:22.561Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA"}
2023-01-11T22:15:22.562Z        ERROR   piecedeleter    could not send delete piece to trash    {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-11T22:15:22.562Z        WARN    pieces  failed to migrate v0 piece. Piece may not be recoverable        {"Process": "storagenode"}
2023-01-11T22:15:29.498Z        INFO    piecestore      downloaded      {"Process": "storagenode", "Piece ID": "7UPPRIY5DFF3JEA3YQIWL2UJALDJIV5LH2K6JY3U4MI72IAS3SMQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET"}
2023-01-11T22:15:33.780Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "3FACLTU4H4BAFSORLER7DGS377S74XHWGUUDAUMI72STHAZA3F5Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1872593658392}

Please note the “pieces failed to migrate v0 piece. Piece may not be recoverable” warning.

Both cases happened on 11 Jan. I am not sure which version I was running at the time, v1.70.2 or an earlier one. I am using Docker on Synology DSM 7.

If you need more information, please let me know.

Brgds, Marc

No, you need to search your logs for the same Piece ID.

Hi Alexey,

I did, but maybe adding 3 lines of context was confusing. Here are the results when searching by piece ID, without any lines of context.

[storagenode_logs]grep -h MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA node.log-202301*
2023-01-11T00:30:18.718Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 1872172076996}
2023-01-11T00:30:18.795Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 768}
2023-01-11T00:45:17.053Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA"}
2023-01-11T00:45:17.125Z        ERROR   piecedeleter    could not send delete piece to trash    {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

And the other piece:

[storagenode_logs]grep -h CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA node.log-202301*
2023-01-11T22:00:21.530Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Available Space": 1872642418712}
2023-01-11T22:00:21.637Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "PUT", "Size": 768}
2023-01-11T22:15:22.561Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA"}
2023-01-11T22:15:22.562Z        ERROR   piecedeleter    could not send delete piece to trash    {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "CMRP7B4VNLKO73APVO2WCGYAEEP2T4YECRU6LG4NWDCWNNKE5ROA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

Please note that this does not show the “pieces failed to migrate v0 piece. Piece may not be recoverable” warning, which occurs in the very same millisecond as the mentioned error.

Brgds, Marc

I had a power outage of a few hours, and I think that caused the error in my case. I don’t see it anymore. I also checked the db files and they are OK. The UPS saved the day.

This could be a result of corruption of (old) pieces.

Hi Alexey,

I do not have old pieces, my node is just over one month old.

I find it strange that each time I got the “could not send delete piece to trash” error, in the very same millisecond I also got the “pieces failed to migrate v0 piece. Piece may not be recoverable” warning. That cannot be a coincidence.

Another curious thing I noticed: the two “could not send delete piece to trash” errors follow exactly the same pattern. A piece of 768 bytes is uploaded, and exactly 15 minutes later it is deleted with the error. Both times.

Brgds, Marc

Speak of the devil, yesterday I had another error:

[storagenode_logs]grep -h 35N2F2B4QQ3VVMIQAL6HF2IOOT3Y4WHZQNMM3Q5LTZXTHQIOQSKA *
2023-01-29T14:49:48.809Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "35N2F2B4QQ3VVMIQAL6HF2IOOT3Y4WHZQNMM3Q5LTZXTHQIOQSKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1821776343276}
2023-01-29T14:49:48.884Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "35N2F2B4QQ3VVMIQAL6HF2IOOT3Y4WHZQNMM3Q5LTZXTHQIOQSKA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 768}
2023-01-29T14:49:54.422Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "35N2F2B4QQ3VVMIQAL6HF2IOOT3Y4WHZQNMM3Q5LTZXTHQIOQSKA"}
2023-01-29T14:49:54.838Z        ERROR   piecedeleter    could not send delete piece to trash    {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "35N2F2B4QQ3VVMIQAL6HF2IOOT3Y4WHZQNMM3Q5LTZXTHQIOQSKA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

Same file size as the other two (768 bytes); this time, however, the deletion followed immediately after the upload.

And, as in the other two cases, I again got the “pieces failed to migrate v0 piece. Piece may not be recoverable” warning in the same millisecond as the error.

Brgds, Marc

Hi again,

Sorry to keep spamming the thread, but today I spotted another one with a similar pattern (although this time the piece size is different):

[storagenode_logs]grep "R4IBKJKMRYAOQCEBCNENQSNCNXKBGIALSMFUBDS2AGEOIKE2HBYQ\|v0" \\\\marstorage/Storj/storage/node.log
2023-01-30T11:33:21.590Z        INFO    piecestore      upload started  {"Process": "storagenode", "Piece ID": "R4IBKJKMRYAOQCEBCNENQSNCNXKBGIALSMFUBDS2AGEOIKE2HBYQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Available Space": 1819046414828}
2023-01-30T11:33:21.881Z        INFO    piecestore      uploaded        {"Process": "storagenode", "Piece ID": "R4IBKJKMRYAOQCEBCNENQSNCNXKBGIALSMFUBDS2AGEOIKE2HBYQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Size": 15104}
2023-01-30T11:34:15.393Z        INFO    piecedeleter    delete piece sent to trash      {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "R4IBKJKMRYAOQCEBCNENQSNCNXKBGIALSMFUBDS2AGEOIKE2HBYQ"}
2023-01-30T11:34:15.731Z        WARN    pieces  failed to migrate v0 piece. Piece may not be recoverable        {"Process": "storagenode"}
2023-01-30T11:34:15.732Z        ERROR   piecedeleter    could not send delete piece to trash    {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "R4IBKJKMRYAOQCEBCNENQSNCNXKBGIALSMFUBDS2AGEOIKE2HBYQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storage/filestore.(*blobStore).Stat:102\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:245\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:290\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:369\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

And again the same warning, this time logged immediately before the error.

Brgds, Marc

Hey Marc,

The log line you point out appearing when you get “could not send delete piece to trash” (“failed to migrate v0 piece”) occurs as part of the “trash” attempt, so it makes sense that it would appear at the same time. (Link to the log statement in the code.)

Basically, it will attempt to trash the piece, but if it cannot find it, then before completing it will attempt to migrate a v0 piece with the same piece ID to v1, in case one exists.

All this suggests to me is that, for the pieces you see these logs for, you did not have the piece stored at the time, either as a v1 piece or as a v0 piece.
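To make that flow concrete, here is a minimal, hypothetical Go sketch of the logic described above. It is not the actual storj.io/storj/storagenode/pieces code (the real filestore also nests blobs per satellite and key prefix, and the paths and names here are invented for illustration), but it shows the order of operations that produces the warning and the error together:

```go
// Hypothetical, simplified sketch of the trash-or-migrate flow; not the real
// storj.io/storj/storagenode/pieces implementation.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// trashPiece tries to move the v1 blob for pieceID into the trash directory.
// If no v1 blob exists it first tries to migrate a v0 blob with the same ID,
// logging a warning when that also fails, and then returns "file does not exist".
func trashPiece(storageDir, pieceID string) error {
	v1Path := filepath.Join(storageDir, "blobs", pieceID+".sj1")
	trashPath := filepath.Join(storageDir, "trash", pieceID+".sj1")

	if _, err := os.Stat(v1Path); errors.Is(err, fs.ErrNotExist) {
		// No v1 piece on disk: try to migrate a v0 piece with the same ID.
		if migErr := migrateV0ToV1(storageDir, pieceID); migErr != nil {
			fmt.Println("WARN  pieces  failed to migrate v0 piece. Piece may not be recoverable")
			return fmt.Errorf("pieces error: filestore error: %w", err)
		}
	} else if err != nil {
		return err
	}

	// A v1 blob exists now (original or freshly migrated); move it to trash.
	if err := os.MkdirAll(filepath.Dir(trashPath), 0o755); err != nil {
		return err
	}
	return os.Rename(v1Path, trashPath)
}

// migrateV0ToV1 renames an old-format (v0) blob into the v1 layout, if present.
func migrateV0ToV1(storageDir, pieceID string) error {
	v0Path := filepath.Join(storageDir, "blobs", pieceID) // pretend v0 blobs had no suffix
	v1Path := filepath.Join(storageDir, "blobs", pieceID+".sj1")
	if _, err := os.Stat(v0Path); err != nil {
		return err // nothing to migrate
	}
	return os.Rename(v0Path, v1Path)
}

func main() {
	// A piece that was never stored (or was already removed): both lookups fail,
	// so we get the migration warning followed by the "file does not exist" error.
	err := trashPiece("/tmp/storagenode-demo", "MZIFIIVNQRUSUA6APNKSU45ID33KLS22WTZFOLJKTSC6BTMZGYZA")
	if err != nil {
		fmt.Println("ERROR piecedeleter could not send delete piece to trash:", err)
	}
}
```

Because both lookups fail for a piece that is not on disk, the migration warning and the “file does not exist” error land in the log within the same millisecond, which matches what you are seeing.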

Let me know if you have further questions.


Hi Moby,

Thanks for the explanation.

Luckily, the file-not-found error only happened four times, and only with deletions. I do not think this is caused by an issue on my system, as I would then expect errors with downloads and audits as well. Hopefully this is some kind of race condition in the storage node logic.

Brgds, Marc


It may be possible to get multiple delete requests for the same piece. I’m not an expert on deletes, so I’m not 100% sure about the likelihood, but perhaps you already received delete requests for these pieces that succeeded.
Anyway, as long as you are not seeing “not found” errors in response to audit or download requests, you should be in the clear.

Hi Moby,

I am 100% sure these pieces were not deleted multiple times. It must be something else.

Brgds, Marc

Thank you for the clarification. I will see if I can understand what could cause this behavior in that case.

Hey Marc,

Do these piece IDs appear in your logs anywhere other than what you’ve shared?

Hi Moby,

Nope. I did a grep on all the logs I have.

Brgds, Marc