Collector unable to delete piece

The storagenode.log file on my node is being written to 10,000 times a second and has begun filling my entire disk (100 GB+) with "collector unable to delete piece" warnings. I have been a node operator for 5 years and haven't encountered any issues until now. Any advice would be appreciated, as I'm currently unable to keep the node alive.

2024-10-17T13:43:19+01:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "E3THDH42KS22DOG5KV5OLV3SUDHTM4AG76TIW3S5NGYO3YGW3K7Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:126\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:350\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:329\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:375\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:111\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-10-17T13:43:19+01:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "GU6HHDVZ3PJ6NVNDX6IL545SPE23CSSE4AXSHKNWTI24EOBGZI3Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:126\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:350\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:329\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:375\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:111\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
(…thousands more identical WARN lines, differing only in Piece ID…)


There are a couple of older threads about this, and it sounds like some operators have had success creating/touching the .sj1 files it's looking for. But I'd just stop that node, wipe its DB directory, and start it again. Your UI stats will be wrong until next month: no big deal.
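If you do want to try the "touch the missing .sj1 files" workaround instead, here's a rough sketch. It assumes the usual storagenode blob layout (`<storage>/blobs/<satellite-dir>/<first two chars of lowercase piece ID>/<rest>.sj1`) — verify that against your own disk first, and note every path below is a placeholder, not your real one:

```shell
# Sketch: for each "Piece ID" the collector logs, create an empty .sj1 file so the
# collector can "delete" it and stop spamming the log. Layout assumption (check it
# on your node before running anything like this):
#   <storage>/blobs/<satellite-dir>/<first 2 chars of lowercase piece id>/<rest>.sj1
touch_missing_pieces() {
  storage=$1 sat_dir=$2 log=$3
  # pull the unique piece IDs out of the log lines
  grep -o '"Piece ID": "[A-Z0-9]*"' "$log" | cut -d'"' -f4 | sort -u |
  while read -r piece; do
    low=$(printf '%s' "$piece" | tr 'A-Z' 'a-z')
    dir="$storage/blobs/$sat_dir/${low%"${low#??}"}"   # first two characters
    file="$dir/${low#??}.sj1"                          # remaining characters
    mkdir -p "$dir"
    [ -e "$file" ] || touch "$file"                    # never clobber a real piece
  done
}

# Example invocation (placeholder paths -- substitute your own):
# touch_missing_pieces /mnt/storj/storage ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa /var/log/storagenode.log
```

Stop the node first, run it dry on a copy of the log, and only then decide — wiping the DBs is the simpler fix.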

Instead of giving it what it wants… make it forget it wanted anything in the first place :wink:

I think it's a recurrence of an old bug:

Hopefully it will be fixed soon.


It's not a bug, it's normal behavior: if the customer deleted the TTL data before its expiration, the garbage collector has already moved these pieces to the trash, so the TTL collector complains.
You may ignore this WARN, or you may configure a custom log level for this chore; see --log.custom-level.
You may also temporarily change the log level via the debug port: Guide to debug my storage node, uplink, s3 gateway, satellite.
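For example, to silence just this chore while keeping WARNs from everything else, something like this in config.yaml (or the equivalent `--log.custom-level` command-line flag) should work — the logger name "collector" matches the second column of the log lines above, but the exact value format is my understanding of the flag, so check `storagenode setup --help` on your version:

```yaml
# config.yaml -- raise only the collector logger's threshold so these
# "unable to delete piece" WARNs are dropped; other loggers are unaffected.
# Format assumed to be a comma-separated list of name=LEVEL overrides.
log.custom-level: "collector=ERROR"
```

Restart the node after changing the config, or use the debug-port method from the guide above to change it on a running node.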