1.107.3 Error: collector unable to delete pieces

All my nodes started showing this warning (endless messages) as soon as they were updated to the new version.
All of them have a 100% audit score; I’ve never seen this before.
Is anyone else getting these messages?

2024-07-03T22:06:27Z WARN collector unable to delete piece {“Process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Piece ID”: “FFUGWPHFGJKKT56BFEW65U57OTPZOKKF5B47ID4IGROMTFQYRWPQ”, “error”: “pieces error: filestore error: file does not exist”, “errorVerbose”: “pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:270\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:250\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:355\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:575\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}
2024-07-03T22:06:28Z WARN collector unable to delete piece {“Process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Piece ID”: “4NZIKENI4SB2WWIKYVHQINKG5SFFX65VTBBOMP5QFZZYEHF7WFFQ”, “error”: “pieces error: filestore error: file does not exist”, “errorVerbose”: “pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:270\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:250\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:355\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:575\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}
2024-07-03T22:06:28Z WARN collector unable to delete piece {“Process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Piece ID”: “2CG4VQEH6R5QVG3Z5Y6R3NIY2HRWL5RK35QRN426DBCOTXMU5WFA”, “error”: “pieces error: filestore error: file does not exist”, “errorVerbose”: “pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:270\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:250\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:355\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:575\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

This warning means that the garbage collector already removed the TTL data before it expired, so when the TTL collector now tries to remove it, it can’t find it.

The customer may delete their data before the expiration if they wish, or they may, for example, have rewritten this data before the expiration.

Thanks, I got the same errors in this version.
Do I understand you correctly that it will be cleaned up over time, and thus the WARN is not an issue?
If so, I would think that filling the log files with these not-really-errors is a bit of overkill?

Yes, it should clean them up, and these WARNs are not an issue in this particular case.
The code is shared, so it should print a WARN if a piece is missing, because that is normally not expected.

The downside is that the log gets flooded with these messages at a rate of thousands per minute, forcing you to set that process to Error. That prevents you from seeing the rest of the WARN events and makes you resort to grepping through a log of enormous size.

You might want to set your log level in config.yaml to error or fatal:

log.level: fatal
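
If you would rather keep general WARNs and only silence the noisy subsystems, here is a sketch of config.yaml entries, assuming the --log.custom-level flag shown later in this thread maps to the same key in config.yaml (verify against your own config file):

# keep the general level at info, silence only the chatty subsystems
log.level: info
log.custom-level: piecestore=FATAL,collector=FATAL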

Or use logrotate, etc.
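
For example, a minimal logrotate sketch; the path /mnt/storj/node.log, the schedule, and the retention below are only illustrative, so adjust them to your setup:

# rotate the node log daily, keep a week of compressed history
/mnt/storj/node.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}

copytruncate is used here because the node keeps its log file open; with it, rotation does not require restarting the node.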

Yep, I got these too after updating to version 107.
I will use these docker run params:

--name storagenode storjlabs/storagenode:latest \
--log.level=info \
--log.custom-level=piecestore=FATAL,collector=FATAL \

The collector itself doesn’t spam too much though, in my opinion:

2024-07-07T10:38:02Z    INFO    collector       collect {"Process": "storagenode", "count": 1}
2024-07-07T13:39:47Z    INFO    collector       collect {"Process": "storagenode", "count": 2992}
2024-07-07T14:40:42Z    INFO    collector       collect {"Process": "storagenode", "count": 5973}
2024-07-07T16:38:02Z    INFO    collector       collect {"Process": "storagenode", "count": 1}
2024-07-07T21:38:02Z    INFO    collector       collect {"Process": "storagenode", "count": 1}

Ooh this seems like a really good idea to reduce the piecestore log spam… how do I translate it to a docker compose file?

Search the thread about custom log level.
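
For reference, a rough docker-compose sketch with the same flags; the image tag and the rest of the service definition are placeholders, so keep your existing environment, ports, and volume mounts and only add the command entries:

services:
  storagenode:
    image: storjlabs/storagenode:latest
    # ... your existing environment, ports and volume mounts go here ...
    command:
      - "--log.level=info"
      - "--log.custom-level=piecestore=FATAL,collector=FATAL"

Anything placed after the image name in docker run becomes the container command, so in compose the same extra storagenode flags go under command.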

Version 107 gives millions of “WARN … collector …” lines, and there were some “ERROR … collector …” entries too. Having read that both kinds of entries are nothing to worry about, I chose FATAL to get rid of them.
Maybe you can use the ERROR level instead, in case something useful comes up. It’s up to you.

It’s not related to this version, unfortunately.

But yes, you can always reduce the log level, either via the option or dynamically.

Can’t we just change this behaviour? For example, when deleting via the bloom filter, also perform a batch delete in the TTL database every N deleted files. It’s actually quite annoying to get flooded with those warnings on every collector run.

On all my nodes I have these errors:

2024-07-30T21:36:29+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: RARGNRBJPWRWHHWST43ZYNEMXI6VBRECQWBR77J2P5YDEFNNTMTQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-30T21:36:30+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: UMH5FAWBIHRQBYHZ2LXC4BPG3HL3GSIVIJVMW7JRVB5P7WW5GBRQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-30T21:36:32+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: ADTHC2MYOIFT2KMXJM76D5V7HFMY2NIORFOR6AICXBRZHRD76F3Q, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-30T21:36:32+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: HBL5LCD36JBRPQNYXVWC6CWQWNLTEVWPWQCV4BJYE3EWKKEGGHCA, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-30T21:36:34+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: VFHUSMHX64BZVYJYTFW7I7AYCWJKMLG7QTRJSRKPTWEYRJVSLIPA, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}

This doesn’t look normal to me.

Sorry, I just read this post

and now everything is clear.

Unfortunately not; otherwise you would get a “database is locked” error, because a TTL collector could be running at the same time.
I believe it could be possible only for the non-lazy filewalker, because then there would be a single process, but I’m not sure. We have had “database is locked” issues even when there was no lazy mode at all.

Release 1.114.6 fixed the problem with ingress.

But the node keeps generating:
2024-10-09T07:58:19+03:00 WARN collector unable to delete piece {“Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “Piece ID”: “PBPYA36XWUCOVSAOEO37MBSW2JEEQP5L2OBMFGZSBZOO2UYJ67ZQ”, “error”: “pieces error: filestore error: file does not exist”, “errorVerbose”: “pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:126\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:350\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:329\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:375\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:111\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:68\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

Several GB in 2 minutes; the OS runs out of space very fast. Some log files grow to 80 GB and then the node dies.

I would start a new thread to get better visibility, or maybe even a GitHub issue. Or even a PR, since it’s a one-line change.

I thought this issue had already been fixed (or maybe a fix was only suggested, I’m not sure), but this should not be an error. It’s probably a warning at best; most likely just info would suffice. Not finding a file that was previously deleted is a pretty normal occurrence.

I think the main problem is not this line itself, but rather why it generates gigabytes of these messages; the biggest log reached 80 GB.