V58 db migrate fail - piece_expiration.db - 1.108.3

I have the same problem as @nerdatwork. I managed to update to version v1.108.3 without any problems, but now the log is growing at a very high rate, with thousands of entries containing the same warning:

2024-07-26T15:47:48+02:00 WARN collector unable to delete piece {Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Piece ID: EUCLKICTPFIIKQH6EGD2SFWGS3LFBNFGFX3JXEA3R7SS4VMJ3MHA, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-26T15:47:48+02:00 WARN collector unable to delete piece {Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Piece ID: MMD3AZ4P6CDTYG3J2REE3J5IFUQASRT3KPYM737JYGYOPPOWAAAA, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-26T15:47:48+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: KLKMB6IBJ3GS2VKNRTSXTMG3A5JN5ELBBV2D37DXX53FZZIXLXGQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-26T15:47:48+02:00 WARN collector unable to delete piece {Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Piece ID: 7FQKCYKMDKAAY62FRPUR2LMZ5FWCK77BHWVB2MZDQFRDUQVBRWWQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}

Should we worry? Or will it be solved once the filewalker has finished? (I’m scared of how many gigabytes the log will take up.)

I counted the number of "file does not exist" entries, and it has exceeded 2.1 million in a 2.8 GB log file.
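A quick way to get such a count, in case anyone wants to check their own log (a rough sketch; adjust the log path to wherever your node writes its log):

grep -c 'unable to delete piece' /path/to/storagenode.log
grep 'unable to delete piece' /path/to/storagenode.log | grep -c 'file does not exist'

The first command counts all collector delete warnings; the second counts only the "file does not exist" variant.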

It looks like they have made a fix for one of those issues; it just needs to be released: storagenode/{collector,pieces}: batch up collection of expired pieces · storj/storj@b8c9925 · GitHub


I went to look at the youngest node I have (1 month old). It also updated to version v1.108.3 and did the same thing, filling the log quite a bit with the same warning, but it finally finished with this entry:

2024-07-26T16:12:56+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: EHR5D5TFX6GDABRSLB4XTMYR5DQ6ZI4H7JBB7ZLDP3Z4UP3FV3PA, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-26T16:12:56+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: YP5OQRFTOKSI6LF4OLX5V3ESKKFTWCR3NV6SNJMGWQ3T7GHTEHHA, error: pieces error: filestore error: file does not exist, errorVerbose: pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78}
2024-07-26T16:12:56+02:00 WARN collector unable to delete piece {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Piece ID: BK2UOKRD363RH35CD3VYWFITEMHBCYFOCZVFKVHH7WQEQSGKZDCQ, error: pieces error: filestore error: file does not exist, errorVerbose: pieces er

2024-07-26T16:13:54+02:00 INFO collector collect {"count": 5029}

The previous node is 40 months old; I think it will take a long time to finish… I’m going to stop the node and change the log level to "error".


You can set a custom log level too.
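A hedged example of what that can look like (assuming your node version supports the log.custom-level option and that "collector" is the logger name you want to quiet, as in the warnings above; ERROR hides WARN and below for that component only):

log.custom-level: collector=ERROR

in config.yaml, or the equivalent run-time flag --log.custom-level=collector=ERROR, followed by a node restart.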


It got fixed on its own? Fffff, so what should I do? Shall I restore the backup piece_expiration.db to the v58 error state and let it loop until it fixes itself? No, it won't recover from a malformed database by itself!

My database was stuck at v59 and it showed "malformed". I fixed it using the guide, but the node still wouldn't work. I then edited the DB for v59 and the migration went through smoothly. Now I am stuck with "file not found" log lines. I have set a custom log level for the collector as a workaround.

You have to fix the malformed database first.


You may ignore them; these pieces seem to have already been collected by a garbage collection and moved to the trash.

Yes, as the commit correctly suggests, it's going through a loop. My main concern was the log growing at 150 MB per hour even though I had stopped ingress.

Edit:
Just a glimpse of the nightmare:

60 lines of entries created in ONE SECOND.

(Since there is a 32k character limit per post, I can't paste more.)

You can limit the log on the fly via the debug port.
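For example (a hypothetical sketch only: it assumes debug.addr is pinned in config.yaml, e.g. to 127.0.0.1:5999, and that the debug server exposes zap's standard level handler at /logging; your version may differ, so treat the path and payload as assumptions):

curl -s -X PUT -d '{"level":"error"}' http://127.0.0.1:5999/logging
curl -s http://127.0.0.1:5999/logging

The second call just reads the current level back.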

[Requesting help]
How did you fix that malformed database?
The official Storj guide did not help; it produced a different error, "migrate: v26: duplicate column name: trash\n\tstorj.io/(…)", as my original post says.
What program did you edit that .db in?

Either the guide needs an update, or I don't know what; I'm willing to help figure it out if you are willing to help me, Storj!

Do you want to see this .db? Shall I upload it?
@Alexey @bre

Edit:
I opened it with DB Browser for SQLite (Downloads - DB Browser for SQLite)
and I see the original "piece_expiration.db" (~500 MB) has 2 tables,
while after the official Storj repair guide it has 1 table (and is ~154 MB).
The "versions" table is missing; I'm not sure if that's intended.

Yes, please. I will do my best. Just for the record, the worst case is that you may have to create a new piece_expiration.db if I am unable to fix it :crossed_fingers:

Below is what the log showed when I was able to fix the DB.

2024-07-25T13:56:06Z	INFO	db.migration.26	Add Trash column to pieceExpirationDB	{"Process": "storagenode"}
2024-07-25T13:56:12Z	INFO	db.migration.58	Remove unused trash column	{"Process": "storagenode"}
2024-07-25T13:57:27Z	INFO	db.migration.59	Remove unused deletion_failed_at column	{"Process": "storagenode"}
2024-07-25T13:58:28Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "storage2.piece-scan-on-startup"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "storage.allocated-bandwidth"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "storage2.database-dir"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "kademlia.operator.email"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "storage2.orders.sender-dial-timeout"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "server.private-address"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "storage.allocated-disk-space"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "pieces.enable-lazy-filewalker"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "kademlia.external-address"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "kademlia.operator.wallet"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "retain.concurrrency"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "storage2.trust.exclusions"}
2024-07-25T13:58:28Z	INFO	Invalid configuration file key	{"Process": "storagenode", "Key": "server.address"}
2024-07-25T13:58:28Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-07-25T13:58:28Z	INFO	Identity loaded.	{"Process": "storagenode", "Node ID": "*****"}
2024-07-25T13:59:08Z	INFO	db.migration.60	Overhaul piece_expirations	{"Process": "storagenode"}
2024-07-25T14:02:24Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 60}

Edit:

DM me the link.

This means that the versions table was not dumped into SQL; it seems the source database is more damaged than usual.
You may try to create the versions table:

PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE versions (version int, commited_at text);
INSERT INTO versions VALUES(26,'2021-09-24 21:03:01.8476975 +0000 UTC m=+6.957140901');
COMMIT;
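If it helps, a minimal way to apply and verify this (a sketch; it assumes the node is stopped, the sqlite3 CLI is installed, you run it in the database directory, and you make a copy of the file first):

sqlite3 piece_expiration.db <<'SQL'
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE versions (version int, commited_at text);
INSERT INTO versions VALUES(26,'2021-09-24 21:03:01.8476975 +0000 UTC m=+6.957140901');
COMMIT;
SQL
sqlite3 piece_expiration.db ".tables"
sqlite3 piece_expiration.db "SELECT * FROM versions;"

The last two commands should list the versions table next to the existing one and return the single row inserted above.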

Current ongoing issue (maybe a way to reproduce the error):

Do a proper "docker stop" to stop the node (see the note on the stop timeout at the end of this post) → the database seems to be successfully closed, with no error (no -shm or -wal files in the database folder).

sqlite3 piece_expiration.db "PRAGMA integrity_check;"

reports no error either.

Then "docker rm" the node and launch it again with the same command line as usual (the one I have always used, which worked fine until now).

Then I get the database error in an infinite loop in the log file, and the node never finishes restarting.
I have to put in a new empty piece_expiration.db each time.

(Tested on 3 different nodes that had an uptime of around 56 hours.)
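As an aside, a "proper" docker stop for a storage node usually means a generous stop timeout, so Docker does not SIGKILL the process while the databases are still being written. A hedged example, assuming the container is named storagenode:

docker stop -t 300 storagenode

With the default 10-second timeout the container can be killed before SQLite has finished flushing, which can leave -wal/-shm files or a corrupted database behind.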

This is not the node itself; Windows does this when the storagenode is installed as a Windows service (aka the Windows GUI install).
By default, Windows periodically tries to restart a service with the startup type "auto" if it detects that the service is not running.

A rather annoying thing, but easily fixable.

For the duration of long maintenance (such as database repairs, for example) it is necessary not only to stop the service, but also to temporarily switch the startup type from "auto" to "demand" or "disabled".
At the end of the maintenance, switch it back to "auto" or "delayed auto". This can be done either through the GUI control panel or from the command line (elevated / run as Administrator).
For a complete storagenode shutdown:

sc config storagenode start= disabled
sc stop storagenode
sc config storagenode-updater start= disabled
sc stop storagenode-updater

Restoration after maintenance:

sc config storagenode start= delayed-auto
sc start storagenode
sc config storagenode-updater start= delayed-auto
sc start storagenode-updater
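You can check the current START_TYPE afterwards with:

sc qc storagenode
sc qc storagenode-updater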

Yes, this in itself does not cause problems, but it may be a SIGN of some serious problems elsewhere.

I also observe a very large number of such records on a node running 1.108.3.
And there are two alarming facts here:
1 - The TTL collector is working normally on this node (it has successfully processed all records with an expiration date up to one hour before the current moment). So these can't be pieces that the garbage collector already deleted simply because the TTL collector didn't have time to get to them.

2 - These records are not for SLC (where there was a known problem with overwriting TTL data); in my case they are all for the US1 satellite:

2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "G3JZRTIJAPEHJ67GI6RFHTWJTRT5FWTC7NTVDZGJXANP7ZN542AA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "HMUVP7STDB5HBI4YSGGWFQAOHXDA3ZHFJKZDHSGQJEPCHGQ3H4BQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "GREEHXIWVIPJ6GHCGNQH7LGWJV7M2WOU4CYCULJHKB7H3PYUVCQA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "MYURKVYPOOPYEPKQSSMD5MCWVF45PSVFWYX4NAXVMZUSYIBAFC6A", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "F36OWUULCZENSFMCPG2YSPBWAPNZFGM5X5D7H5P5VNWNWRWBVRAA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "PBOF7BMEEJ227KBHHO6D2YJQCIPPXJ54TSTT2I6K5Q4LRWR2A2HA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "G7V4ZV3IF7DQOGRJZ3DU4B3V7OGTQBTU3FCMRSLM7Y62PXE7YO6Q", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "GHM3VMG4WJT3SL3IDE7HACR5YQGO3WOA7XNSB4TCXLEQNEVHL2BA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "KDUQ6ZN6E5RHIIP4CCCJP63CX7V6F5OAWYNE76HE33RRH2XWOEOA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "AI5DLVBHNQD2G64ZVW55YIMPCQA3HUNFY7ZPQ3ISLV2GRJZPWYUA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "DCZNHDTT3CDYYLN4HJXO6IJPWL2MYVAQXIFYK6IKZKCV5P7K37WQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "DYMSJJS3WU46ATITHBIP4XLK54AE4BQ5BOL7545JDVN3TAHTNIPQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "O5AQST7VHYVGZS6F3YJO4WPUEYEVCYEFYQJEYXDS4PVTTIGQIUKQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "HG5CKOD4X46FRMHB7HURXU4P4A33BVCHBQFEN46EF5HZHLME7DRQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "IXS3Q6PQH2STTZZE3WQN5DME2P5Z3A3DMZTFFEQUQ6IV4YG5W5BQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "K6YPXS5O7FRRXKMHB46J7M6FEZRWOUAO74CEM5OUES34NDZOLIXQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "BGRU334U2CV3XJYYWAGVUVKS3OO3UF5TGOY22KYLK2LSFGURQIBA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "ARDKLODC6HDDW7NDGYEQ44555YG3ONPOENE4FBM64XPVRETHIWFQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "OZ5JFDOVKXJXWKKS3JFPOVBUO3SV4PCUVJKZIM3FJ3NUV6OUXCYA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "DVPZZGUCYVEK2HK4LWK3ISFT4D2UXV6CGW7BDCCIZROUI6SA3SUQ", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "K6ACVFVDC5E3NI7D5745CVNQLGXVT43J4FBZUTPJ6GEKL2PL5JMA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-07-28T05:56:31+03:00	WARN	collector	unable to delete piece	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "PUQXNKTGW2FP4JLDASNZYNZ3N62I42GM6XNNEPF2PP4IMXG4CD5A", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:122\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:290\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).DeleteWithStorageFormat:270\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteSkipV0:356\n\tstorj.io/storj/storagenode/collector.(*Service).Collect.func1:88\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:71\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:87\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/storj/storagenode/collector.(*Service).Run:61\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:44\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
--cut-- (up to around a thousand per collector run, every hour)

3 - Of course, I did not delete any files from /blobs/ manually. And the node still has a 100% audit score, which confirms there is no data corruption at the disk / file system level.

So it's completely unclear where else thousands of collector errors, with attempts to delete non-existent files, can come from.
There may be another bug that appeared somewhere between storagenode versions 1.106 and 1.108.3, because all of this started only after updating this node from version 1.105.4 to version 1.108.3.

Did you take a look at the comment in this commit?

If the garbage collector has moved the piece to the trash, it will not be found by the collector.
It's pretty easy to check: search for the piece ID in the trash (remove the first 2 characters; they are the prefix subfolder name):

find /mnt/storj/storagenode/storage/trash -type f -iname JZRTIJAPEHJ67GI6RFHTWJTRT5FWTC7NTVDZGJXANP7ZN542AA.sj1

You may also limit the search to the trash subfolder of each satellite (the folder names are listed here: Satellite info (Address, ID, Blobs folder, Hex)).

But likely @nerdatwork is correct and the collector didn’t remove the info from the database if it was interrupted during the process. With the batching it should delete it now.


Yes, I even participated in the debugging and testing that led to this patch.
But that is a different issue, which is observed ONLY when the TTL collector cannot cope with its work and begins to fall behind schedule (lag behind) on larger nodes with a LOT (a million or more) of already-expired pieces in the DB.

And I specifically noted in the first paragraph that this does not apply to this particular node - the TTL collector on it works on schedule and successfully completes its work every hour, as it should:

2024-07-27T20:57:06+03:00	INFO	collector	collect	{"count": 1943}
2024-07-27T21:57:08+03:00	INFO	collector	collect	{"count": 2126}
2024-07-27T22:56:49+03:00	INFO	collector	collect	{"count": 1339}
2024-07-27T23:56:54+03:00	INFO	collector	collect	{"count": 1218}
2024-07-28T00:56:41+03:00	INFO	collector	collect	{"count": 712}
2024-07-28T01:56:40+03:00	INFO	collector	collect	{"count": 770}
2024-07-28T02:56:33+03:00	INFO	collector	collect	{"count": 10}
2024-07-28T03:56:31+03:00	INFO	collector	collect	{"count": 7}
2024-07-28T04:56:30+03:00	INFO	collector	collect	{"count": 7}
2024-07-28T05:56:34+03:00	INFO	collector	collect	{"count": 661}
2024-07-28T06:56:39+03:00	INFO	collector	collect	{"count": 1113}
2024-07-28T07:57:03+03:00	INFO	collector	collect	{"count": 1128}

Probably because this is a small node (2.5 TB / 11M files) and because it did a graceful exit (GE) from SLC last year. So the number of pieces with TTL data is small (about 200k records in piece_expiration.db in total) and processing them does not cause any performance problems.
Also, there have been no restarts in the last few days, nor any other interruptions.

But it still throws thousands of warnings on repeated attempts to delete non-existent files.
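For reference, the total number of TTL records can be checked directly (a sketch, assuming the usual piece_expirations table name; stop the node first or work on a copy of the file):

sqlite3 piece_expiration.db "SELECT count(*) FROM piece_expirations;"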

Yes, and now it should delete these records from the TTL database so that it does not try again on the next run.
But I would still suggest confirming the guess that GC moved them to the trash.
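If it helps, here is a rough way to spot-check a batch of the logged pieces at once (a sketch only: it assumes bash, the JSON-style log lines shown above, and the storage path from the earlier find example; adjust both paths to your setup):

grep 'unable to delete piece' /path/to/node.log \
  | grep -oE '"Piece ID": "[A-Z2-7]+"' | cut -d'"' -f4 | sort -u | head -n 20 \
  | while read -r id; do
      # the first 2 characters of the piece ID are the prefix subfolder, the rest is the file name
      if find /mnt/storj/storagenode/storage/trash -type f -iname "${id:2}.sj1" | grep -q .; then
        echo "$id: found in trash"
      else
        echo "$id: NOT in trash"
      fi
    done

If most of the sampled IDs show up in the trash, the GC explanation is confirmed; if none do, something else removed the files.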