Audit failing on eu1

Now the audit is also failing on eu1

doing some scans of my logs right now for
v0pieceinfodb

which seems to be one of the more unique pieces of text in the log lines this new audit issue creates.

the example above is from eu1

should have hits from a couple more sats, not exactly sure which right now…
using my laptop to search and it’s only got 100mbit ethernet… tsk tsk
i should really stop searching like this lol… takes forever
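for reference, this is roughly what i'm running to dig through the logs (the log path and the container name are just examples, adjust to your own setup):

```
# node logging to a file on disk (path is just an example)
grep "v0pieceinfodb" /mnt/storagenode/node.log

# or for a docker node that logs to the container (assuming the default name storagenode)
docker logs storagenode 2>&1 | grep "v0pieceinfodb"
```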

same issue as on ap1… tho quite rare

2021-07-20T14:55:35.912Z ERROR piecedeleter could not send delete piece to trash {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SZSXFWI3I2Z6YKFKO22HGOH6XG7NT26YPKSNE4DH4TWXCM26WZCA", "error": "pieces error: v0pieceinfodb: sql: no rows in result set", "errorVerbose": "pieces error: v0pieceinfodb: sql: no rows in result set\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Get:131\n\tstorj.io/storj/storagenode/pieces.(*Store).MigrateV0ToV1:404\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:348\n\tstorj.io/storj/storagenode/pieces.(*Deleter).deleteOrTrash:185\n\tstorj.io/storj/storagenode/pieces.(*Deleter).work:135\n\tstorj.io/storj/storagenode/pieces.(*Deleter).Run.func1:72\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

i checked the limited existing logs i've got… this is the only hit on this piece, so i can't provide any more information on it.

noticed this is on the node where i lost a couple of files because i ran rsync -aP --delete after the node had already been running for a minute… i was redoing the rsync to fix a bandwidth.db issue after migration, and should have used rsync -aP instead so the few newly written files would have been retained.
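roughly the difference, with placeholder paths, in case anyone wonders what i mean:

```
# what i ran when redoing the sync to fix the corrupt bandwidth.db
# --delete removes anything at the destination that no longer exists at the source,
# so the few pieces the node wrote during that minute of uptime got wiped
rsync -aP --delete /old/storagenode/ /new/storagenode/

# what i should have run instead, which would have left the newly written pieces alone
rsync -aP /old/storagenode/ /new/storagenode/
```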

so this might not be relevant for the network / sat audit issues… it could just come down to me being asleep at the wheel…