My node changed after the update to 1.67.3: satellites show a red flag but 60+ online score


I had issues with 1.66.1 showing the node as misconfigured, so I updated today and the satellites now show this. Any suggestions?

Hello @Taconode,
Welcome to the forum!

Yes, we mistakenly re-used the color code for the danger levels of the audit score (where 95% means 5% of the data is lost and your node is disqualified) for the other scores, which do not follow the same levels (they remain at 60% for suspension and 0% for disqualification).

You just need to keep your node online; to fully recover, it should stay online for the next 30 days.
Please note that each new downtime will require an additional 30 days online to recover.
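
If you want to watch the scores while they recover, here is a minimal sketch in Go, assuming the default web dashboard address 127.0.0.1:14002 and its /api/sno/satellites endpoint; it only fetches and pretty-prints the per-satellite JSON (which contains the audit, suspension and online scores) rather than assuming exact field names:

```go
// Sketch: poll the local storagenode dashboard API and pretty-print the
// per-satellite payload so the audit/suspension/online scores can be
// watched while the node recovers. Assumes the default dashboard address
// 127.0.0.1:14002 and the /api/sno/satellites endpoint.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:14002/api/sno/satellites")
	if err != nil {
		log.Fatalf("dashboard not reachable: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read response: %v", err)
	}

	// Pretty-print whatever JSON the dashboard returns instead of
	// assuming exact field names.
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, body, "", "  "); err != nil {
		log.Fatalf("response is not JSON: %v", err)
	}
	fmt.Println(pretty.String())
}
```

You can run it now and then, or simply refresh the dashboard in a browser; the online score should climb back as the 30-day window rolls forward.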


The suspension score on the nearest satellite is going down. This is the log; I see an error but don't know how to resolve it. Any ideas?
2022-12-02T02:33:58.896-0600 ERROR piecestore:cache error getting current used space: {"error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
2022-12-02T02:33:58.920-0600 ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:153\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2022-12-02T02:33:58.920-0600 ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:153\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2022-12-02T02:33:58.920-0600 ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:153\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2022-12-02T02:33:58.920-0600 ERROR pieces:trash emptying trash failed {"error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:153\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:377\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1:51\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2022-12-02T02:34:04.958-0600 INFO Configuration loaded {"Location": "C:\Program Files\Storj\Storage Node\config.yaml"}
2022-12-02T02:34:04.960-0600 INFO Anonymized tracing enabled
2022-12-02T02:34:04.961-0600 WARN Operator email address isn't specified.

Your disk is unable to keep up. Is it SMR?
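
To see how often the node actually hits those errors, here is a minimal sketch that tallies ERROR lines per subsystem (the third column in the log, e.g. pieces:trash or piecestore:cache). The log path below is an assumption for a default Windows GUI install; adjust it to wherever your node writes its log:

```go
// Sketch: count ERROR lines per subsystem in the storagenode log to see
// how often the cache/trash failures occur. The log path is an assumed
// default for the Windows GUI install; change it to match your setup.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const logPath = `C:\Program Files\Storj\Storage Node\storagenode.log` // assumed default location

	f, err := os.Open(logPath)
	if err != nil {
		log.Fatalf("open log: %v", err)
	}
	defer f.Close()

	counts := map[string]int{}
	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long JSON lines
	for scanner.Scan() {
		// Expected layout: timestamp, level, subsystem, message, fields.
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[1] == "ERROR" {
			counts[fields[2]]++
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("scan log: %v", err)
	}

	for subsystem, n := range counts {
		fmt.Printf("%-30s %d\n", subsystem, n)
	}
}
```

A steadily growing count of context-canceled errors from the cache and trash chores usually points to the disk not keeping up with the load.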


It's a 4 TB Barracuda ST4000DM004-2CV104, and it is SMR.

Then that's the reason. SMR disks are known to be slow, and they are not recommended for a storagenode.
I would suggest replacing it with an internal CMR disk if possible. For an external disk you need an additional external power supply, otherwise it will shut itself down from time to time due to lack of power or overheating of the USB controller.

There is no good solution for SMR disks so far, except to run a second node with its own disk in the same /24 subnet of public IPs; the nodes will share ingress, and your SMR disk will have room to breathe.
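
To illustrate the /24 rule: the satellites treat every node whose public IP falls into the same /24 network as one location and split ingress between them, so a second node on its own (CMR) disk behind the same IP takes load off the SMR disk. A small sketch with made-up example addresses:

```go
// Sketch: illustrate the /24 grouping used for ingress. Two nodes whose
// public IPs fall into the same /24 network share the ingress of one
// location. The example IPs are hypothetical.
package main

import (
	"fmt"
	"net"
)

// subnet24 masks an IPv4 address down to its /24 network.
func subnet24(ip string) string {
	parsed := net.ParseIP(ip).To4()
	if parsed == nil {
		return ""
	}
	return parsed.Mask(net.CIDRMask(24, 32)).String() + "/24"
}

func main() {
	nodeA := "203.0.113.10" // hypothetical public IP of the SMR node
	nodeB := "203.0.113.77" // hypothetical public IP of the second node

	fmt.Println("node A subnet:", subnet24(nodeA))
	fmt.Println("node B subnet:", subnet24(nodeB))
	fmt.Println("share ingress:", subnet24(nodeA) == subnet24(nodeB))
}
```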


OK, understood. Thanks for the help. I will try to keep the node running while I get a CMR disk.