Hello… a question: is it normal for the audit % to be dropping?
On this satellite
118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW
but on all the other satellites the audit is at 100%…
There is a bug where a satellite deletes a piece and then later tries to audit it.
The devs know about it and are fixing it. I just hope that nobody gets disqualified because of this.
It's not a bug, it's a critical problem…
Thanks for the info, Pentium100.
It’s a bug… We’re still in BETA.
And if a node gets disqualified because of it, you can always write to the support and they’ll resolve it.
Yeah, they did not do mass deletions before, they just wiped the network. Now, with mass deletions, it turns out that something does not always get updated when a piece is deleted, so the satellite either tries to delete the piece again (which produces an error, but does not affect reputation) or tries to audit it.
I got DQ'd on one satellite because of it. Awaiting support to resolve / unpause. It's interesting that it was a very inactive satellite: with so few audits overall, a 'few' failures results in a high percentage of failed audits, which caused it to fail sooner. I've been chugging along on 118 just fine for 4 months since the other satellite DQ'd me.
For me, 2 satellites fill up the HDD; the other 2 are at 100% audit but 0 usage.
That's why I'm worried about that satellite… I'm losing audit… and usage…
Therefore, from time to time I write the storagenode log to a file so that I have proof for a complaint…
e.g.
docker logs storagenode >& /path/to/file/2409full.txt
I recommend this method instead.
https://documentation.storj.io/resources/frequently-asked-questions#how-do-i-redirect-my-logs-to-a-file
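For reference, that approach amounts to pointing the node's logger at a file on disk instead of stdout. A minimal sketch, assuming the `log.output` option in the node's `config.yaml` and the default `/app/config` mount (check the linked FAQ for the exact setting on your setup):

```yaml
# config.yaml — stop the node before editing, restart afterwards.
# Assumes the container's /app/config is mapped to a host directory,
# so the log survives container recreation.
log.output: "/app/config/node.log"
```

Unlike `docker logs`, which re-dumps the whole buffer each run and is lost when the container is recreated (e.g. on an update), this keeps the log continuously on the host disk.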