Is the repair worker working correctly? I'm getting lost segments on US1.

I have an endpoint which polls and pipes the info through elastic stack.

It only polls 2 times a day, and it’s just started alerting on…

`"storage_remote_segments_lost": 54` for US1

Not the end of the world, but I wanted to check whether this is a test / simulation and something is broken in the API server? I'm assuming you're forcing a failure, as lots of hosted nodes have been up and down?

As no one else has reported it, it's probably something broken on my end :thinking:

Edit: as another post has been linked to this one, just to be clear, I'm expecting the issue to be in the repair workers. NO SEGMENTS have been lost from my view. The JSON variable name is misleading: "segments_lost" should really be "segments_we_gave_up_looking_for". I don't expect any files to have actually been lost; the repair worker just needs to be stopped from giving up so quickly, which, given the millions of DB rows to go through, is going to take some time to debug…
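For anyone wanting to replicate the twice-daily check described above, here is a minimal sketch of the alerting logic. The payload shape and helper names are my own assumptions; only the `storage_remote_segments_lost` field comes from the actual alert. The real setup pipes through the Elastic stack, but the core decision is just "did the counter go above zero?":

```python
import json

def lost_segments(payload: str) -> int:
    """Extract the lost-segment counter from a satellite stats JSON payload.

    The field name matches the alert in this thread; the surrounding
    payload structure is a guess.
    """
    stats = json.loads(payload)
    return int(stats.get("storage_remote_segments_lost", 0))

def should_alert(payload: str, threshold: int = 0) -> bool:
    """Fire an alert when the counter exceeds the threshold (0 = any loss)."""
    return lost_segments(payload) > threshold

# Example payload mirroring the value from the alert above:
sample = '{"storage_remote_segments_lost": 54}'
print(lost_segments(sample))  # 54
print(should_alert(sample))   # True
```

Given the "segments_we_gave_up_looking_for" caveat above, a threshold slightly above zero (or alerting only on increases between polls) may cut noise if the counter turns out to reflect repair-worker timeouts rather than real loss.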




We are investigating the issue. We will post an update when we have more info.




I fixed my nodes after a power failure, and I see zero lost segments in Grafana now. If it was my fault, sorry!

If some segments' integrity relies on a single SNO, then we have a problem…


For the record, I was joking. My nodes did indeed have a power failure a few days ago, which I could only resolve yesterday, but I do not believe that was enough to trigger a real lost segment, not with the redundancy that Storj provides.