ap1.storj.io having issues?

Last night I was suspended by ap1.storj.io with a suspension score of 54%… all other satellites remain at 100%, leading me to believe there may be something else going on.

A few months back several operators were “suspended” by a few satellites for just a few hours because there was something else going on…
Is “something else” going on again that would cause me to be suspended? I don’t see any egregious repair traffic or anything else from ap1.storj.io that looks alarming to me.
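For reference, a quick way to pull the per-satellite scores from the command line rather than the web dashboard is to query the node’s dashboard API (this assumes the default port 14002 and that jq is installed; the endpoint and field names are what my version exposes and may differ on yours):

# list each satellite with its audit, suspension and online scores
curl -s http://localhost:14002/api/sno/satellites | jq '.audits[] | {satelliteName, auditScore, suspensionScore, onlineScore}'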

Thanks

Thanks, I’ve been suspended for a few hours on ap1… I chimed in on your thread as well. I’m only seeing pieces getting downloaded, nothing repaired, nothing erroring, etc… my logs look a lot like yours…

This is just frustrating because we spend hours and dollars making sure our nodes remain online, and getting an out-of-the-blue “you are suspended” email/notification with no cause is just infuriating…


Same here. First time ever I’ve been suspended by a satellite. At the time of writing I’m still suspended at 49%.

Not sure what happened

I also noticed that since 10 AM EST my download bandwidth has tripled compared to usual.

Jumping in as another operator having suspension issues on ap1.storj.io. When I checked my nodes this morning I found one with both ap1 and europe-north suspended, one with ap1 suspended, and one that was fine. At the same time I noticed 1.52.2 was now available for 32-bit ARM and updated to that. My first node is no longer showing as suspended (99.27% and 100% suspension scores on ap1 and europe-north), and my third still shows suspended on ap1 at 58.46%. Unfortunately the logs only go back about 4 hours to when I updated the Docker image, and I’ve got nothing in the problem node’s logs for:
docker logs storagenode3 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed

EDIT: aaaaaaaand after posting I refreshed again; my “problem” node is no longer showing as suspended and is back at 100%.
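For anyone else digging through logs while they still have them: besides grepping for “failed”, a broader filter that shows any GET_AUDIT or GET_REPAIR line other than the normal start/success messages can surface odd entries too (the exact log wording may vary between storagenode versions):

# show audit/repair lines that are neither "download started" nor "downloaded"
docker logs storagenode3 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep -v -e "download started" -e "downloaded"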

Of course: you re-created the container, so the old logs were deleted along with the old container; the fresh one doesn’t contain the past failures.

Yeah, that was the unfortunate part I mentioned: when I saw the new Docker image was available I jumped at updating, as I thought it was possibly related.
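If it helps for next time: logs can be made to survive a container re-creation by redirecting them to a file on the mounted storage instead of the container’s stdout. If I remember the docs right, it’s a single line in config.yaml (the /app/config path assumes the standard storage mount), followed by a container restart:

# in config.yaml under the directory mounted to /app/config, then restart the container
log.output: "/app/config/node.log"

docker logs will then show nothing new, and the same checks run against the file on the host, e.g. grep -E "GET_AUDIT|GET_REPAIR" node.log | grep failed.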

So the sudden increase in (temporary) suspensions on the ap1 satellite was due to that satellite now having an increased repair threshold and nodes not being able to keep up, resulting in suspension strikes on the nodes until they could get back in sync?

It could be not only AP1; it could be any satellite.