Node disqualified and planned downtime

Hello everyone!

I have two questions I would like to ask:

I have had a node running for some time, and since it is almost full I decided to get a new one running. In the process I mistakenly pointed it to the same identity as the first node, and it ran for a couple of minutes before I removed the Docker container. As a result (?), within a couple of hours my first node was disqualified on one satellite. The audit score on that satellite went from 100% to 54%. How can I check exactly what caused the failing audits and the disqualification, in case it was not related to my mistake of running a second node?

My second question is about uptime. I will have to move my hardware to a new data center soon, and I fear it will take more than 5 hours. There are several posts saying that, at the moment, that won't be an issue, but I would like to ask if that's still the case.

Thank you!

Hello @lordsam and welcome to the forum.

Sorry to hear that :confused:
I'm surprised that running a node with the wrong identity for just a couple of minutes could disqualify it, though. A node configured with the wrong identity can get disqualified quickly, but not that quickly, I believe.

You should check your logs:

And search for lines that contain both “audit” (or “repair”) and “failed”.
On Linux this can be achieved like so:

docker logs YOUR_NODE_NAME 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed
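
If you just want a rough count of failed lines per category, the same approach works with grep -c, which counts matching lines (a quick sketch, still using YOUR_NODE_NAME as a placeholder for your container name):

docker logs YOUR_NODE_NAME 2>&1 | grep GET_AUDIT | grep -c failed
docker logs YOUR_NODE_NAME 2>&1 | grep GET_REPAIR | grep -c failed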

You have way more than 5 hours before getting suspended:

Thank you for your reply!

When I said a couple of minutes, I did not mean exactly two. Looking at the logs it was around 10 minutes, but I ran/deleted it several times so I cannot be sure.

Anyway, grepping the first node's logs, I don't find anything matching "audit" (or "repair") and "failed".

Is there anything more I can check? I just want to be sure this was not caused by something else I did not notice.

I would expect the second node, with the same identity but no data, to be the one failing audits. Please check that node's log if you still have it; the command below should work on it as well.
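
If that container is still around, the same check applies to it (SECOND_NODE_NAME is just a placeholder for whatever you called the second container):

docker logs SECOND_NODE_NAME 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed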

Apparently those logs are removed when the container is destroyed.

Yes, that's how Docker works unfortunately :confused:
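
For next time, one way to avoid losing them (just a sketch, assuming the usual setup where /app/config is bind-mounted from the host, and that your node version supports the log.output option) is to redirect the node log to a file inside the mounted directory by adding this to config.yaml:

log.output: "/app/config/node.log"

After restarting the container, the log is written to your disk, survives container removal, and can be grepped directly instead of going through docker logs.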