Your node has been disqualified - 10TB Node

@littleskunk what can i do ? :S

If your node is disqualified on all satellites - then the only option is to start from scratch, with a new identity, a new authorization token, and clean storage.
If it is not disqualified on all satellites yet - you can decide to continue running this node and receive payments from the remaining satellites.

Have you figured out, why your node lost data?

@Alexey

Yes, it was because the node did not shut down correctly (even with -t 300), and then when it reconnected it gave DB errors, but a couple of days passed before I realized it.

Then I repaired the DB and everything started to work fine, and some satellites were unsuspended, but soon they suspended me again, and I don’t know why.

Nor does it seem fair to suspend a node within a few hours of an email saying something is wrong.

Do you know if I lost the Total Held Amount ($303)? :frowning:

Thanks!


And they reactivated the node and then suspended me again at the same time (same day)? It doesn’t make sense.


Disqualification can be issued only if your node failed too many audits and its audit score falls below 0.6.
An audit can fail if your node is online and answers audit requests but is not able to provide the requested piece for any reason - either the piece is lost, or your node can’t read it. Your node will be asked three more times for the same piece. If it’s not provided, the audit is considered failed.
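You don’t have to wait for a satellite email to notice failing audits. Assuming the node’s web dashboard is running on the default port 14002, the local API exposes per-satellite statistics (the endpoint path and response fields can differ between node versions, so treat this as a sketch):

```shell
# Query the node's local dashboard API for per-satellite stats,
# including audit scores. Port 14002 is the default dashboard port;
# adjust if you mapped it differently in your docker run command.
curl -s http://localhost:14002/api/sno/satellites | python3 -m json.tool
```

If a score there is trending toward 0.6, that satellite is close to disqualifying the node.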

So, the broken DB has nothing to do with losing data. It seems your node not only broke the DB, but also managed to lose a noticeable amount of customer data.

In case of audit failures the disqualification is permanent and can’t be reinstated, so the held amount for that satellite will be used to recover the lost data.

Let’s try to save your node from disqualification on the other satellites. Please show your docker run command with personal info removed.


I do not think there are missing data and if there is, it must be minimal.

303 dollars to repair the node seems to me to be a lot.

I think in the future there should be an opportunity to pay to repair the damaged parts of a node.

It is sad to see that a node that I have from the alpha breaks down at any moment.

And now that I only have “Saltlake” (the test satellite), what will happen to the 10 TB used? Is it going to be deleted little by little?

Has Saltlake dropped the suspension? If so, then Saltlake will work like normal, so nothing would happen to that data. Unfortunately the satellite is too new for graceful exit. But I would just keep running this node anyway. Make sure it doesn’t run into issues anymore.

You can start another node if you want to also get traffic from other satellites again. Perhaps eventually you can gracefully exit the old node to get rid of the multiple nodes again. But there is very little downside to just running them both for a while. You can have new data only be sent to the new node by lowering the assigned disk space to 500GB on the old node. (I’d wait until the new node is vetted, otherwise you’ll just get a lot less data until it is.)
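Lowering the assigned space on a docker node means re-creating the container with a smaller STORAGE value. A sketch, based on the standard docker run command from the Storj docs - the wallet, email, address, and paths here are placeholders, and every other flag should match whatever your existing run command already uses:

```shell
# Stop and remove the old container (data and identity live on the host,
# so nothing is lost), then re-create it with less assigned space.
docker stop -t 300 storagenode
docker rm storagenode

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967 -p 14002:14002 \
    -e WALLET="0xYOURWALLET" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.example.com:28967" \
    -e STORAGE="500GB" \
    --mount type=bind,source=/path/to/identity,destination=/app/identity \
    --mount type=bind,source=/path/to/storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```

With STORAGE set below the space already used, the node reports itself as full and stops accepting new uploads, while still serving (and being paid for) the data it holds.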

Then eventually when the new node is starting to make a good profit you could exit the old one. This way you can have the benefit of keeping what survived, but also restarting on the other satellites.

Every storj node upgrade I receive that msg. My solution is: restart the node after receiving the update and keep it online for 2h, and after that stop the storj node and run it again - and that warning disappears.

I think he may have been asking about the data on his hard drive for satellites that he is now disqualified on. Does the node clean this up automatically or would he need to manually delete specific blobs folders to recover the disk space and continue using the node on the one remaining satellite?

Yeah, I’m also wondering about this, as well as what happens if you accidentally have extra pieces (say, from syncing) - will garbage collection eventually get those?

As long as they are in the right directory yes.


While you are here and on that topic: I’m copying my node to sort out some zfs pool configuration issues. Wasn’t there something a while back about cleaning up forgotten / deleted files that had missed garbage collection or some such thing?

I seem to remember my node using more space than it reports, so I just wanted to know if I could run something to check for that, to save ½ a TB of space when moving it around…

just wondering… not really critical

Run the GE command but do not select any satellite. GE will print out the used space per satellite. Compare that with your drive. Do you see any other folder that is using a lot of space? I don’t recommend deleting it! This is just a method to get an idea of what you are searching for; then you can follow up and find out what that folder is or was used for.

thx, I’ll try that and see if it’s anything of note.

docker exec -it storagenode /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity

The command for docker looks a bit dangerous… but hey, at least it’s copy-paste…
And it was a bit unclear how to exit it… but it seems viable enough, even though it’s a bit scary.
It kind of gave me the sense I was about to GE my node at any moment… if it’s to be used for checking stuff like this, maybe it should be a bit more user friendly…

Like: “please enter a space-delimited list of satellite domain names you would like to gracefully exit, press enter to continue”… it doesn’t really say what happens if you just press enter…

Apparently it’s nothing, but it could just as well have meant GE from all satellites, which is sort of the sense one gets at the beginning from agreeing that one is about to GE.

But my node is just 3 months old, so I doubt I could GE even if I wanted to :smiley:

my numbers look good… really good…
thanks for your help

The command is not designed to be used for that. There are other ways to get that information as well. I just picked the one that is scary but still simple to execute.


okay… consider me shell shocked from the warp speed operation… xD