Got a disaster on 03SEP (unrecoverable data loss)


I’ve been running a Storj node on a QNAP since Nov 2021. It was running fine and held about 3 TB of data.

Unfortunately, on 03 Sep I had a disaster on my NAS and lost the complete filesystem holding the Storj data. Basically, there was a long power outage while I was sleeping, and the NAS was improperly shut down because the battery died before the auto-shutdown triggered. The NAS then got stuck in a boot loop. Once I finally booted into degraded mode and stopped that horrible loop, I saw that my RAID volume was corrupt beyond repair … game over.

I’m still working on recovering what little data I can from the NAS. For Storj, I migrated to a computer with a single disk for now (and I may stick with this, possibly with a backup job, since I thought this kind of issue could never happen on RAID: I was wrong).

Of course I had to recreate the node from scratch (keeping only my identity), and I’m afraid I will get lots of failed audits over the next weeks since I’ve lost 3 TB of data …

What is going to happen to me regarding suspension? Will I be permanently banned from the network? Is there any solution?

Best regards,
Olivier, France.

If you have lost all the data, then you need to restart with a new identity. The old identity will be disqualified because so much data was lost, so adding new data to the “same” node right now won’t work anyway.

I feel for you. I lost a node a few days ago because I resized a partition wrongly by mistake. That time I lost my identity files because they were overwritten: 500 GB of node data was intact, but without the identity files I couldn’t start the node … sad.

It feels like losing a good pair of gloves; probably no big deal, but still … hmmf.
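For anyone hitting this later: losing the identity files is avoidable with a small backup job to a separate disk. A minimal sketch (the identity path below is an assumption and depends on how and where the node was installed; check your own setup first):

```shell
#!/bin/sh
# Sketch: copy the Storj node identity to a separate disk.
# Both paths are assumptions for illustration; adjust to your install.
IDENTITY_DIR="$HOME/.local/share/storj/identity/storagenode"
BACKUP_DIR="/mnt/backup/storj-identity"

mkdir -p "$BACKUP_DIR"
# -a preserves permissions and timestamps; "/." copies directory contents.
cp -a "$IDENTITY_DIR/." "$BACKUP_DIR/"
```

Running this from cron (or the QNAP scheduler) after node setup is enough, since the identity files never change afterwards; only the stored data grows.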


Thanks for your feedback.

As expected, I got disqualified a few hours later. I created a new identity to restart a node from scratch.

I used the same email address; I hope this is not a problem. By the way, I’m also wondering whether running several nodes instead of one big one is a better approach in case this kind of disaster happens again.

Many thanks, Olivier

Several nodes would work better if they can fail independently of each other, i.e. if they are installed on different HDDs.

This will also provide a little bit of protection against the usual cause of big failures: the sysadmin :slight_smile: