Stop node whilst array is rebuilt or risk disqualification?

The loss of data and the corruption I’ve experienced had the expected effect on my audit score; it fluctuated up and down for a while but seems to have stabilised:

Next question… I was in the process of shrinking my node to recover some drive space, however would it make sense to enable ingress again? Or is the (apparent) loss low enough that it wouldn’t make a difference?

It depends on how large your node is and whether you can let it grow significantly. More new good data would lower the percentage of bad data, which could make survival more likely. But if it’s already a 10+TB node, chances of that helping a lot are fairly low.


There’s 17.3TB of used space at the moment, so I think I’ll leave it to continue shrinking. Once europe-north-1 and us2 are shut down I’ll then reassess.


3 months later…

There was definitely something fundamentally wrong with my underlying storage hardware. The Adaptec RAID corrupted again and I was only notified by the Storj email telling me that my node had been disqualified on the EU1 satellite.


I immediately shut down the node and started another chkdsk, which apparently ran for longer than the world has existed (actually 19 days):


My previous recovery attempts were enough to be paid for 3 extra months along with the held back amount for US2 and Europe-North-1, however I’m back to where I began this thread with a folder full of recovered directories :slightly_frowning_face:


I think it’s now time to retire this node as I don’t have the inclination to perform the manual recovery again, knowing that the hardware continues to struggle.

You may try to use this approach:

However, it seems that either this controller is misbehaving, or your disks have a higher bitrot rate. Since it’s a hardware controller, it isn’t able to recover corrupted data, and you’re seeing the result.
The best option for this server seems to be using the disks separately; in that case you would lose only part of the data, not all of it.
The other options are:

  • use Linux or FreeBSD (TrueNAS maybe?) and ZFS; the adapter needs to be switched to handle the disks separately rather than as RAID, or you could remove it and connect the disks to the motherboard directly (I don’t know which is better);
  • upgrade to Windows Server 2016/2019/2022 if possible and use ReFS, but again, the disks should be exposed to the OS separately. This option is less robust, but maybe ReFS is ready for production this time.
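To illustrate the single-disk ZFS option, here is a minimal sketch. The device names and pool names are placeholders for your actual disks; the idea is one pool per disk, so corruption stays confined to one pool, and ZFS block checksums let you detect bitrot even without redundancy:

```shell
# Placeholder device names -- substitute your real disks.
# One pool per physical disk, so damage to one disk only
# affects the data stored on that pool.
zpool create -o ashift=12 storj1 /dev/sdb
zpool create -o ashift=12 storj2 /dev/sdc

# Periodic scrubs walk every block and verify its checksum,
# so silent corruption is reported instead of going unnoticed.
zpool scrub storj1
zpool status -v storj1
```

Without mirroring or RAID-Z a scrub can only detect, not repair, bad blocks, but for a storage node that is still far better than a hardware controller silently serving corrupted pieces until an audit fails.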