Repair node feature - download missing files

Hi,
I had an HDD failure and lost a few percent of the files. I replaced my drive a month ago, but my storage node's audit score is still low because of this.
I would like a command to re-download missing files and delete obsolete ones. My storage is 8TB, but only a few GB are lost.
Thanks

Apart from that: your node might survive losing a few GB out of 8TB.

1 Like

Well, the node survived, but that's not the point.
In the other thread, he talks about restoring from a backup, which is nonsense. It makes no sense to back up a node that makes you a few bucks.
I'm thinking about re-downloading the files, and perhaps a command to tell the satellite that some files are no longer available.

This has been discussed several times throughout the forum, but in short the answer is that it's not worth it from the satellite's point of view… So I'm not sure we'll ever see such a feature.

1 Like

Restoring data is a very expensive process. The satellite needs to download pieces, pay for downloading them, then rebuild the file, generate new pieces (as many as needed to bring the total back to 80), and send them out to holders.
And all this traffic has to be paid for. I mean not only the SNOs, but also the server owner, who also wants money for outgoing traffic.
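The traffic described above can be roughed out with simple numbers. This is a minimal sketch; the erasure-coding parameters (29 of 80 pieces) and the 64 MB segment size are assumptions based on commonly cited Storj defaults, not authoritative figures.

```python
# Rough bandwidth bill for repairing one segment, assuming
# Reed-Solomon-style coding with k = 29 of n = 80 pieces and
# 64 MB segments (both assumptions).
K, N, SEGMENT_MB = 29, 80, 64.0
PIECE_MB = SEGMENT_MB / K

def repair_traffic_mb(pieces_lost: int) -> tuple[float, float]:
    """Return (download_mb, upload_mb) of repair traffic for one segment."""
    download_mb = K * PIECE_MB           # fetch any k healthy pieces to rebuild
    upload_mb = pieces_lost * PIECE_MB   # push out only the regenerated pieces
    return download_mb, upload_mb

down, up = repair_traffic_mb(pieces_lost=5)
print(f"download ~{down:.0f} MB, upload ~{up:.1f} MB per repaired segment")
```

Losing "a few GB" spreads over many segments, so this per-segment cost multiplies quickly, which is the point the post makes about who pays.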

3 Likes

A similar idea is discussed here:

And the result is the same: the node has no idea which pieces are lost, and the same goes for the satellite. The only way to figure that out is to perform a full audit, at the same price as a recovery. But in the case of a repair, the satellite downloads only 29 pieces and regenerates the missing 51, rather than downloading the whole set of pieces as a full audit would. It's just not worth it.
And repaired pieces will never come back to the same node that managed to lose them. It has already proven its unreliability.
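The 29/51 split above follows directly from the erasure-coding parameters. A minimal accounting sketch, again treating the commonly cited defaults of 29 required pieces out of 80 as assumptions:

```python
# Piece accounting for one repaired segment, assuming RS-style
# erasure coding with k = 29 required pieces out of n = 80 total
# (treat these numbers as assumptions, not confirmed constants).
K, N = 29, 80

def repair_pieces(pieces_missing: int) -> dict[str, int]:
    """How many pieces move when one segment is repaired."""
    # Beyond n - k losses the segment cannot be reconstructed at all.
    assert pieces_missing <= N - K, "segment unrecoverable"
    return {
        "downloaded": K,                # any k healthy pieces rebuild the data
        "regenerated": pieces_missing,  # only the missing pieces are remade
    }

print(repair_pieces(pieces_missing=51))
# → {'downloaded': 29, 'regenerated': 51}
```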

1 Like

When you check the code and see that you can't find any code for blob checksums or even an expected size, you eventually come to the same conclusion.
The audit IS the scrub/repair.

1 Like

I have 2 bad files, according to ZFS, that were not repairable on a roughly 150TB array.

Hoping they are never audited :slight_smile:

Happy new year, everyone!

I had a few bad files too and removed them. Recently one of those got audited :smiley: The audit score dropped a bit, but a few hours later it was back at 100%.

1 Like

I agree with @kevink:
Don't worry about it; 2 files are nothing. The Tardigrade network has been designed to tolerate a few losses.
I lost 2000 files once. Because the node has millions of files, its audit score drops a little from time to time, but it doesn't get disqualified, as the chance of auditing 10 missing files in a row is almost non-existent.
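The "in a row" intuition can be checked with back-of-the-envelope arithmetic. The file counts below are illustrative assumptions, and the model assumes audits pick files uniformly at random, which may not match the real audit selection:

```python
# Back-of-the-envelope odds of consecutive failed audits, assuming
# uniform random audit selection (an assumption; the real auditor
# works on pieces/segments and may behave differently).
missing_files = 2_000
total_files = 5_000_000   # illustrative stand-in for "millions of files"

p_miss = missing_files / total_files   # one audit hits a missing file
p_streak = p_miss ** 10                # ten failed audits in a row
print(f"p(one failed audit)  = {p_miss:.4%}")
print(f"p(10 fails in a row) = {p_streak:.3e}")
```

With these numbers the single-audit failure chance is 0.04%, and a run of ten consecutive failures is astronomically unlikely, which matches the experience described above.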

1 Like

Didn’t realize at first read, but it doesn’t behave this way on my side (it’s way slower): whenever an audit fails, score drops from 100% to 95% approximately.
Then it takes days to get back to 100%, and the closer it gets to 100%, the slower it gets…

And finally, 15 days later or so, it’s back to 100% (thx to roundind - in fact it never reaches a perfect 100% again from an API point of view).
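This drop-then-crawl-back shape is what an exponentially forgetting success ratio produces. The sketch below is a simplified model, not Storj's actual formula; the forgetting factor and initial state are assumptions chosen to reproduce the ~5% drop described above:

```python
# Simplified audit-reputation model: score = alpha / (alpha + beta),
# updated with exponential forgetting. LAM and the starting state are
# assumptions for illustration; the real scoring formula may differ.
LAM = 0.95  # forgetting factor (assumption)

def update(alpha: float, beta: float, success: bool) -> tuple[float, float]:
    v = 1.0 if success else 0.0
    return LAM * alpha + v, LAM * beta + (1.0 - v)

alpha, beta = 1 / (1 - LAM), 0.0   # steady state after a long success streak
alpha, beta = update(alpha, beta, success=False)
print(f"after one failure:   {alpha / (alpha + beta):.4f}")   # ~0.95

for _ in range(200):  # each success shrinks beta by LAM, so recovery slows down
    alpha, beta = update(alpha, beta, success=True)
print(f"after 200 successes: {alpha / (alpha + beta):.6f}")   # below 1.0 forever
```

In this model the failure's weight decays geometrically but never hits zero, which matches both the slowing recovery and the "never a perfect 100% via the API" observation.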

These scores for one of my nodes are still being plotted online if anyone’s interested:

But I guess this highly depends on the amount of data the node stores, so YMMV… This particular node stores only 0.6TB at the moment.

That's why it took so long in your case. My node has 2TB, and it was back to 100% after 22 hours. It depends on the number of audits you get.

1 Like