Just in time! (Disk fail)

Bought a cheap WD 4TB external drive last week and have been prepping it to take over from the current 3TB drive after it started showing some issues: “reallocating 124 blocks!”

After a few online rsyncs and two final offline rsyncs, I can see the amount of damage those blocks have caused! 5 shards are unreadable on my node!

rsync: read errors mapping "/node/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/dj/nkenrkuxmzl7upu7kzctkwvhhwxk6px3t7jyxfgo6c6i5mlmtq.sj1": Input/output error (5)

ERROR: node/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/dj/nkenrkuxmzl7upu7kzctkwvhhwxk6px3t7jyxfgo6c6i5mlmtq.sj1 failed verification – update discarded.


Luckily it’s only 5 lost shards rather than a full node!

So, P.S.A. time:
Do your monthly housekeeping on your drive!
Don’t ignore your drive’s S.M.A.R.T. info!
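To that P.S.A.: the attribute that bit this drive is Reallocated_Sector_Ct (SMART ID 5). A rough sketch of pulling the raw count out of a smartctl report, here fed a sample line instead of a live drive (on a real system you’d pipe in `sudo smartctl -A /dev/sdX`; all values below are hypothetical except the 124 from the post):

```shell
# Sample attribute line in the format `smartctl -A` prints (hypothetical
# values except the raw count of 124 reported in this thread):
line='  5 Reallocated_Sector_Ct   0x0033   100   100   140    Pre-fail  Always       -       124'

# The raw value is the last field; anything above 0 means the drive is
# already remapping bad sectors and deserves a close watch.
realloc=$(echo "$line" | awk '{print $NF}')
echo "reallocated sectors: $realloc"
```

A raw reallocated-sector count that keeps climbing between monthly checks is exactly the kind of early warning this thread is about.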


Correct me if I’m wrong, but you said you lost these 5 data pieces. Doesn’t that mean you will start failing audits for that data, which can lead to disqualification?

5 data pieces (even if they all get audited within the next few days) wouldn’t bring your audit score below 0.6, so he will be fine.
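For a rough sense of why: the audit score is commonly described as a beta-style reputation of the form alpha/(alpha+beta) with a forgetting factor. Assuming a forgetting factor of 0.95 and a per-audit weight of 1 (my assumed parameters, not confirmed values), a node at a perfect score that then fails five audits in a row still lands well above the 0.6 disqualification threshold:

```shell
# Sketch of a beta-reputation audit score. ASSUMED parameters:
# lambda (forgetting factor) = 0.95, per-audit weight = 1.
awk 'BEGIN {
  lambda = 0.95
  alpha  = 1 / (1 - lambda)   # steady state after a long run of passed audits
  beta   = 0
  for (i = 1; i <= 5; i++) {  # five consecutive failed audits
    alpha = lambda * alpha        # a failure adds nothing to alpha...
    beta  = lambda * beta + 1     # ...and bumps beta
    printf "after failure %d: score = %.3f\n", i, alpha / (alpha + beta)
  }
}'
```

Under these assumptions even five straight failures only drag the score to roughly 0.77, which matches the “he will be fine” call above.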


What does the Storj satellite do when an audit like that is failed? Does it assume the data is lost and maybe repair it to some other storage node, or does it assume that the failing node will recover it somehow?

It assumes the piece is gone and if not enough such pieces are available, it will trigger a repair to a different node.
It never expects that a client will somehow recover a lost piece.


It actually doesn’t. The audit system is meant to test the reliability of nodes, not the availability of pieces. If you fail an audit, the piece is, as far as I understand, not marked as lost. The repair system has enough slack built in to still be able to repair even if a few nodes have lost a piece.

The audit process will only ever audit a tiny fraction of the total pieces on your node. If only 5 are lost, chances are you will never fail an audit.


Well!! It only took around 26 days but now…

I’m looking for another drive…
No, the 4TB didn’t fail; it’s just full. Storj used up that extra 1TB in 26 days!

Thinking of starting a second node if I can save up enough in these times to get another 4TB drive.

But I’m just hoping Stefan’s data is removed quickly once his node is officially retired; that would get me back a good TB or so of space :smiley: (hopefully!)