RAID5 failed. 3 TB node dead. Partially recovered and going through GE thanks to the forum

I was making close to $30/month (surges included)

Yeah, might be right… I just remember doing the math when looking at the… very inaccurate Storj earnings calculator.

I still think GE is a long shot for most poorly performing nodes; of course, some of the restrictions might not be fully implemented yet.

If memory serves, you're allowed one hour of downtime per month when on GE… and it's much easier to get DQed, and downloads aren't paid… so all in all, it doesn't sound that appealing, IMO.

That sounds correct. It was mainly the early months on stefan-benten, plus January, that built up that much held amount.

It's the only way to get the money back, so it's worth a shot. If he lost too many pieces, GE won't work, but neither would keeping the node running, since he might get DQed within a week or within a few months. And it's unlikely his node would last the roughly 10 months anyway if he lost too many pieces for GE to succeed.
I'd rather know now so I can start a new node, rather than getting DQed in a month and losing my held amount anyway.

I can't remember having seen any limitation on downtime during GE, especially because downtime isn't really tracked at the moment.

Yeah, I got about 3k STORJ with this node during that time. Can't complain; it paid for its datacenter hosting.
I'm trying GE just to push the data back into the network, and if it works, it works. I'm not trying to game the system, and I'm certainly not going to run a malfunctioning node just to cover my GE.

Yeah, but at $30 a month, two months is $60, versus GE where he might get nothing… and it most likely takes the same amount of time… and one will have to remake the node, which means a month of vetting… Well, I just think GE in its present form is a bit of a joke, at least if all the features are implemented…

I remember getting chills at the thought of trying to pass a GE when I read through the requirements…

And if the node is a year old, isn't it at 15 months that one gets 50% of the held amount paid out anyway?
Well, there's a lot of math in whether it's worthwhile or not… but since I don't expect to use GE in its current form, because I think the requirements are ridiculous, I won't really check up on it for a good long time…

Hopefully it will be more reasonable by the time I do feel like using it.

I'm not sure where the joke is… if GE works, he gets $200; if it doesn't work, he needs a new node anyway. Sure, he might get $30-60 before getting DQed if he's extremely lucky… but then he would have to set up a new node anyway. And if GE takes a month, he'd get $30 for that month too.
So sure, it's a gamble between $200 and $30, but the sooner you know, the sooner you can spin up a new node. (Well, you can spin up a new node in any case if you have a spare drive.)
In the long run I wouldn't risk running a node that has possibly lost or corrupted lots of data. I'd rather get a new, reliable node up as quickly as possible.
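
To put rough numbers on that gamble, here's a minimal expected-value sketch in Python. Every input (held amount, monthly earnings, GE success odds, survival time until DQ) is an assumption taken from this thread or plainly guessed, not an official figure:

```python
# Back-of-the-envelope comparison: attempt GE now vs. ride the node until DQ.
# Every input below is an assumption based on numbers mentioned in this thread.

held_amount = 200.0      # USD held back, released on a successful GE
monthly_earnings = 30.0  # USD/month the node was making
p_ge_success = 0.5       # guess: chance the node kept enough pieces for GE
months_until_dq = 2      # guess: how long a damaged node survives audits

# Expected value of attempting graceful exit: GE takes about a month,
# during which the node still earns from normal (non-GE) traffic.
ev_graceful_exit = p_ge_success * held_amount + monthly_earnings

# Expected value of running the node until disqualification:
# normal earnings until DQ, but the held amount is forfeited.
ev_ride_it_out = months_until_dq * monthly_earnings

print(f"EV of attempting GE: ${ev_graceful_exit:.2f}")
print(f"EV of riding to DQ:  ${ev_ride_it_out:.2f}")
```

With these guesses GE comes out ahead ($130 vs. $60 expected), and the damaged node would need to survive audits for more than four months of normal earnings to beat it.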

I did two GEs on stefan-benten, both without a problem. But of course I didn't lose files…

Well, because 50% of the held amount gets paid out at 15 months anyway, the GE payout becomes lower than it looks… a successful graceful exit would only net the remaining $100. And if downtime tracking were implemented, I believe the limit is one hour of downtime a month, plus the stricter audit-failure limits before DQ, and you aren't paid for the GE traffic, so in some cases it might even eat into the node's earnings while in GE.

So really you should be comparing $100 plus a much higher chance of getting DQed and receiving nothing at all… versus riding the dying node onward for reliable earnings, putting less stress on the hardware and not being limited by GE bandwidth usage…

But yeah, presently it might be fine… it all depends on how the final plan looks… I know I'll never use it if GE ends up like that; IMO the risk vs. reward seems to favor crashing and burning…
But I guess this is another part of the network that isn't finished yet…
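
For anyone checking the arithmetic behind the "$100" figure, here's a small sketch of the withholding schedule as I understand it from the Storj documentation; the flat $30/month input is just a placeholder, not this node's actual history:

```python
# Sketch of Storj's withholding schedule (as I understand it from the docs):
# months 1-3: 75% of earnings held, months 4-6: 50%, months 7-9: 25%,
# month 10 onward: 0%. After month 15, half of the total held is returned;
# the remainder is only paid out on a successful graceful exit.

def held_fraction(month: int) -> float:
    if month <= 3:
        return 0.75
    if month <= 6:
        return 0.50
    if month <= 9:
        return 0.25
    return 0.0

monthly_earnings = 30.0  # placeholder, USD

total_held = sum(held_fraction(m) * monthly_earnings for m in range(1, 16))
paid_at_month_15 = 0.5 * total_held          # returned automatically
paid_on_ge = total_held - paid_at_month_15   # what GE is worth after month 15

print(f"total held:      ${total_held:.2f}")
print(f"returned at m15: ${paid_at_month_15:.2f}")
print(f"GE payout after: ${paid_on_ge:.2f}")
```

With a flat $30/month this yields $135 held in total, so after the month-15 payout a successful GE is only worth the remaining half, which is exactly the halving being described above.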

Damn, what are the odds of two drives failing so close together… Can you say whether these were new or used drives? Also, what brand and model?

It's RAID5. It's expected to fail with consumer disks during a rebuild.
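
The usual reasoning here involves unrecoverable read errors (UREs) during the rebuild. A rough sketch of that math, assuming the commonly quoted consumer-drive spec of one URE per 10^14 bits and a hypothetical 4-disk array of 3 TB drives:

```python
# Rough probability that a RAID5 rebuild completes without hitting an
# unrecoverable read error (URE). Assumes the commonly quoted consumer
# spec of one URE per 1e14 bits read; real drives vary widely.

ure_rate = 1e-14     # probability of a URE per bit read (spec-sheet figure)
disk_tb = 3          # size of each remaining disk, in TB
surviving_disks = 3  # disks read in full to rebuild (hypothetical 4-disk RAID5)

bits_to_read = surviving_disks * disk_tb * 1e12 * 8
p_rebuild_ok = (1 - ure_rate) ** bits_to_read

print(f"bits to read:        {bits_to_read:.2e}")
print(f"P(rebuild succeeds): {p_rebuild_ok:.1%}")
print(f"P(hit a URE):        {1 - p_rebuild_ok:.1%}")
```

Under those assumptions it's roughly a coin flip whether the rebuild reads all ~72 terabits cleanly; real-world URE rates vary a lot, so treat this as an illustration of why RAID5 with large consumer disks has a bad reputation, not a prediction.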

If they were cheap or Seagate drives, it's expected for them to fail easily. That's especially true for drives bought as external HDDs; these are often very bad, as manufacturers count on them not being used much and possibly put the worst ones in there.

Do you have more than anecdotal evidence for that claim?

4 failed Seagates, no other brand failed (yet). Also check Backblaze stats.

So, anecdotal?

If I understand it right, they only used HGST, Seagate and Toshiba and no WD in 2019. I wonder why?

I know a person from a NAS forum who lost 30 of 30 purchased Seagate ST3000DM001 drives.
Personally, I lost 5 of 5 purchased disks of this model.

Western Digital acquired HGST in 2012. Same company now.

Not sure which definition you mean:

"(of an account) not necessarily true or reliable, because based on personal accounts rather than facts or research."
It was based on personal and non-personal experiences. It was based on facts, and I did my research: Seagates fail much more often.

"based on or consisting of reports or observations of usually unscientific observers"
I am a very scientific person.

"of, relating to, or consisting of [anecdotes]"
That's right; any answer consisting loosely of what I said would be considered an anecdote.

HGST is under WD.

Sadly this anecdote is hilarious. =/

You know that just losing a single file or, god forbid, the DBs is bad???

I realize this is way late for the data at hand, but I've actually had ZFS save my array - twice! - from failures like this. In both cases I was lucky enough that when I had two drives fail, at least one of them was still returning mostly valid data other than a string of bad blocks. ZFS recovered what it could, and flagged the specific files it could not successfully rebuild.

Seagate had one super bad model that led to class-action lawsuits. I don't think that's a reason to ignore the entire brand, but I guess people have trouble letting that one go. Hey, it's still their f up, so it's not entirely unfair either. If you exclude this model, there is no significant difference in reliability between the major brands. Some models do a little better, some a little worse.

Losing a db is something you can easily recover from; the worst damage is some wrong stats on the dashboard. The databases contain non-essential data. The only way your node could fail audits because of them is if they are corrupt and can't be accessed, but you don't get disqualified for that, just suspended. So unless you ignore the issue for at least a week, database loss is recoverable. The same goes for a single file: with the rate of audits, chances are you will never fail a single audit if you only lose one file. Losing a significant chunk is obviously bad, though.
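
If you want to check whether the databases are actually corrupt before deciding anything, SQLite's built-in integrity check is enough. A minimal sketch, assuming the usual layout where the storagenode's .db files sit in the storage directory; the path below is a placeholder, and the node should be stopped first:

```python
# Minimal corruption check for storagenode SQLite databases.
# Stop the node before running this; the path below is a placeholder
# for wherever your storage directory actually lives.

import sqlite3
from pathlib import Path

STORAGE_DIR = Path("/mnt/storj/storage")  # placeholder, adjust to your setup

for db_file in sorted(STORAGE_DIR.glob("*.db")):
    try:
        con = sqlite3.connect(db_file)
        (result,) = con.execute("PRAGMA integrity_check;").fetchone()
        con.close()
        print(f"{db_file.name}: {result}")  # "ok" means the database is intact
    except sqlite3.DatabaseError as exc:
        print(f"{db_file.name}: UNREADABLE ({exc})")
```

Any file that doesn't report "ok" can be recreated; as noted above, the worst you lose is some dashboard stats, and the suspension window (rather than immediate DQ) gives you time to do it.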