RAID 5 failed. 3TB node dead. Partially recovered and going through GE thanks to the forum

You are welcome!
Don't worry, I'm just joking; we have a lack of positivity today, so I'm just trying to bring some smiles :slight_smile:
I'm really glad to hear that you successfully completed GE and saved your deposit and the customer data.

1 Like

To me, the model number ST3000DM001 is synonymous with data loss. Let’s just say that I am never buying a Seagate product again in my life.

3 Likes

I have changed the name of the topic to better reflect the situation.

3 Likes

So I can delete all the DB files and my node would still work? Sweet. I didn't know the satellite APIs weren't served from the DBs, and that they're only used for the dashboard!

You are right, ZFS raidz1 doesn't have this trouble. The array rebuild process in ZFS is called resilvering, and it has significant differences from a classic rebuild:

- ZFS resilvers only the existing data, not the entire array.
- It takes at least 10 errors before ZFS stops the process and ejects the disk from the array. This threshold may be tunable.
- You can always manually return the drive and continue from the same place (see the sketch below).
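
A minimal sketch of what returning a drive looks like, assuming a ZFS pool; the pool name `tank` and the device `sdb` are just placeholders:

```sh
# See which device was ejected and why
zpool status -v tank

# Clear the error counters and bring the device back online;
# ZFS resilvers only the records written while the disk was out
zpool clear tank sdb
zpool online tank sdb

# Watch the resilver continue from where the data diverged
zpool status tank
```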

In classic RAID 5, a disk gets failed not only on an outright error but also on a read timeout from a bad block. That's why it's a bad idea to use desktop drives in such arrays: they can spend minutes retrying a bad sector, while RAID controllers usually wait no longer than 10 seconds.
The ability to control these timeouts on an HDD is called ERC (or TLER), and it is usually absent on desktop drives.
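
For reference, this is how ERC can be checked and set with smartctl on drives that support it; `/dev/sda` is a placeholder, and the values are in tenths of a second:

```sh
# Read the current SCT Error Recovery Control settings
smartctl -l scterc /dev/sda

# Limit read/write error recovery to 7 seconds (70 deciseconds).
# Desktop drives often reject this, and the setting is usually
# lost on power cycle, so it has to be reapplied at boot.
smartctl -l scterc,70,70 /dev/sda
```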

1 Like

A while back, file metadata was moved to be stored with the piece.
But I wouldn't be that casual about it. I know there have been issues with repair traffic when DBs are missing data, so you should still avoid losing or corrupting them as best you can. But they should not be vital to audits, as far as I know…

2 Likes

So what's true here? I mean, someone making a bold claim will back it up with some evidence, right?

Amazing! I didn’t know they changed this. So far Storj has been implementing nearly everything I wanted, great job!

You can recover from it and audits won't fail. But there have been issues with repair traffic, like I already mentioned. My evidence is reading every forum post so far and having a pretty good idea about what does and doesn't definitively destroy your node. Additionally, Storjlabs has communicated their intentions around the DBs not being essential. You can use the search function to look for DB issues if you like. I can't exactly link you to a single place to support these statements, as they're based on many, many reports.

What's a bit problematic, however, is that you can get suspended and even DQ'ed for database problems, even though DBs are not essential… That doesn't quite fit together in my opinion, but Storjlabs is still improving things, just like the metadata was changed to be stored with the data pieces.

Currently you can only get suspended; disqualification after suspension seems to be turned off for now while some DB issues are being ironed out. Yes, it's not perfect yet, but they're trying to be fair about it at least.

deleting the databases is a recommended procedure in many cases… such as

so yes @BrightSilence is most certainly right that the databases can be deleted if need be…
ofc one will need to recreate them afterwards, and i'm pretty sure i've also seen littleskunk recommend removal of the databases in certain cases.
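
a minimal sketch of that procedure for a docker node, assuming default paths; the container name and the storage location are placeholders:

```sh
# Stop the node so the databases aren't in use
docker stop -t 300 storagenode

# Move the SQLite databases aside rather than deleting them outright
mkdir -p /mnt/storj/db-backup
mv /mnt/storj/storagenode/storage/*.db /mnt/storj/db-backup/

# On start, the storagenode recreates empty databases
docker start storagenode
```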

That's right. However, the stats will be lost. In the case of orders.db, some unsent orders will be lost and not paid.

2 Likes

Just FYI - 5th Seagate died. I think that’s the last one.

2 Likes

I think that’s the last one.

Fingers crossed. :wink:

Oh, I mean I don't have any more Seagates. lol And I'm not going to buy any.

Lucky enough you now have 2 manufacturers left…fingers crossed these won’t each have a single drive fail on you, because that would mean you could never ever buy any HDD again :frowning_man:

1 Like

People can do what they want, it’s just not smart to base that decision on personal experience. Anecdotes are useless, my personal experience is completely different. I currently have 10 Seagate drives in use, and 3 HGST. None of them has died for at least 5 years (could be longer). The oldest ones are now 14 years old. Yet in the past 2 years my last 3 WD drives have died. Does that mean I won’t buy WD anymore? Of course not. I’ll buy whatever is the best deal at the time I want to expand. That just happens to have been Seagate for quite a while now. But I have no specific loyalty.

1 Like

i kinda try to steer clear of seagate… but if they start consistently making good drives then sure i'll buy seagate… seagate's really bad rep i think comes from the major flooding event the better part of a decade ago… their drive reliability after that was terrible at best… these days their drives seem fine… but i haven't quite forgiven them… also saw someone working in data recovery saying don't get seagate… their platters tend to lose their coating, which then hits the heads and your entire data just becomes silver snowflakes inside the drive…

but that's one guy saying that… might not even be a thing… i know i've had a few seagates die on me, so i want to punish them with no repeat business… i'm sure they are fine these days… i mean how long can a data storage device company survive with an annual failure rate of maybe 20-30% on drives… that must be a killer for business and something people will whine about for years and years :smiley:

on the upside i guess that just means cheap reliable drives for those brave enough to buy seagate lol

4 Likes

Lucky enough you now have 2 manufacturers left…fingers crossed these won’t each have a single drive fail on you, because that would mean you could never ever buy any HDD again :frowning_man:

Are you implying that I had just one Seagate drive fail on me? :smiley: