My Node lost HDDs

Hello everyone,
a newly purchased HDD is defective and unfortunately I have lost 90% of all data because of it…
Is my node out now, after more than a year of good work? Or is there a way to re-sync the data?
:sleepy: :disappointed_relieved: :sob:

Node ID: 1crwVC4w3z9DhCs9SMCqeXhqscSFEjnTMDEoYwGHuACdZ4ctCm

Thank you for your feedback, no matter if it is good or bad!

you could try some sort of data recovery tool on the hdd, it’s rare that the data is fully lost…
it’s one of the things one needs to account for with new hard drives; their chance of failure is often higher than that of a drive that has already been running for a few months…

if you cannot recover the data, the node is going to be DQ’d…
you shouldn’t turn it on before recovering at least the majority of the data, and even then the node might still die because of it…

the OS will attempt to read the data a few times, but then give up… data recovery software will apply a lot of tricks to try and read the data… and basically keep trying forever
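
roughly the principle in python, as a minimal sketch… the device path, block size and retry count below are made up, and real tools like ddrescue are far smarter (multiple passes, reverse reads, sector-level splitting)… this just shows the retry-and-skip loop:

```python
import os

DEVICE = "/dev/sdb"        # hypothetical path of the failing drive
OUTPUT = "recovered.img"   # salvage image, written block by block
BLOCK = 1 << 20            # read 1 MiB at a time
RETRIES = 5                # tries per block before skipping it

def rescue(device: str, output: str) -> None:
    src = os.open(device, os.O_RDONLY)
    size = os.lseek(src, 0, os.SEEK_END)   # block devices report size via seek-to-end
    bad = []
    with open(output, "wb") as dst:
        for offset in range(0, size, BLOCK):
            want = min(BLOCK, size - offset)
            data = None
            for _ in range(RETRIES):
                try:
                    data = os.pread(src, want, offset)  # positioned read, no seek state
                    break
                except OSError:   # EIO from an unreadable sector -> try again
                    continue
            if data is None:
                bad.append(offset)
                data = b"\x00" * want   # zero filler keeps the image aligned
            dst.write(data)
    os.close(src)
    print(f"done, {len(bad)} unreadable blocks, first few offsets: {bad[:10]}")

if __name__ == "__main__":
    rescue(DEVICE, OUTPUT)
```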

check your cables, maybe try a different controller or put it in a usb case… it’s not always an exact science…

i’ve heard a recovery expert say it’s most often cheap power supplies that burn out drives, and in that case you can desolder or remove the protection (TVS) diode on the hdd circuit board… to make it work again… tho it shouldn’t be depended on for long-term operation… it’s a measure to recover the data…

ofc in your case you might want a new drive on warranty… so starting to chop parts off the circuit board isn’t going to make that easier…
tho one can buy replacement circuit boards for hdds on amazon or ebay, so if the drive is fully dead because of a short or a voltage spike, one could buy one and swap it in…

but most likely there’s not much to do… tho there are some good data recovery videos on youtube; if you are lucky you can use those to help you identify the problem and see if you can fix it…

tho you would have to be very sure about what you are doing, because the odds are very good that you would blow any chance of getting a new drive…

i would try some recovery software… see how that goes, and if it fails just exchange it for a new one…

what disk was it?

and do check the cables and try another controller if you can… disks can be really weird when they are broken

It was a QNAP TR-002 with two WD 4TB drives - disqualified! Well, I learned a lot about QNAP and HDDs. I’m going to build a new system, and this time with RAID 10 or at minimum RAID 5. Thanks for your inputs

I got concerned, I have a TR-002 too … what’s wrong with them?

i am in the process of migrating… i will be running the zfs version of raid5 (raidz1) and mirrors, one array of each, to test out how they perform for my use case…

i really like the mirror idea… its iops are so much greater, especially when one gets into the larger arrays, and mirrors are very easy to manage and upgrade… ofc in zfs one cannot just remove a hdd from a raidz array… which doesn’t help…

RAID 0 or 1? my QNAP has been frozen after an update for 3 days…

Are you asking if I have RAID? I’ll answer that right away: two separate drives in this case, and two nodes on the computer they are connected to … no problems …
Maybe something is wrong with your USB cable …?


that’s the reason why you need to use raid5: even though you lose some space with raid, your data is safe; if 1 disk fails you don’t lose data, and it’s easy to replace the failed hdd. if you use more than 4 hdds, like 6-10, I recommend raid6, in case 2 hdds die at the same time
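
the space trade-off is easy to put in numbers; a quick sketch (the drive counts and the 4TB size are just examples):

```python
# usable-capacity math behind the raid5/raid6 advice above -- pure arithmetic,
# no real pool is touched

def usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Usable space when `parity` drives' worth goes to parity (1=raid5, 2=raid6)."""
    return (drives - parity) * drive_tb

for drives in (4, 6, 10):
    r5 = usable_tb(drives, 4.0, parity=1)   # survives 1 dead drive
    r6 = usable_tb(drives, 4.0, parity=2)   # survives 2 dead drives
    print(f"{drives} x 4TB: raid5 {r5:.0f}TB usable, raid6 {r6:.0f}TB usable")
```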


i just booted a spare server today with 128tb raw in a raid setup; usable i have 92tb. really i don’t care that i lose a couple of hdds worth of space, because i’m sure that if some hdd fails i’m not losing data


…until you start a RAID rebuild

one cannot really look at just the number of drives in a raid setup, because it becomes a bandwidth problem… how fast can the system read the remaining hdds… thus the size of each drive and its transfer speed become large factors in designing a reliable raid setup.
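
a back-of-the-envelope rebuild floor, since every surviving drive has to be read end to end (the sizes and sustained speeds below are assumptions, and real rebuilds are slower because of concurrent load and random i/o):

```python
# rebuild-time floor: the remaining drives are read in parallel, so the minimum
# is one drive's full capacity divided by its sustained read speed

def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    bytes_total = drive_tb * 1e12
    return bytes_total / (mb_per_s * 1e6) / 3600

for tb, speed in [(4, 180), (14, 220)]:   # assumed sequential-best-case MB/s
    print(f"{tb}TB drive @ {speed}MB/s: at least {rebuild_hours(tb, speed):.1f} hours")
```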

currently i’m migrating my storagenode off my 3x raidz1 pool consisting of 9 hdds in groups of 3, and just moving out my 14tb storagenode will take like a week… and i cannot seem to get it to run any faster.

and that’s just reading a 70% full pool / array with thrice the iops of a regular 1-array setup.
my next setup will have 4 x 3tb disks in mirrors… which will have 4 times the read iops of your 128tb setup.

if we assume the disks are more or less the same.
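
the rule of thumb behind those numbers, as a sketch (the 150 iops per hdd figure and the vdev layouts are assumptions; real pools vary):

```python
# rough random-read iops rule of thumb: a raidz/raid5/raid6 vdev does about one
# drive's worth of random iops, while mirrors can serve reads from every copy

DISK_IOPS = 150   # assumed random read iops of a single 7200rpm hdd

def pool_read_iops(vdevs: int, disks_per_vdev: int, kind: str) -> int:
    if kind == "raidz":
        return vdevs * DISK_IOPS                    # ~1 disk of iops per vdev
    if kind == "mirror":
        return vdevs * disks_per_vdev * DISK_IOPS   # every copy can serve reads
    raise ValueError(kind)

print("1 big raid6 array:   ", pool_read_iops(1, 10, "raidz"), "iops")
print("3x raidz1 of 3 disks:", pool_read_iops(3, 3, "raidz"), "iops")
print("2 mirrors of 2 disks:", pool_read_iops(2, 2, "mirror"), "iops")
```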