RAID vs No RAID choice

Indeed, I have not had “the time yet” to install DCS properly. But the hint about transfer.sh is nice.

Here you go

Raid-Comparison_v04.xlsx

or in my own DCS
Raid-Comparison_v04.xlsx

RAID6 gives more total storage than RAID5. Seems like there is an error.

Hi IsThisOn,

Odd. I cannot reproduce this; here it works fine:

4 disks of 8TB each gives 24TB in RAID5 and 16TB in RAID6.
20 disks of 10TB each gives 190TB in RAID5 and 180TB in RAID6.
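If you want to sanity-check those numbers without the sheet, here is a minimal sketch (assuming the classic layouts where RAID5 reserves one disk's worth of parity and RAID6 reserves two):

```python
def usable_tb(disks: int, size_tb: float, parity_disks: int) -> float:
    # Usable capacity = (total disks - parity disks) * disk size.
    return (disks - parity_disks) * size_tb

print(usable_tb(4, 8, 1), usable_tb(4, 8, 2))      # 24.0 16.0
print(usable_tb(20, 10, 1), usable_tb(20, 10, 2))  # 190.0 180.0
```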

Hmm… maybe this is some xlsx/LibreOffice error.

The first link gives me a not-found error. The second lets me download an xlsx file. If I open it with Calc, it shows 16TB max capacity and RAID5. If I select RAID6, the max capacity grows to 20TB.

I’ve opened it in LibreOffice and can reproduce your error. It’s the RAID5/RAID6 toggle field that is imported incorrectly. It controls cell C13. In MS Excel it toggles between 1 (= RAID5) and 2 (= RAID6); LibreOffice turns it into a true/false value. Just enter “1” for RAID5 or “2” for RAID6 in cell C13. Then it should work fine.

The sheet was originally designed to settle the discussion about whether RAID or non-RAID is best. The answer is given: it depends on the daily ingress.
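To illustrate the idea with a toy model (both the model and the numbers are made up for illustration, not taken from the sheet): a parity array gives up capacity to survive a disk failure, while independent single-disk nodes keep full capacity but a dead disk means refilling a node from zero at the daily ingress rate.

```python
disks, size_tb = 4, 8.0
daily_ingress_tb = 0.05  # hypothetical ingress in TB/day

raid5_capacity = (disks - 1) * size_tb    # 24.0 TB, survives one failure
single_capacity = disks * size_tb         # 32.0 TB, no tolerance
refill_days = size_tb / daily_ingress_tb  # days to re-earn one lost disk

print(raid5_capacity, single_capacity, refill_days)  # 24.0 32.0 160.0
```

The higher the daily ingress, the faster a lost single-disk node refills, and the less the parity overhead pays off.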

Then it should work fine.

Yeah, I figured that out. I will add some changes later on that I think are valuable. RAID5, for example, is so unreliable that I would not consider it an alternative to single-disk nodes.

It depends on the daily ingress.

True, but other factors matter too: how many drives you have lying around, how many ports you have, what failure rate you expect, and what you pay for electricity.

This is inaccurate, and likely based on bad articles that miscalculate the probability of a RAID failure during rebuild. This topic pops up every so often on every single storage forum, including here. Linking to avoid retyping.

I am not going to comment on The Latest - Whether RAID 5 is still safe in 2019 - Digistor Australia because the maths seems wrong to me in both directions: the failure rate is probably lower, but the rebuild time (especially for small-file applications like Storj) is way higher.
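For reference, the back-of-envelope calculation those articles usually run looks like this (assuming the commonly quoted consumer spec of one URE per 10^14 bits read; whether real drives actually fail at that rate is exactly what is in dispute):

```python
def rebuild_ure_probability(data_read_tb: float, ure_per_bit: float) -> float:
    # Chance of hitting at least one unrecoverable read error (URE)
    # while reading the surviving disks during a rebuild.
    bits = data_read_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits

# Rebuilding a 4x8TB RAID5 means reading ~24 TB from the survivors.
print(rebuild_ure_probability(24, 1e-14))  # ~0.85 on paper
```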

ZFS mitigates the problem to some extent. If you hit a read error, you don’t lose the entire pool, only the damaged data on that disk.

Scrub is a good early-warning system, but I would not compare it to a two-day rebuild operation.
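Rough arithmetic on the rebuild time (the throughput figure is hypothetical): even a best-case, purely sequential rebuild of a 10TB disk takes most of a day, and small-file workloads like Storj make it far slower.

```python
size_tb = 10
seq_mb_s = 150  # hypothetical sustained sequential throughput

hours = size_tb * 1e6 / seq_mb_s / 3600
print(f"{hours:.1f} h")  # ~18.5 h best case; days once seeks dominate
```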

Because I would not spend any money on my node, as Storj itself recommends, I end up with 4-8-year-old consumer drives, some even from the same batch. That is why I personally don’t trust RAID1.


I don’t use RAID at all, and my old 4TB HDDs have already made me $700-1000 in income each over 3 years.
So now I am slowly replacing them with new, bigger HDDs.


I don’t think the math there is flawed; what it really shows is that the way most systems treat RAID5-style redundancy is flawed. RAID5 nukes the entire array because of one URE on a second device during a rebuild; raidz will not only continue, it will flag the specific files that it couldn’t recover.

I run an array striped across multiple raidz vdevs (for purposes other than Storj) and I’ve encountered “URE during a rebuild” at least twice already, losing just a small amount of data each time instead of having to restore the array from scratch.