RAID5 is not RAIDZ1 and not a BTRFS analogue; it’s hardware or software RAID5 without checksum verification and auto-correction. Otherwise the topic starter would have mentioned that.
I understand. Conventional RAID also supports scrubbing (Synology calls it something else though, I don’t recall — “verify”, probably?) and can repair data lost to bad sectors, but it cannot handle bit rot.
Still, my point was that the failure rate will not be drastically different between the two RAID designs, and will still be low — a far cry from the “guaranteed failure” the article implies.
Actually I use BTRFS on RAID5, which works quite well in a Synology — sorry for not mentioning it. I scrub it every 3 months.
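For reference, on a plain Linux box the same every-3-months scrub could be scheduled with a cron entry like the sketch below; the `/volume1` path and the timing are assumptions, and Synology DSM has its own built-in scheduler for this anyway:

```
# Hypothetical /etc/cron.d entry: start a BTRFS scrub at 02:00 on the 1st
# day of every third month. /volume1 is a placeholder for the BTRFS mount.
0 2 1 */3 * root btrfs scrub start /volume1
```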
I also use Hitachi and Toshiba disks, which according to Backblaze can be 10 to 100 times less risky than the usual WDs, Seagates and whatnot.
Adding it all up, although I’m aware of the RAID5 risks, I trust my rig.
Yes, you are right, I have 22 physical disks connected to 11 RAID arrays, with a current capacity of about 21TB.
I have one big node, which I have been continuously growing for about 3 years. This strategy has paid off the most for me; I haven’t had any problems with it.
I spun this node up among the first; maybe I still have the invitation somewhere.
So, I have one strong and very old node.
Starting a new node every few months seems like a waste of time and energy to me.
Of course, that’s just my opinion.
I have over 60 nodes, each one using a single disk. It has taken me over 3 years to build all this up.
Most disks are over 5 years old. If the environment is good, a disk works for a very long time.
So I don’t waste space on RAID, and I get paid for every used GB. If your RAID card goes crazy (I have seen this happen at the enterprise level), the whole array is gone, so the card itself is also a single point of failure.
If you have 4 disks in a RAID and the RAID fails, you lose everything. If one of my disks dies, or even 2, I keep 75% or 50% of the data. I also have 25% more space for data.
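The arithmetic behind that claim is simple: with independent single-disk nodes, each disk holds only its own share, so a failure takes out only that share. A minimal sketch, assuming data is spread evenly across the disks:

```python
# With n independent single-disk nodes, losing k disks loses only k/n of
# the data, because there is no shared failure domain (unlike one RAID5 set,
# where a second failure during rebuild loses the whole array).
def surviving_percent(n_disks: int, failed: int) -> float:
    return (n_disks - failed) / n_disks * 100

print(surviving_percent(4, 1))  # 75.0 -> one dead disk out of four
print(surviving_percent(4, 2))  # 50.0 -> two dead disks out of four
```

The “25% more space” follows the same way: a 4-disk RAID5 spends one disk’s worth of capacity on parity, while 4 independent disks spend none.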
I use some software to monitor SMART; if I see a lot of read/write errors, I plan to change the disk.
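The poster doesn’t name the tool; on Linux, one common choice for this kind of monitoring is smartd from smartmontools, which watches SMART attributes and runs periodic self-tests. A hedged sketch of an /etc/smartd.conf line (the device name and mail address are placeholders):

```
# Monitor all SMART attributes (-a), run a short self-test daily at 02:00
# and a long one every Saturday at 03:00, and mail a report on problems.
/dev/sda -a -s (S/../.././02|L/../../6/03) -m admin@example.com
```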
I have a lot of WD Purple disks, 4TB. They spin at 5400 RPM, so they produce less heat and last long.
And what is your hardware (apart from the disks)? How do you control 60 disks?
Or did you mean 4 disks?
BTW, according to Backblaze, temperature itself does not limit the life of the disks — temperature variation does. They found that disks near doors had a higher failure rate, while disks in constantly hotter or constantly colder places had the same failure rate.
How I grow mine now is that I put all new nodes on my 4x4TB RAIDZ1, and as a node grows to around 6TB, I move its data to an 8TB hard drive and leave it there. Rinse and repeat.
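The move step can be sketched as copy-then-verify before the original is deleted. Everything below (paths, the file name) is a stand-in, and a real migration would stop the node first so files aren’t changing mid-copy:

```python
import filecmp
import pathlib
import shutil
import tempfile

# Stand-ins: src plays the node's folder on the RAIDZ1 pool,
# dst plays the mount point of the dedicated 8TB disk.
src = pathlib.Path(tempfile.mkdtemp(prefix="raidz1_node_"))
dst = pathlib.Path(tempfile.mkdtemp(prefix="disk8tb_node_"))
(src / "piece-0001.bin").write_text("piece data")

# Copy the whole node folder, then verify byte-for-byte before trusting it.
shutil.copytree(src, dst, dirs_exist_ok=True)
ok = filecmp.cmp(src / "piece-0001.bin", dst / "piece-0001.bin", shallow=False)
print("copy verified" if ok else "mismatch, do not delete the original")
```

Only after the verification passes would the node’s storage path be repointed at the new disk and the old copy on the pool removed.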
I would rather lose a single small node than lose a large node.
I have 7 servers; each disk is 1 node, so that’s 60 disks. The smallest server has 5 nodes, the biggest 14. They are all Windows GUI nodes.
Do you have an IP for each node, or 1 per server?
If that question is for me: the IPs depend on the situation. If a node is full, I use one IP for several nodes; if it’s a new node, it gets a separate IP.