Alexey
November 24, 2022, 7:56am
14
Again, you are using a filesystem with checksums and auto-healing. And notably, Synology doesn't call their analogues RAID5/RAID6, because they are improved versions of these topologies.
So, at my past job I proved this claim about unreliable RAID5 six times without intending to. It resulted in the decision to replace RAID5 with RAID10 in all 13 branches of that company.
You may also take a look at:
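For illustration, this is roughly how such a filesystem verifies and repairs data on demand; a minimal sketch, assuming a btrfs volume mounted at /volume1 (the mount point and the presence of redundancy to repair from are my assumptions, not details from the posts below):

sudo btrfs scrub start /volume1    # re-read every block and verify data/metadata checksums; blocks that fail are repaired from a redundant copy where one exists
sudo btrfs scrub status /volume1   # report scrubbed bytes plus corrected and uncorrectable error counters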
Hello,
I’m running a storj node on a QNAP since Nov-2021; it was running fine and now had about 3 TB of data.
Unfortunately, on 03 Sep I got a disaster on my NAS and lost the complete filesystem holding storj data … (basically a long power cut while I was sleeping; the NAS was improperly shut down as the battery ran out before auto-shutdown. Then my NAS was stuck loop-booting; once I finally went into degraded mode and stopped this horrible loop, I noticed that my RAID volume was corrupt beyond…
I had an installation issue that I finally resolved today.
I’m now stuck on another problem.
[folaht@Stohrje-uq /]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1,8T 0 disk
├─sda1 8:1 0 517,7M 0 part
├─sda2 8:2 0 517,7M 0 part
├─sda3 8:3 0 1,8T 0 part
├─sda4 8:4 0 517,7M 0 part
└─sda5 8:5 0 8G 0 part
mmcblk0 179:0 0 29,8G 0 disk
├─mmcblk0p1 179:1 0 213,6M 0 part /boot
└─mmcblk0p2 179:…
Cheers guys! Just wanted to pop in and confirm that I received the GE payment. It seems that not losing hope did the trick.
Hello,
Today I decided to add a 5th drive to the RAID6 array for storage expansion. The disk appeared in fdisk -l, and I also double-checked the state of the RAID after booting with the extra drive; it was clean with 4 drives. I created the needed partitions on the new drive and added it to the mdadm array. At this stage the 5th drive is supposed to stay as a “spare”, but I noticed a very strange situation. Instead of 4 healthy drives and the 5th as a spare, I found that 3 drives are healthy and the 4th drive is re-syncing after degradat…
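For reference, the expansion described above usually boils down to something like the following; a hedged sketch, with /dev/md0 and /dev/sde1 as assumed device names:

sudo mdadm --add /dev/md0 /dev/sde1           # the new member should appear as a spare
sudo mdadm --detail /dev/md0                  # confirm the four original members are active before reshaping
sudo mdadm --grow /dev/md0 --raid-devices=5   # reshape the RAID6 to include the fifth disk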
And yet people continue to believe in reliable RAID5/RAID6 (rather than zfs/btrfs).