the problem with raid5 is that each piece of data only exists in two points: the data itself and a single parity, while raid6 has 3 points (data plus two independent parities)… thus by essentially voting, raid6 can identify which drive is returning bad data.
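a toy sketch of the "two points" problem: with a single xor parity you can rebuild a disk you *know* is gone, but if a disk silently returns bad data, the parity mismatch looks the same no matter which disk flipped its bits. (hypothetical 4-byte "stripes" here, not a real raid implementation.)

```python
def xor_parity(blocks):
    # xor all blocks together, byte by byte (this is raid5's single parity)
    p = bytes(len(blocks[0]))
    for b in blocks:
        p = bytes(x ^ y for x, y in zip(p, b))
    return p

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data disks
parity = xor_parity(data)            # one parity disk

# case 1: disk 1 is known-dead -> rebuild it from the survivors + parity
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]            # reconstruction works fine

# case 2: disk 1 silently corrupts -> parity no longer matches, but
# nothing tells us WHICH of the four disks is the liar
corrupted = [data[0], b"BXBB", data[2]]
assert xor_parity(corrupted) != parity
```

raid6's second parity is computed differently from the first (not a plain xor), which is what gives it the extra "vote" to pinpoint the bad disk.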
not sure most synology boxes run plain raid5 tho… more likely some sort of hybrid raid5 thing (their SHR)… not 100% on that, but you might want to look into it…
rebuilding a lost raid5 disk requires the system to read all the remaining data and recompute the lost disk's contents from it.
so rebuilding may take longer than simply copying the data out of the array… ofc with a rebuild you won't have to find space to move the data to in the meanwhile… so there are both benefits and disadvantages to both solutions…
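a back-of-envelope sketch of why a rebuild hurts: every surviving disk has to be read in full just to recompute one disk. the numbers here (8 disks, 10 TB each, 150 MB/s sustained) are assumptions for illustration, not anyone's actual setup.

```python
# assumed example array: 8 disks of 10 TB, ~150 MB/s sustained per disk
disks, size_tb, mb_per_s = 8, 10, 150

# rebuild: read all 7 surviving disks end to end to recompute the lost one
rebuild_read_tb = (disks - 1) * size_tb

# best-case wall clock if the disks ran flat out in parallel:
# limited by fully reading one 10 TB disk, plus writing the replacement
best_case_hours = size_tb * 1e6 / mb_per_s / 3600

print(f"rebuild reads {rebuild_read_tb} TB total")
print(f"best case ~{best_case_hours:.0f} h per disk pass at full speed")
# in practice the rebuild competes with live node traffic for iops,
# which is why real rebuilds often take far longer than a straight copy
```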
if you are running plain raid5 with that many disks you would be better off running raid6
also, because the disks in a raid array run in harmony / sync with each other, your entire raid5 array has roughly the random iops of a single disk… which isn't great.
tho i know many people like to run without redundancy, i think it's worth the hassle… just because one won't have to start over after minor issues, but you may run into some kind of iops limitation as your node grows in size.
running multiple nodes would give you 8x the iops… and 1/7th extra disk space (no parity disk)… ofc you would then be completely exposed to data errors, but storj isn't too bad about that and it might be much easier to keep running that way… at least on a synology.
the smaller the raid array, the smaller the problem becomes, ofc… if you are running a hybrid raid5 then you may have some mitigation of the regular raid5 issues, like checksums that let the system locate which disk is in error and thus avoid overwriting the correct data when data is corrupted…
if you have that kind of raid5 then running something like 4 disks with 1 redundant is a pretty good spot imo… i know it costs extra space, but with 2x raid5 arrays like that you would have 2x the iops, at the cost of 1/7th of the data space again… ofc if you don't lack iops then migrating to raid6 might be my preferred choice…
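to make the trade-offs above concrete, here's a quick comparison of the layouts discussed, for 8 equal disks. "iops" is the rough random-write multiple relative to one disk (each parity array behaves like roughly a single disk for small random writes) — the specific numbers are back-of-envelope, not benchmarks.

```python
# layouts discussed above, all built from the same 8 disks
layouts = {
    "raid5 x1 (8 disks)": {"data_disks": 7, "iops": 1},  # 1 parity disk
    "raid6 x1 (8 disks)": {"data_disks": 6, "iops": 1},  # 2 parity disks
    "raid5 x2 (4 disks)": {"data_disks": 6, "iops": 2},  # 1 parity each
    "8 separate nodes":   {"data_disks": 8, "iops": 8},  # no redundancy
}

for name, l in layouts.items():
    print(f"{name}: {l['data_disks']}/8 of raw space usable, ~{l['iops']}x iops")
```

so 2x raid5 costs the same space as raid6 but doubles the iops, while 8 separate nodes maximizes both space and iops at the price of zero redundancy.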
TL;DR
if you've got regular raid5 and not some fancy hybrid, then go raid6
if you need more iops, split up the array or, like little skunk suggests, make 8 nodes on the 8 drives for 8x the iops.
if i had the room i would copy out the data… ofc the max space you can use on storj will keep you limited to maybe 40tb of data anyway without multiple ip's
and it could take years to reach that… if it doesn't cap out around 30tb first, nobody really knows how high the avg deletion ratio will be.