well a raid would most likely kick the disk out… so it would seem like an instant death…
zfs does that too, but i usually just shove them back in
at least until i get them replaced… i'd rather have an unstable disk than no disk in a zfs raid…
not sure about the others, but for raid6 it's certainly the same.
out of the 5-6 disks i've replaced in the last couple of years, none of them was an instant death and most of them are still storing data, just out of 24/7 operation… had one that died afterwards… just stopped completely… but it had already been throwing errors for a while before that…
also, most of my disks that went bad did so because i shocked them… such as by bumping into the unstable table my server sat on for a long time… after a couple of disks went bad, each time right after i bumped the table, i found a better table
i've also been using used drives, but for the last year i've only bought new 18TB drives… which seems to have helped a bit with how often drives go bad… tho i did already lose one 18TB
might really need a proper rack lol, but it's not really that uncommon for new disks to fail… i think the average is around 4% or so for the first year… then it's usually 2% or less if you get a good model.
best advice i've got for anyone running stuff that works: don't change anything…
and don't touch it… lol, stuff tends to just keep working… it's people that break stuff…
looking at the Backblaze disk stats, there is a pretty wide gap between models: some sit around 0.5% annual failure rate (AFR) and others around 4%, excluding the rare 20%+ AFR events that sometimes happen.
so running a single-disk node can certainly work out: you might get lucky and run for decades without issues… or the node dies on a brand-new disk in 3 months.
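just to put rough numbers on that (a back-of-envelope sketch of mine, not from the Backblaze data itself; it assumes a constant AFR and independent failures, which real disks don't strictly follow):

```python
# back-of-envelope: chance a single-disk node is still alive after N years,
# assuming a constant annual failure rate (AFR) and independent failures.
# real drives follow a bathtub curve, so treat this as a rough guide only.
def survival_probability(afr: float, years: int) -> float:
    return (1.0 - afr) ** years

for afr in (0.005, 0.04):  # the ~0.5% vs ~4% spread seen in the Backblaze stats
    for years in (1, 5, 10):
        p = survival_probability(afr, years)
        print(f"AFR {afr:.1%}: {p:.1%} chance the disk survives {years} years")
```

at ~4% AFR roughly a third of single-disk nodes won't make it to year 10, while at ~0.5% it's only about 1 in 20… which is basically why picking a good model matters so much.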
tho my main reason to use zfs with redundancy was to guard against software issues… i figured maintenance and the like could be a bigger risk factor, with the storj software being new.
with zfs and redundancy, disk errors are so unlikely to bite that stuff should just run forever if the software is good (rough numbers in the sketch below).
the secondary consideration was that it takes years to get nodes to proper sizes, so losing them would be a big setback…
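for comparison, here's how hard redundancy cuts those odds (again just a sketch of mine, using a hypothetical 2-way mirror, a constant AFR and a fixed resilver window; it ignores unrecoverable read errors and correlated failures, which matter a lot in practice):

```python
# very rough odds of losing a 2-way mirror in a given year: one disk has to
# fail, and its partner has to die during the resilver window.
# assumptions (mine, not from the post): constant AFR, independent failures,
# and a replacement that starts resilvering immediately.
def mirror_loss_per_year(afr: float, resilver_days: float) -> float:
    p_any_first = 2 * afr                    # either of the two disks fails first (small-AFR approximation)
    p_partner = afr * (resilver_days / 365)  # the survivor also dies before the resilver finishes
    return p_any_first * p_partner

for afr in (0.005, 0.04):
    p = mirror_loss_per_year(afr, resilver_days=3)
    print(f"AFR {afr:.1%}: ~{p:.5%} chance per year of losing the whole mirror")
```

even with the worst 4% AFR disks, the per-year odds of losing the pool drop from a few percent to hundredths of a percent… which is why, disk-wise, a redundant pool basically just runs.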
is it worth it to run raid for storj… yes and no…
big old nodes certainly make more sense to run on a raid, while new nodes are basically worthless at first, so running them on single disks makes the most sense.
also you get optimal iops out of the hardware with single-disk setups…
raid is so restrictive on iops… a raidz vdev does random io at roughly the speed of a single disk, so the same drives split into single-disk nodes give you many times the iops.