i am really happy with zfs, it’s a very mature solution… so far zfs has only impressed me.
the only thing that really subtracts from zfs is the hassle of adding and removing drives from raidz pools.
the benefit of zfs using Copy on Write and having multiple levels of checksums on everything cannot be overstated… it fixes one of the main issues with running raid5 type solutions.
raid with 1 redundant drive is usually a no go on the older raid solutions, because when an error turns up there are two drives holding versions of the data…
so if a disk spits out incorrect data as it goes bad, the system has to guess which disk is lying… and without checksums it cannot verify which one it is…
thus it has to fall back on other metrics, stuff like smart data and previous error history, to decide which disk to trust.
however that doesn’t always give the correct answer, so a regular raid5 might reconstruct the array from the bad disk’s data and corrupt a perfectly good array, simply because it had no checksums to tell which of the two disks to believe.
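to illustrate the idea, here’s a rough python sketch — nothing to do with zfs internals, the names and layout are made up — showing why a stored checksum lets you pick the correct copy instead of guessing:

```python
import hashlib

def checksum(block: bytes) -> str:
    # zfs uses fletcher4 or sha256 per block; sha256 stands in here
    return hashlib.sha256(block).hexdigest()

good = b"important data"
stored_sum = checksum(good)      # checksum kept separately from the data itself

disk_a = good                    # healthy copy
disk_b = b"important d\x00ta"    # silent bit rot on the other disk

def read_with_checksum(copies, expected):
    # with a checksum we can tell exactly which copy is lying,
    # and the good copy can be used to rewrite the bad one
    for copy in copies:
        if checksum(copy) == expected:
            return copy
    raise IOError("all copies corrupt")

assert read_with_checksum([disk_b, disk_a], stored_sum) == good
```

without that stored checksum, all you’d know is that the copies disagree… which is exactly the guessing game classic raid is stuck with.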
ofc zfs comes with some overhead, and you will want an ssd as a slog device to reduce the io load… tho not strictly required, i would say it’s highly recommended because of the performance benefits… it might even extend the life span of the drives, since sync writes no longer hit the pool disks twice (once for the intent log, once at commit), roughly halving that write load.
i’m unaware of any solution that comes close to zfs, tho stuff like ceph might be the future of storage, but that is more of a cluster solution.
zfs even makes a difference in non raid setups: because it checksums everything and verifies on every read, you are informed the second there is an issue writing or reading the data…
so you become aware of a failing hdd or cable long before smart data or outright corruption makes it obvious, which gives you a real chance to get ahead of problems, even when using just a single drive.
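the single-drive case boils down to something like this toy python store (illustrative only — `ChecksummedStore` is made up, not a zfs api): keep a checksum next to every block and verify it on every read.

```python
import hashlib

class ChecksummedStore:
    """toy single-drive store that verifies a checksum on every read."""

    def __init__(self):
        self.blocks = {}  # name -> (data, checksum)

    def write(self, name: str, data: bytes):
        self.blocks[name] = (data, hashlib.sha256(data).hexdigest())

    def read(self, name: str) -> bytes:
        data, chk = self.blocks[name]
        if hashlib.sha256(data).hexdigest() != chk:
            # no second copy to repair from, but at least you know immediately
            raise IOError(f"checksum mismatch on {name}: drive or cable going bad?")
        return data

store = ChecksummedStore()
store.write("photo", b"\x89PNG...")

# simulate a bit flipped somewhere between the platter and the bus
data, chk = store.blocks["photo"]
store.blocks["photo"] = (b"\x88PNG...", chk)

try:
    store.read("photo")
except IOError as e:
    print(e)  # the problem surfaces on the very next read, not months later
```

a plain filesystem would have handed back the flipped bytes without a word… that early warning is the whole point.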
personally i will never use a non Copy on Write or non checksum based storage solution again.
but then again i’m not even sure i will ever store stuff without redundancy lol
so maybe i’m a bit biased.
if you are used to zfs, i would say it’s an obvious choice, and even for people new to it i would recommend it, so long as they are technically inclined… i spent a lot of time looking for a storage solution… and zfs was what i ended up deciding on…
it is not the greatest choice on those mid range 2-4 disk setups tho… if you want flexibility and performance with limited hardware, zfs might be the wrong choice… but for stability and data integrity, from laptops all the way up to setups nearing hundreds of disks, zfs will never be a bad choice… ofc stuff like ceph should be considered once you go beyond 50-100 disks across 3-4 servers…
it might be the superior choice at that scale.
haven’t really gotten around to learning ceph because my setup is still a bit too small for it…