Using ZFS ARC cache on non-ECC RAM

Every filesystem trusts RAM blindly, and so does the OS… nobody checksums data while it sits in RAM. If files are written asynchronously, they are buffered in RAM until they are flushed to disk, and with every filesystem that data can get corrupted if the RAM flips a bit (the sketch below shows how to shrink that in-RAM window).
ZFS just has more features that depend on RAM and is designed for server environments where data integrity is the top priority. That's why every article puts extra emphasis on ECC RAM: you need it for proper end-to-end integrity, since it doesn't make sense to make the filesystem "rock-solid" if it can easily be corrupted by another component that isn't as safe as it could be.
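
For what it's worth, you can shrink how long async writes sit only in RAM. This is a minimal sketch assuming a Linux OpenZFS system and a hypothetical dataset named `tank/data`; the `sync` property and the `zfs_txg_timeout` module parameter are standard OpenZFS knobs, the names here are just placeholders.

```bash
# Check how the dataset handles write ordering
# (standard = honor application sync calls, always = commit every
#  write to stable storage before acknowledging it, disabled = never sync).
zfs get sync tank/data

# Force every write to be committed to the ZIL before it is
# acknowledged, at a noticeable performance cost.
zfs set sync=always tank/data

# Async writes are flushed in transaction groups; this shows the
# flush interval in seconds (OpenZFS on Linux, default 5).
cat /sys/module/zfs/parameters/zfs_txg_timeout
```

Note that `sync=always` only trades throughput for a smaller window; it doesn't protect against a bit flip that happens before the write is issued.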

The way you explain it makes ZFS look like a horrible choice for SNOs, only because you expect the RAM to be damaged so badly that it destroys the whole filesystem. RAM that is damaged this badly might just as well corrupt an ext4 or NTFS filesystem.

I use ZFS on my home server with non-ECC RAM and I trust it to work well enough, at least better than ext4, btrfs or NTFS. Ultimately, if my RAM ever fails, then yes, my ZFS filesystem might get destroyed, probably just like an ext4 one would. That's what backups are for (see the snapshot sketch below).
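
And ZFS makes those backups cheap. A minimal sketch, assuming a hypothetical dataset `tank/storj` and a remote machine `backuphost` with a pool named `backup`; `zfs snapshot`, `zfs send` and `zfs receive` are standard commands, everything else here is placeholder naming.

```bash
# Take a read-only, point-in-time snapshot of the dataset.
zfs snapshot tank/storj@2024-01-01

# Replicate the snapshot to another machine over SSH.
zfs send tank/storj@2024-01-01 | ssh backuphost zfs receive backup/storj

# Later, send only the blocks that changed since the last snapshot.
zfs snapshot tank/storj@2024-02-01
zfs send -i tank/storj@2024-01-01 tank/storj@2024-02-01 \
  | ssh backuphost zfs receive backup/storj
```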
But generally, the chance of stored data (backups, archival data, storj data) getting silently corrupted is very low, because once written it doesn't get loaded into RAM, modified and written back to disk. So unless the whole filesystem gets corrupted, such data is safe, and in fact safer than on ext4 or NTFS, because ZFS has checksums and can repair itself if the HDD develops a bad sector (see the scrub sketch below).
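
That self-repair is what a scrub exercises. A minimal sketch, assuming a redundant pool (mirror or raidz) named `tank`; the pool name is a placeholder, `zpool scrub` and `zpool status` are standard commands.

```bash
# Read every block in the pool, verify it against its checksum,
# and rewrite any block that fails from a redundant copy.
zpool scrub tank

# Check progress and results; the CKSUM column counts blocks that
# failed verification, and "errors:" lists any unrecoverable files.
zpool status -v tank
```

Without redundancy, ZFS can still detect corruption during a scrub; it just can't repair it.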
But if you have an extra $200 for ECC RAM to build a "rock-solid" storj node, then by all means…
If not, then it's ridiculous to paint ZFS as worse than ext4 or NTFS for SNOs, because the chances of visible data corruption are about the same, especially since we are comparing consumer-grade hardware across all components.
