Alexey
February 1, 2024, 2:54am
Hello @oty ,
Welcome to the forum!
See: Topics tagged zfs
Running a checksumming filesystem isn't for the speed, it's for the data integrity. zfs speeds up some tasks, but raw writes and such will always be slower, because more IOPS are written to the drives. There is also a 3-4x minimum I/O amplification if xattr and atime are enabled, which they are by default.
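For example, that per-write amplification can be reduced by disabling atime and storing xattrs in inodes. A minimal sketch, assuming a hypothetical dataset named `tank/storagenode`:

```
# disable access-time updates (saves a metadata write on every read)
zfs set atime=off tank/storagenode

# store extended attributes in the inode ("system attribute") instead of
# hidden sub-directories; the default xattr=on costs extra IOPS per file
zfs set xattr=sa tank/storagenode

# verify both properties took effect
zfs get atime,xattr tank/storagenode
```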
and
Hello guys!
I decided to conduct some research to choose the optimal file system in terms of the performance/cost ratio per 1 TB. The series of articles is a summary of my personal experience operating storj nodes and does not claim to be the ultimate truth, but rather represents my personal opinion.
The storj node itself is a lightweight application and under normal/typical circumstances, it does not consume a large amount of resources in terms of CPU/RAM. However, even in normal situations, …
you should run one node per disk. If you use RAID with parity, the whole pool will work at the speed of the slowest disk in the pool. This is true for any implementation, whether LVM, ext4, btrfs, or zfs. For a single-disk configuration the speed will be about the same across them too. However, in the case of zfs you may improve it by throwing resources at it (an SSD special device, more RAM, more tuning), as in the sketch below.
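As an illustration of "throwing resources at it", here is how a mirrored SSD special device could be added to take metadata off the HDDs. The pool name `tank`, the dataset `tank/storagenode`, and the device paths are hypothetical; note that a special vdev holds pool-critical metadata, so losing it loses the pool, hence the mirror:

```
# add a mirrored SSD special vdev to hold pool metadata
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

# optionally route small file blocks to the SSDs as well
zfs set special_small_blocks=4K tank/storagenode

# confirm the new vdev class appears in the pool layout
zpool status tank
```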
See why (the linked post is about BTRFS, but that is also a filesystem with larger metadata, so the same applies to zfs):
Some short notes from the top of my head, as I don’t have time to elaborate.
The -t 300 thread was about a default non-Synology setup on a single HDD. As you cheerfully agreed, it’s not optimized for speed, having duplicated metadata and such.
To support all these fancy new features, btrfs metadata is, by necessity, larger. This results in bigger RAM usage for caching metadata, and as such, when you reach the threshold where you can no longer fit metadata in RAM cache, you fall off a performan…
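One way to see how much metadata a btrfs filesystem actually carries (and thus how much RAM its cache would need) is the standard usage report; the mount point `/mnt/storj` here is hypothetical:

```
# data vs. metadata allocation on a btrfs filesystem
btrfs filesystem df /mnt/storj

# more detailed breakdown, including per-device allocation
btrfs filesystem usage /mnt/storj
```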
See also
My notes from performance optimization efforts on ZFS.
Hardware setup for reference: the node holds around 9TB of data today, running on an old 2012-ish era FreeBSD server; there is a single zfs pool consisting of two 4-disk RAIDZ1 VDEVs. Out of 32GB of RAM, 9 are used by services, and the rest is available for ARC. There is a 2TB SSD used as L2ARC, but it is not really being utilized; I just had it, it is not necessary in any way. To speed up synchronous writes I have a cheap 16GB Optane device mounte…
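For reference, a dedicated Optane-style device like that is attached as a SLOG, and its effect on synchronous writes can be watched live. A minimal sketch; the pool name `tank` and the device path are hypothetical:

```
# attach a dedicated log (SLOG) device to absorb synchronous writes
zpool add tank log /dev/nvme0n1

# watch per-vdev activity, including the log device, every 5 seconds
zpool iostat -v tank 5
```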