I decommissioned my old TrueNAS system and wanted to compare different ZFS pool layouts (rough setup commands follow the list):
- pool1: ARC only
- pool2: persistent L2ARC (metadata only)
- pool3: special vdev (metadata only)
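For reference, this is roughly what the three layouts look like as plain OpenZFS commands. It is only a sketch: the pool and device names are placeholders, not the exact commands I ran.

```sh
# pool1: single HDD, ARC (RAM) only
zpool create pool1 /dev/disk/by-id/HDD1

# pool2: single HDD plus an SSD as L2ARC, limited to metadata
zpool create pool2 /dev/disk/by-id/HDD2
zpool add pool2 cache /dev/disk/by-id/SSD1
zfs set secondarycache=metadata pool2

# pool3: single HDD plus an SSD as special vdev
# (metadata only, because special_small_blocks stays at 0)
zpool create pool3 /dev/disk/by-id/HDD3 special /dev/disk/by-id/SSD2
```

As far as I know, current OpenZFS keeps the L2ARC persistent across reboots by default (the l2arc_rebuild_enabled module parameter), which is what I mean by "persistent L2ARC" above.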
Disclaimer: this is far, far away from scientific testing!
It is only good for a very, very rough estimate.
I tested how long the lazy filewalker took to run.
The storage node has 4GB of RAM, runs from a fast NVMe SSD, and keeps its databases locally.
The Storj data sits on an NFS share from a TrueNAS SCALE system with 64GB of RAM (32GB ARC).
Pool1 was a single Seagate IronWolf 8TB drive.
Pool2 was a Toshiba N300 plus a Samsung PM871 256GB SSD as L2ARC.
For pool3 I forgot to note which HDD I used; I think it was the Seagate IronWolf 8TB again. The SSD for the special vdev was a Samsung PM871 256GB.
Other ZFS settings: record size 16MiB, sync disabled, atime off, lz4 compression.
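Applied on the command line, those settings would look something like this (the dataset name pool1/storj is just a placeholder, and 16MiB records need a reasonably recent OpenZFS):

```sh
# same properties on every test dataset
zfs set recordsize=16M compression=lz4 atime=off sync=disabled pool1/storj
```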
Here is how long the filewalker took on a 5TB node:
| pool | first run | second run |
|---|---|---|
| ARC | 459 min | 417 min |
| L2ARC | 88 min | 85 min |
| special vdev | 79 min | 78 min |
My personal conclusions:
- With ARC alone, the filewalker takes ages.
- ZFS needs some kind of additional cache for the Storj workload.
- L2ARC and special vdev both work great. Unlike the special vdev, the L2ARC does not need to be mirrored, since losing a cache device does not endanger the pool.
My open questions (a sketch of commands to dig into them follows the list):
- Why does ZFS show absurdly high compression ratios on all pools that can't be true? I have discussed this in other posts before. No matter which pool layout, `zfs get compressratio` shows 5.39x, which simply can't be true; it should be close to 1, since Storj pieces are encrypted and basically incompressible.
- How would a very big L2ARC behave during boot? CORE still does not make the L2ARC persistent across reboots by default because rebuilding it used to drag out the boot process; on SCALE it is enabled by default.
- Is the L2ARC solution scalable? It currently uses 15GB of L2ARC for a 5.5TB node plus some amount of RAM for the headers. I'm not sure how to find out how much RAM the L2ARC uses, and more L2ARC would take up more RAM.
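A rough sketch of the commands I would use to poke at these questions (pool1/storj is again a placeholder, and the exact kstat and section names may differ slightly between OpenZFS versions):

```sh
# 1) Compression ratio sanity check: compare logical vs. allocated bytes directly
zfs get -p used,logicalused,compressratio pool1/storj

# 2) Persistent L2ARC: 1 means the L2ARC is rebuilt (kept) across reboots
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled

# 3) RAM consumed by L2ARC headers
grep l2_hdr_size /proc/spl/kstat/zfs/arcstats
arc_summary | grep -i -A 20 l2arc
```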