i'm running an old rack server: a Tyan S7012 dual-socket board with two Xeon L5640s at 2.13GHz (4 cores / 8 threads per CPU), 48GB of ECC RAM, and 2x LSI HBAs serving a 12x 3.5" front-loaded disk bay backplane.
storage is a zfs pool of 3 raidz1 vdevs of 3 drives each, plus one old 256GB MLC SATA SSD (i actually think it's a first-gen consumer drive) which runs the OS and holds the SLOG, and a second, more modern 750GB QLC SATA SSD dedicated solely to L2ARC, partitioned down to 600GB to reduce wear.
(i really should restructure this a bit and set up a maybe 200GB mirrored boot pool across both SSDs, to give the OS an extra level of redundancy.)
(it was the best-performing, easiest-to-manage setup i could handle. if i was recreating the pool today i would do 3x raidz1 of 4 drives each… i found out my onboard SATA controller can handle modern disk sizes, or maybe it routes through the more recent HBAs, i'm not quite sure… most hardware of this age runs into a 2TB max drive size, but for whatever reason it doesn't seem to be a problem here.)
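for reference, a rough sketch of how a layout like this could be created from scratch… the pool name and device names are made up, and on a real system you'd want /dev/disk/by-id/ paths and your own SLOG/L2ARC partitions:

```sh
# 3x raidz1 vdevs of 3 drives each (hypothetical device names)
zpool create -o ashift=12 storage \
  raidz1 sda sdb sdc \
  raidz1 sdd sde sdf \
  raidz1 sdg sdh sdi

# SLOG on a partition of the old MLC ssd (the rest of that ssd holds the OS)
zpool add storage log sdj2

# L2ARC on the 600GB partition of the QLC ssd
zpool add storage cache sdk1
```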
i also have a “small” 2x 6TB SAS mirror pool for personal usage, but most stuff ends up on the storage pool anyway, because it has the SSD cache.
everything is running sync=always to guard against data loss on power failure, since i doubt my SSDs have PLP (power loss protection)… and for my amazing electrical safety i got a $10 power strip with surge protection xD lol
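for anyone curious, that is literally just one property on the pool (assuming it's called storage):

```sh
# force synchronous semantics for all writes on the pool, then confirm it took
zfs set sync=always storage
zfs get sync storage
```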
and my entire network is rigged together from old 1gbit routers converted into switches… there are 3 or 4 of them… i really should get those replaced, but i was kinda hoping to get some 10gbit in there somewhere…
the server has two onboard dual-port NICs for a total of 4x 1Gbit ports.
the reason i run 3x raidz1 is to triple my zfs pool's raw IOPS.
the disks in a raidz vdev work in lockstep, so a vdev's random IO is limited to roughly what the slowest drive in it can manage, no matter how many drives the vdev has… the pool's IOPS therefore scale with the number of vdevs, not the number of drives.
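you can actually watch this per vdev… zpool iostat with -v breaks the read/write operations down per vdev instead of just per pool (pool name assumed to be storage):

```sh
# refresh every 5 seconds; each raidz1 vdev gets its own ops/bandwidth rows,
# so you can see the three vdevs sharing the random IO load
zpool iostat -v storage 5
```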
i'm running 512k recordsize on the storagenode dataset, everything else is on the zfs default of 128k. zfs records are variable size though, so recordsize is the maximum record size, not the size every record will be… however, like mentioned in the previous comment, the RAM usage can get a bit excessive with 512k recordsize… because zfs frees memory based on an expected number of records, or something like that… advanced stuff i don't know all the details of… so if it expects say 5000 records, it will dump 5000x 512k of memory… it adds up real quick…
so even if those 5000 records turn out to be tiny 4k records, it will still dump the maximum amount because it wants to be ready for the incoming data, or something along those lines…
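for reference, recordsize is just a dataset property, but it only applies to newly written blocks… existing files keep whatever record size they were written with until they are rewritten, which is also why switching an existing dataset takes forever:

```sh
# only affects blocks written after the change; existing data keeps its old record size
zfs set recordsize=512K storage/storagenode
zfs get recordsize storage/storagenode
```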
like if i start a scrub it will do something like 500000 reads a second, most of that served from the ARC of course… but that will make zfs dump most of my ARC for no good reason…
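if you want to check whether a scrub really is pushing things out of the ARC, the arcstat and arc_summary tools that ship with OpenZFS show the ARC size and hit rate (assuming your distro installs them under those names):

```sh
# live view: prints a line every 5 seconds with ARC reads, misses and current size
arcstat 5

# one-shot report of ARC size, hit ratios, L2ARC stats, etc.
arc_summary
```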
but i figured it would take basically forever to switch to 256k now… and 512k does make my scrubs run incredibly well… the last scrub of the 14TB of data on the storage pool took 12 hours.
and i can most likely shave a couple of hours off that if i delete some old test files i made, which are just a few million near-empty 10-byte files.
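as a rough sanity check of that number (ignoring the skew from those tiny files):

```
14 TB in 12 hours ≈ 14,000,000 MB / 43,200 s ≈ 324 MB/s average scrub rate
spread over the 9 drives that's very roughly 36-55 MB/s each (depending on how you count parity),
well within what a single hdd can sustain sequentially
```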
i really like the IOPS the pool gets with this 3x raidz1 setup… the hdds alone will do something like 4000 reads a second.
all of my drives are a mix of 3TB and 6TB enterprise SATA plus 2 SAS drives (i later found out that i cannot mix SAS and SATA in the same pool… which gave me a ton of grief and data errors… zfs took it like a champ tho).
never lost a byte… and not for lack of abuse… so far i've pulled drives beyond the redundancy limit while the pool was in use… (can't recommend that tho)
there was a while where i was trying to get gpu passthrough to work… which ended up making the server unstable, and it took me a few days to figure out and solve the issue… meanwhile the server just rebooted at random… i must have hard-reset the machine 40 times… and still didn't lose a byte.
that's the power of sync=always… my storagenode has never had any corruption of its databases or similar damage…
oh yeah, i also run patrol scrubs on my memory… of course there are many more, even less relevant, details…
i wrote a slightly more structured piece on HA setups, if you are interested in the thought process behind integrating basic HA concepts into your system… if the hardware allows, of course…
you can find it in the SNO Flight Manual - here
maybe i should write one for RAID as well… hopefully it will be a bit more structured than this one… though i have tried to make this better by going through it again… should be worth a read, if you made it this far lol
but yeah, my setup is lots of old hardware… the server was mostly just because i really wanted a server so i could run some vm's and play with enterprise hardware…