Bandwidth utilization comparison thread

Yeah, forgot to mention that option, but I was moving from a 3-drive raidz1 to a 4-drive raidz1, so sadly couldn't do that… When I started using ZFS I was of the arrogant notion that I could improve upon the settings… sure, there is certain fine tuning one can do, but I've learned that ZFS has very good default configurations, and now for the most part I don't tinker much with them.

Though I do run a 256K recordsize on my storage node to make migrations faster. I didn't really account for the advantage of rsync over zfs send / recv; the reason I was using rsync was to zero out my fragmentation, not that it was bad… but I had been testing dedup, just on the VHDs for my VMs,
and it caused the fragmentation to rise pretty sharply… and I haven't really tried zfs send / recv yet…
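
For context, the recordsize bit is just a per-dataset property; a minimal sketch, with the pool/dataset names as placeholders for my actual ones:

```
# a larger recordsize only applies to newly written blocks;
# existing data keeps whatever recordsize it was written with
zfs set recordsize=256K tank/storagenode

# fragmentation here means free-space fragmentation, reported per pool
zpool list -o name,size,allocated,fragmentation,capacity
```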

The reason I don't run larger raidz's is IOPS. With 4 HDDs in a raidz1, and only 6TB HDDs, it doesn't take very long to resilver… haven't really tried that much either, but the math behind it is pretty basic: my setup only needs to read 3 HDDs to regenerate 1 HDD's worth of data for a resilver.
I also kinda like that each pool can sit on one SAS port on the HBA… not sure if that really matters, but I figured it might… and 4 HDDs is a nice spot for capacity utilization, with only 25% lost to redundancy.
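
Roughly what one of those pools looks like when created (the disk IDs are placeholders, not my actual drives):

```
# one 4-disk raidz1 vdev per pool, each pool hanging off its own SAS port
zpool create tank raidz1 \
  /dev/disk/by-id/ata-EXAMPLE_6TB_1 \
  /dev/disk/by-id/ata-EXAMPLE_6TB_2 \
  /dev/disk/by-id/ata-EXAMPLE_6TB_3 \
  /dev/disk/by-id/ata-EXAMPLE_6TB_4

# 4 x 6TB raidz1 = ~18TB usable, 6TB (25%) to parity;
# a resilver reads the 3 surviving disks to rebuild the replacement
```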

I do try not to punish my drives too much, which is also why I like having decent IOPS, if one can call it that when 8 drives in a pool only get you 2 HDDs' worth of IOPS lol

Does zfs send / recv really create that much more load? Or I guess it wouldn't have the same IOPS demand, since it streams the whole dataset instead of copying each file individually… but then again, with many small files there would most likely be a record per file, so it would end up almost the same…

Of course I guess it's a lot more sequential transfer instead of spending a good deal of time on seeks like rsync does… my experience with ZFS is very much still in the lots-to-learn stage… but I've already learned a ton :smiley:
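
For whenever I do try it, my understanding is it would go roughly like this (snapshot and dataset names are just examples):

```
# initial full replication to the new pool
zfs snapshot tank/storagenode@migrate1
zfs send tank/storagenode@migrate1 | zfs recv newtank/storagenode

# incremental catch-up while the node keeps running;
# -F rolls the target back to the last received snapshot first
zfs snapshot tank/storagenode@migrate2
zfs send -i @migrate1 tank/storagenode@migrate2 | zfs recv -F newtank/storagenode
```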

I've been pondering raising my metadata % in the L2ARC / ARC, because it seems to give a great boost… maybe… Of course there is the special device option, but then there's the whole "if the special device dies, the whole pool dies" problem… so one would really need a mirror to depend on it…
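
The non-special-vdev way I've been eyeing is just biasing what gets cached; a sketch, assuming a single dataset and an OpenZFS version that still has the old ARC metadata tunable (the parameter names shift between releases):

```
# cache only metadata for this dataset in the L2ARC
# (primarycache / secondarycache accept all | metadata | none)
zfs set secondarycache=metadata tank/storagenode

# older OpenZFS exposes the ARC metadata share as a module parameter;
# newer releases replaced it, so this file may not exist on every system
cat /sys/module/zfs/parameters/zfs_arc_meta_limit_percent
```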

And though I trust my new old PCIe SSD to have plenty of safeguards… I don't really like the idea that if it dies, the pool is toast.
Oh yeah, and I run sync=always to limit fragmentation, and with a PLP SSD it's my backup in case the power goes out… that way the odds of file corruption or similar issues are basically nonexistent. Does require a pretty good SSD though…
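
What I mean by that, roughly; I'm assuming the PLP SSD is attached as a dedicated log device, and the device path is a placeholder:

```
# dedicated SLOG on the power-loss-protected SSD
zpool add tank log /dev/disk/by-id/nvme-EXAMPLE_PLP_SSD

# force every write through the ZIL, so it lands on the SLOG first
zfs set sync=always tank
```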

What's the advantage of using a UUID… is that the one you can configure via GPT?
I've been using /dev/disk/by-id, which I kinda like, though it is a bit annoying that ZFS is very averse to letting a drive go again… I hear using the GPT name / partition label is the way to go, because then one can create a new GPT, the identifier changes, and ZFS stops trying to use the drive…
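
I haven't switched over myself, but as I understand it, that side of things looks something like this (device names are placeholders, and labelclear is destructive):

```
# GPT partition identifiers; a fresh partition table gets new ones
ls -l /dev/disk/by-partuuid/

# make ZFS let go of a drive it still recognizes as an old pool member
zpool labelclear -f /dev/disk/by-id/ata-EXAMPLE_OLD_DISK
```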

It is kinda nice that the /by-id name is built from the serial number, which is printed on the drive sticker by default… a nice help at times. I never did manage to get that bay-LED blinking thing to work; that's one of the features I really miss from running the MegaRAID software on Windows.
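
For reference, the serials show up right in the names, and the bay blinking is supposedly doable from Linux with ledmon, assuming the HBA/backplane supports it (never got that far myself; the device name is a placeholder):

```
# by-id names embed vendor/model/serial, matching the drive sticker
ls -l /dev/disk/by-id/ | grep -v part

# blink / un-blink a bay LED (needs SES/SGPIO support on the enclosure)
ledctl locate=/dev/sda
ledctl locate_off=/dev/sda
```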

And actually, if we want to go into details, then ZFS per default doesn't auto-expand to fit bigger drives before you turn that on… :smiley: but of course that usually gets handled pretty early, and then one doesn't have to think about it again any time soon, if ever…
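
That's the autoexpand property, which as far as I know is off by default; a quick sketch (pool and device names are placeholders):

```
# let the pool grow into larger replacement drives automatically
zpool set autoexpand=on tank

# or expand a single already-replaced drive in place
zpool online -e tank /dev/disk/by-id/ata-EXAMPLE_BIGGER_DISK
```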

ZFS is very awesome, if rather confusing at times… I've been trying to increase my queue depth, because I'm just plain bad at some of this stuff and all my drives are in one… something on the HBA, some ID thing, I forget what it's called… and ZFS sets queue depths per vdev, I think… not volumes, not virtual drives…

But I have since found out that it doesn't seem to be a problem… it was just my system creating a ton of random IOPS, and thus my SSDs couldn't keep up.
sync=always is a tough one to pull off… but it has its advantages.
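
For the queue depth thing, what I've pieced together is that the scheduler works per vdev and the limits are module parameters; a look-don't-touch sketch (parameter names from the OpenZFS versions I've used, may differ on others):

```
# per-vdev I/O queue limits
cat /sys/module/zfs/parameters/zfs_vdev_max_active
cat /sys/module/zfs/parameters/zfs_vdev_sync_write_max_active
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active

# watch per-vdev throughput and latency while the system is busy
zpool iostat -v 5
```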