ZFS discussions

The way I found it described was that they weren't allowed to be in the same array… but I had a couple of empty bays, and each SFF-8087 port from an HBA supports 4 disks… which seems to be fairly common, so I put them there and made them their own pool just to try it out, and it worked like a charm… pretty sure the answer I found said separate arrays, but I guess that might also be a matter of interpretation; he may have meant it the way you say it… I dunno…

I know it works, and the errors I got from having SAS and SATA mixed just jumped around, giving me random errors on random disks… it was hell to troubleshoot, and ended with me just trying stuff I hadn't thought of… I basically took everything apart and tested just about every setting in the BIOS…
Because after a while I realized it wasn't an isolated problem… it would cause read and write errors, but move two disks around and a 3rd might stop getting errors while they jumped to a 4th disk instead…

Quite interesting and tricky to solve, once I realized it wasn't that my 2nd hand drives were all bad…
I actually ended up buying new drives, because initially I just thought… HA, that's what you get for buying 2nd hand drives, you fool… :smiley:

I don't think it's all that elaborate… okay, maybe a little bit… I have had the backplanes… 3x 4-port on 3 different controllers, all of them LSI tho… 4 if I count the current dual HBA card as 2, but essentially each one counts as one of the 3, for the backplane it controls… they have most likely been switched around tho, but they use the same generation of chip, so essentially it's all one setup.

I also think they worked fine before I installed Proxmox… when I installed Linux I lost my IPMI, the LEDs in the trays got all weird, and I couldn't easily go in through software and have it tell me which disks to take out…

I like the concept of Linux, and it's nice that I can really troubleshoot and modify stuff… still kinda janky… but in the past Linux never survived me for more than a couple of weeks… so at least it's more robust now when people try to tear into it…
Not sure I had all the activity LEDs tested back then tho… so one or two of them might simply be broken… there is also that weird SATA/SAS thing with the LEDs… for some reason many servers seem to use a different method for lighting the LEDs with SATA than with SAS… so for example the power light won't be on with the SATA drives, but it will with the SAS ones… kinda weird… I've seen it in some server-related videos… never really got a good answer for why that seems to be a thing.

I like the DC P4610, it's a very nice drive; I was considering that when I ended up buying my IoMemory, but I kinda wanted it to be over PCIe to save ports, and it had some of the lowest latency I could find and could handle parallel data streams… apparently it's made to be a swap drive… so I could just go that route, I guess… not sure how that would behave with ZFS tho…

The only problem with that is that I'm running 512B-sector hardware on my main pool, because from my initial testing with Storj it looked a bit heavy on the IOPS, and 4Kn was generally more expensive on the 2nd hand market… and I couldn't find a great reason for doing 4Kn aside from slightly higher throughput at very high transfer rates… which seems mildly irrelevant… turns out the reason people like 4Kn over 512B is that 512B takes 8 times the memory to manage the space-allocation tables for something like a swap drive…
And tho people claim it shouldn't matter, I have been unable to run my L2ARC and SLOG PCIe SSD in 4Kn, so I was forced to reformat it to 512B sectors, and since then it has worked like a charm, until it threw its drivers… because they were custom, no doubt… I kinda knew that sort of stuff would happen when I installed them… but it was all I could find for Debian… I bought it after checking it could do Linux… and it was like Windows, CentOS, RHEL… Debian… turned out it was like Debian 8 or 9. Almost ended up with me bailing on Proxmox and going to… the other major enterprise Linux that isn't exactly a hypervisor type deal… the name escapes me right now. SUSE!!! ofc
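Just to put numbers on that 8x figure, here's a rough sketch of the bookkeeping difference; the per-entry byte cost is only an assumed placeholder for illustration, since the real number depends on the filesystem or swap implementation:

```python
# Rough illustration of why 512B sectors cost more tracking memory than 4Kn.
# ENTRY_BYTES is an assumed placeholder; real allocators differ.
CAPACITY = 1_600_000_000_000      # ~1.6 TB, roughly the size of the FusionIO card
ENTRY_BYTES = 8                   # assumed bookkeeping cost per sector/block

for sector in (512, 4096):
    entries = CAPACITY // sector
    overhead_mib = entries * ENTRY_BYTES / 2**20
    print(f"{sector:>4} B sectors: {entries:>13,} entries ~= {overhead_mib:,.0f} MiB of tracking data")

# 512B needs 4096/512 = 8x as many entries, hence roughly 8x the memory for the tables.
```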

It sounds really good, ofc one would want the enterprise version… and again, even with openSUSE or whatever it's called, I wasn't sure I could get drivers, and I was just starting to find my footing with Proxmox and Debian… the reason one place said Debian Buster was that there were open-source drivers based on a competent rewrite of the originals, just with lots of older hardware compatibility removed, because nobody uses a Fusion ioDrive Duo or the like anymore… even if their performance is amazing, the hundreds of watts of power draw is just insane for something a modern SSD can get close to at 5 or 10 watts.

So they cut all that out of the drivers, which makes good sense, and the drivers are great… they just forgot to make them survive a kernel update… whoops, but of all the mistakes they could have made, I'm very impressed… I'm not even using a card they tested the drivers on… and it worked flawlessly.
Maybe… I have had some weird output from the drive info in Netdata monitoring…

2000 years of latency or 0 seem to be the only states it will report… lol, and transferring 80 TB/s.
I suppose that could be related to the drivers, but it's been the only thing I've seen…
I've got 48GB of RAM and I'd just like the L2ARC to support that; it would make so much more sense for my 1.6TB FusionIO2 to be shared across them all, so I use about 1TB for L2ARC.
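For anyone sizing this: each L2ARC record keeps a small header in ARC (RAM), so a big L2ARC eats into the 48GB. The ~70 bytes per record used below is roughly what recent OpenZFS needs, but treat it as an assumption and check your version; a quick estimate:

```python
# Rough L2ARC header overhead estimate. The 70 bytes/record figure is an
# assumption based on recent OpenZFS; older releases used noticeably more.
L2ARC_SIZE = 1_000_000_000_000        # ~1 TB used as L2ARC
HEADER_BYTES = 70                     # assumed ARC header size per cached record

for recordsize in (8 * 1024, 128 * 1024):   # small-block vs default-recordsize workloads
    records = L2ARC_SIZE // recordsize
    ram_gib = records * HEADER_BYTES / 2**30
    print(f"recordsize {recordsize // 1024:>3}K: ~{records:,} records -> ~{ram_gib:.1f} GiB of ARC headers")

# With 48 GB of RAM, small-record workloads can burn a surprising chunk of ARC
# just indexing a 1 TB L2ARC; large records keep the overhead almost negligible.
```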

I did consider Optane, but they were either too small, too expensive, or too slow at writing (I wanted to be able to support a 10gbit NIC, so the L2ARC / SLOG device would need to do at least 1.2GB/s sustained, and preferably more).
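That 1.2GB/s comes straight from the line rate, and since a SLOG only has to hold a few seconds of sync writes between transaction group commits, the capacity it actually needs is tiny. The 5-second txg interval below is the usual OpenZFS default, but treat it as an assumption for your setup:

```python
# Back-of-the-envelope SLOG requirements for a 10 Gbit/s NIC.
LINK_BITS_PER_S = 10_000_000_000
bytes_per_s = LINK_BITS_PER_S / 8            # ~1.25 GB/s line rate

TXG_INTERVAL_S = 5                           # assumed zfs_txg_timeout (OpenZFS default)
slog_needed_gb = bytes_per_s * TXG_INTERVAL_S * 2 / 1e9   # ~2 txg's worth in flight

print(f"sustained write needed: ~{bytes_per_s / 1e9:.2f} GB/s")
print(f"SLOG capacity actually used: ~{slog_needed_gb:.0f} GB")

# So a SLOG needs bandwidth and low latency far more than capacity;
# a dozen GB or so is all it will ever hold at 10 Gbit/s.
```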

Optane, aside from the models near or past $1000 (the 900P and 905P, I think they're called)…
Their speeds are good and their IOPS are great… epic, not the best, but like 2nd place I think, and they have their durability… not sure about the RAM versions of them… didn't look at those because they wouldn't work with my old hardware.
And then there is the whole thing of a PCIe 2.0 system running a PCIe 3.0 card.
Not exactly the way one wants to go, and with the SSD not being cheap I didn't want to risk incompatibility; they say it will usually work, but sometimes it can cause problems… the manual for the SSD tells me to go into the BIOS and turn off CPU power management, because it might bottleneck the SSD if the CPU sleeps too deeply… lol, it's so fast that when it's running, computation on the CPU is measurably faster, because it connects directly to the CPU and expands its caching abilities, or something like that.

And it will do sustained reads and writes of 1GB/s and 100k+ IOPS, both read and write, sustained.
I know it's old, but it was the Optane of its day… Optane is just a bit more reliable and gets lower latency…
but its capacity just isn't there.
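For what it's worth, the PCIe 2.0 slot shouldn't be what limits that 1GB/s figure. A quick per-lane comparison of usable bandwidth after encoding overhead (the x8 link width is an assumption about the slot the card sits in):

```python
# Approximate usable bandwidth per PCIe lane after encoding overhead.
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane
LANE_MBPS = {"PCIe 2.0": 5000 * 8 / 10 / 8, "PCIe 3.0": 8000 * 128 / 130 / 8}

LANES = 8   # assumption: the card runs in an x8 slot
for gen, per_lane in LANE_MBPS.items():
    print(f"{gen} x{LANES}: ~{per_lane * LANES / 1000:.1f} GB/s usable")

# Even at PCIe 2.0 speeds an x8 link gives ~4 GB/s, so a 1 GB/s sustained
# drive is limited by the card itself, not by the older slot generation.
```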

Initially I wanted to do one big pool, but since we were told we had to use separate storage for each node, that changed a bit… and running VMs on the big pool didn't work too great… but at the time I was rebooting a lot more… so that was also part of the problem.
I also want to move towards Ceph eventually… but that's so far away for my setup that I'm not thinking about it yet.
And running sync=always on ZFS is also rather IO heavy, but it seems to be worth it, imo.
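In case it helps anyone reading along, this is the setting involved; a minimal sketch using a hypothetical dataset name (`tank/vmdata`), assuming a Linux host with OpenZFS installed:

```python
# Minimal sketch: force synchronous semantics on one dataset and confirm it.
# "tank/vmdata" is a hypothetical dataset name; adjust to your own pool layout.
import subprocess

DATASET = "tank/vmdata"

# sync=always pushes every write through the ZIL (and thus the SLOG, if present),
# which is what makes it IO heavy but safer for VM workloads.
subprocess.run(["zfs", "set", "sync=always", DATASET], check=True)

# Read the property back to verify it took effect.
result = subprocess.run(
    ["zfs", "get", "-H", "-o", "value", "sync", DATASET],
    check=True, capture_output=True, text=True,
)
print(f"{DATASET} sync={result.stdout.strip()}")
```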

I didn't really think it was going to end up being this big a project… but at least it's starting to work…

Just got pfSense up and running… it's in a VM with emulated NICs and works like a charm… I was a bit worried the latency might be bad… but it seems fine… I get like 9-11 ms when doing speedtests… I have some odd network speed limitations tho… I seem unable to get my full 1gbit for some reason…
And I cannot figure out why… I get like 300mbit through my new switch… (much later)
Turns out that trunks / STP are pretty important when making a multi-switch VLAN network. :smiley:

They really have too many acronyms in networking once they start reusing them… I thought STP was shielded twisted pair, but apparently in switching it's the Spanning Tree Protocol.
I suppose that's why they call them trunks; ofc on my new switch one can also trunk ports together (link aggregation), which is not the same as VLAN trunking… O.o on a managed VLAN-capable switch… makes perfect sense.
At least I found the issue… finally.