ZFS discussions

I know you have zvols because you are always talking about zvols and blocksize.
I have asked many times why you need them on top of the pool, for Storj.


I appreciate you wanting to help, and the ZFS things I've learned from you, but there seems to be a major failure of communication or understanding between us.
I'm just here to learn and discuss options and optimizations for ZFS.

I guess I didn't understand your message correctly, so I deleted mine. But if it reached you, I don't take back any of my words.

No worries, they are only words… people tend to ascribe a lot of meaning to words, when in fact most people understand each word in their own way, which can make communication difficult at the best of times.

And when working with complex subjects… it doesn't get any easier, nor do we get any less confused, lol.


Okay, so stupid question… how exactly do I set the vdev max-active parameters? I need to increase my queue depth…

I tried searching the manual and have been reading about it online, but most sources seem to assume one already knows where to adjust it… my ZFS wants to run Q1 on all my devices, I think because I only have one LUN… I suppose if I cannot figure out how to do this, then maybe a deep dive into trying to configure it will yield better results… anyways, HALP

Man, there is no silver bullet.
You need to understand your workload and optimize your ZFS for it.
Only learning the details of ZFS in depth will help you.

I just need access to these in ZoL.
I cannot find them described in the manual. It's to adjust my queue depth, because the default queue depth in ZFS is like 10, or at least was… per LUN, and because I only have 1 LUN I only get Q1 data transfers to my SSDs, so I only get 4,000 IOPS on a device that can do 56,000 at Q16.

# Change I/O queue settings to play nice with SATA NCQ and
# other storage controller features.
# In newer ZFS implementations, the following OIDs have been replaced ...
#vfs.zfs.vdev.min_pending="1"
#vfs.zfs.vdev.max_pending="1"
# ... by these
vfs.zfs.txg.timeout=30
vfs.zfs.vdev.sync_read_min_active=1
vfs.zfs.vdev.sync_read_max_active=1
vfs.zfs.vdev.sync_write_min_active=1
vfs.zfs.vdev.sync_write_max_active=1
vfs.zfs.vdev.async_read_min_active=1
vfs.zfs.vdev.async_read_max_active=1
vfs.zfs.vdev.async_write_min_active=1
vfs.zfs.vdev.async_write_max_active=1
vfs.zfs.vdev.scrub_min_active=1
vfs.zfs.vdev.scrub_max_active=1

I just can't figure out where they are supposed to be located…

No problem, here are all the possible parameters.

Would be great if you could help; I've been at this particular issue for maybe 3 weeks now… or rather, I give it a few days, then end up giving up and coming back to it later when it annoys me enough again.

Pretty sure I've found the exact problem now though… even if I took quite a number of detours along the way xD

If you need to translate these parameters to ZoL:

Please open the link that I provided earlier and try to search; here is an example:
vfs.zfs.vdev.sync_read_min_active=1 corresponds on ZoL to zfs_vdev_sync_read_min_active (default 10)

You can put your new parameters in /etc/modprobe.d/zfs.conf:

# change PARAMETER for workload XYZ to solve problem PROBLEM_DESCRIPTION
# changed by YOUR_NAME on DATE
options zfs PARAMETER=VALUE

example:

options zfs zfs_vdev_sync_read_min_active=10


They say a monkey given infinite time can compose Shakespeare's combined works… I suppose the real trick is for the monkey to know when it's done…
but this monkey has benchmarks. Thanks for pointing me in the right direction, I feel a bit less like a headless chicken now…

Uhhhh, so many options…
this is such a dangerous place for me to have access to, lol…

As usual the documentation was slightly lacking in details…
I did manage to get it to work, on the 6th or 7th reboot… turned out I had to use
the exact method you suggested… :smiley:
and then, because I'm booting on ZFS, run

$ update-initramfs -u

after each change for it to apply when I reboot.
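
So for anyone else hitting this, roughly the sequence as a sketch (assuming a root-on-ZFS Debian/Proxmox box; the parameter and the value 16 are just an example, not a recommendation):

# 1. put the override in a modprobe config file
echo "options zfs zfs_vdev_sync_read_max_active=16" >> /etc/modprobe.d/zfs.conf
# 2. rebuild the initramfs so the option is baked into the boot image
update-initramfs -u
# 3. reboot, then verify the running module actually picked it up
cat /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active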

And the first successful change didn't do anything to fix the problem I was hoping to fix,
but now it should be smooth sailing, at least for testing whether I can fix the issue this way…
unless I break my ZFS… but I'll try really hard not to do that… lol

Thanks again @Odmin

@Odmin pointed you to the permanent settings.
The realtime way of setting module parameters I pointed out earlier in this topic:

cat /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active
for reading, and
echo VALUE > /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active
for writing.

Well, it's not totally that simple; some values are loaded at boot and will not change at runtime even though they're changed in the files. In my case I dunno if this is because I'm running ZFS as the boot filesystem, or because the ones I wanted to change are not runtime-changeable parameters, or whatever they called it in the documentation.

I would get a permissions error and no effect. I checked the permissions as suggested in the documentation, but it wasn't at all clear which values could be changed and which couldn't… or maybe one could look it up under the individual parameters.

No matter what, it didn't want to work… I may have managed to change it temporarily, but then I would test and it wouldn't work, and I would cat the parameter and it would just be back at 10.
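
For what it's worth, a quick (if crude) way to see which knobs are even runtime-writable, as a sketch (the parameter here is just the one from the earlier example):

# writable parameters carry the write bit (0644); load-time-only ones show as 0444
ls -l /sys/module/zfs/parameters/ | grep vdev_sync
# note: 'sudo echo 16 > ...' fails because the redirect runs as your own user,
# so pipe through tee instead
echo 16 | sudo tee /sys/module/zfs/parameters/zfs_vdev_sync_read_max_active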

Anyways, it works now… and I'll document what I do in the file and keep the defaults in there also.
Now it's mostly a matter of seeing if I can fix the issue this way…

I'll give the temp method a few more trial runs, see if it sticks…

Yeah, I just noticed that; it's been a long, long time since I've actually seen half-decent ingress,
but now that it jumped to the new internet connection it seems to help a lot… I'm sure the guy I was sharing the old internet with was happy to have it to himself again… might put one node back for the last 10 days of my subscription though… just shy of 45GB ingress here too, across my 3 nodes.

Yeah, I'm running HUS726060ALA drives I think, and I'm running 512B sectors… maybe a little bit on purpose… there seemed to be a lot of IOPS and I figured it might be better for this use case… but I suppose it all started with me being able to buy a stack of 512B drives cheaply, and then ofc everything else had to fit around that.

The allocation tables for that get kinda expensive though… it needs like 27GB if I wanted to use my IoMemory SSD as a swap drive, compared to like 4GB at 4K,
so I'm most certainly going to move towards 4K.
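
For reference, on the ZFS side the sector size shows up as the pool's ashift, which can only be set when a pool or vdev is created, not changed afterwards; a sketch of checking it and forcing 4K on a new pool (the new pool name and device paths are made up):

# see what ashift the existing pools were created with
zdb -C | grep ashift
# force 4K sectors on a new pool regardless of what the drives report (ashift=12 means 2^12 = 4096 bytes)
zpool create -o ashift=12 newpool mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB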

My setup is 2x 4-disk raidz1, so very similar IOPS, especially since the disks are very similar.
Ofc you would have half the write IOPS and lower sequential speeds, but for Storj I wouldn't expect to see much of a difference. Dunno how high my iowait gets during a scrub… I don't think it's anywhere near those levels… but I suspect much of the metadata IOPS is handled by the L2ARC.
There seems to be a big difference between when it's off and on, especially on the 2nd scrub, which seems to run a bit quicker and not generate as much iowait.

And then ofc the SLOG, which has the most immediate effect, because the ZIL is no longer stored on the data disks.

One change I'm looking into is that I would like the L2ARC to span the different arrays I've got; the way I understand it, it will only support one of the pools. It would be very nice if it just acted in support of the entire ARC instead of being pool-based…

I went with sync=always to push random write IOPS into a more sequential data stream, which seems to work quite well, and ofc long-term it will also limit fragmentation on the array.
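
For anyone wondering what that actually looks like, a sketch (the dataset name here is hypothetical, and it only really pays off with a fast SLOG in front of the pool):

# force every write through the ZIL (and thus the SLOG, if one is attached)
zfs set sync=always bitlake/storj
# confirm the current setting
zfs get sync bitlake/storj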

Still not even a year into using ZFS, Proxmox, pfSense and Linux, so I'm fumbling around a bit still,
though I have been finding my stride again…

zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
bitlake                                        17.7T  26.0T     26    169   445K  1.58M
  raidz1                                       17.0T  4.81T     13     78   386K   692K
    ata-HGST_HUS726060ALA640_AR31021EH1P62C        -      -      3     19  96.6K   174K
    ata-HGST_HUS726060ALA640_AR11021EH2JDXB        -      -      3     19  96.1K   172K
    ata-HGST_HUS726060ALA640_AR11021EH21JAB        -      -      3     19  97.1K   174K
    ata-HGST_HUS726060ALA640_AR31051EJSAY0J        -      -      3     19  96.5K   172K
  raidz1                                        689G  21.2T     12     90  58.3K   930K
    ata-HGST_HUS726060ALA640_AR11021EH1XAPB        -      -      3     22  14.6K   234K
    ata-HGST_HUS726060ALA640_AR31021EH1RNNC        -      -      3     22  14.4K   232K
    ata-HGST_HUS726060ALA640_AR31021EH1TRKC        -      -      3     22  14.8K   234K
    ata-HGST_HUS726060ALA640_AR31051EJS7UEJ        -      -      3     22  14.5K   232K
logs                                               -      -      -      -      -      -
  3486798806301186697                              0  5.50G      0      0      0      0
---------------------------------------------  -----  -----  -----  -----  -----  -----
opool                                          4.18T  1.28T     68     53  32.6M  14.9M
  mirror                                       4.18T  1.28T     68     53  32.6M  14.9M
    scsi-35000cca2556d51f4                         -      -      4     34  3.52M  14.4M
    scsi-35000cca2556e97a8                         -      -     63     19  29.1M   507K
---------------------------------------------  -----  -----  -----  -----  -----  -----
qin                                             190G  5.25T      1     55  5.82K   527K
  mirror                                       95.0G  2.63T      0     27  2.94K   263K
    ata-TOSHIBA_DT01ACA300_Z252JW8AS               -      -      0     13  1.41K   132K
    ata-TOSHIBA_DT01ACA300_99QJHASCS               -      -      0     13  1.52K   132K
  mirror                                       95.0G  2.63T      0     27  2.88K   264K
    ata-TOSHIBA_DT01ACA300_99PGNAYCS               -      -      0     13  1.48K   132K
    ata-TOSHIBA_DT01ACA300_531RH5DGS               -      -      0     13  1.40K   132K
---------------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                          89.3G  49.7G      1     36   111K   582K
  ata-OCZ-AGILITY3_OCZ-B8LCS0WQ7Z7Q89B6-part3  89.3G  49.7G      1     36   111K   582K
---------------------------------------------  -----  -----  -----  -----  -----  -----

Running Storj on all 3 pools; the rpool is the OS SSD.
I had to do a reboot after I messed up my network configuration trying to get my new VLANs set up, and because I had used some custom open source drivers for my L2ARC / SLOG device and the kernel was updated, the driver was dropped after the reboot… haven't gotten that fixed yet, but it's nearly at the top of my todo list…

That's why the log device on the bitlake pool is dead…
One interesting thing about the bitlake pool is that the avg read IOPS is near equal on both raidz1s, even though one has about 90% of the storagenode data, which seems to indicate that older data is much less active.
Can't really complain about that, since it means I don't have to worry about it being unbalanced, and it should balance out a lot over time.

And the opool IOPS ratios look a lot like yours because I've been scrubbing it a good deal… one of the SAS drives is on its way out and keeps throwing errors… might have to pull it one of these days and see if there is anything I can do to try and improve it… maybe a bit of contact cleaner and some insulation from the metal HDD caddies…

They might not be designed for disks this bulky, so I think the test points on the PCB might at times create leakage current… at least that seemed to fix another drive that was giving me grief…

Damn cheap Chenbro case; not only were the caddies way too expensive, the case is also kinda janky. Should have gotten a Supermicro or IBM.

Got some 5 VMs running, but the storage pools are mainly just Storj; everything else is running off the OS drive… and I need to get that settled soon… running low on space, though it's only partitioned to 60% of capacity (not sure if it has any set aside by default)… I want to get that mirrored, I just don't have a good partner SSD for it…

I was thinking of pushing it to the PCIe SSD, but after its recent downtime that's not likely to happen…
might do some internal USB 3.0 boot and then have the OS migrate copies of itself across multiple pools and thus always be able to boot from something…

A mirror solution might just be so much more simple and easy to approach… :smiley: but then I have no redundancy if the onboard SATA controller gives out… but it's not too high on the list, one of those problems that aren't really a problem presently, so it can wait until I come up with a great solution and an excuse to implement it.
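
When I do find a partner SSD, the ZFS part of mirroring it is just an attach; a sketch (the new device path is made up, and on a boot pool the bootloader/EFI partition still has to be copied to the new disk separately):

# attach a second device to the existing single-disk rpool vdev, turning it into a mirror
zpool attach rpool ata-OCZ-AGILITY3_OCZ-B8LCS0WQ7Z7Q89B6-part3 /dev/disk/by-id/NEW-SSD-part3
# watch the resilver complete
zpool status rpool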

Ran a
zpool iostat -v 600
to get some current stats:

---------------------------------------------  -----  -----  -----  -----  -----  -----
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
bitlake                                        17.7T  26.0T     11    215   468K  2.24M
  raidz1                                       17.0T  4.81T      8     99   446K   981K
    ata-HGST_HUS726060ALA640_AR31021EH1P62C        -      -      2     25   116K   246K
    ata-HGST_HUS726060ALA640_AR11021EH2JDXB        -      -      2     24   113K   244K
    ata-HGST_HUS726060ALA640_AR11021EH21JAB        -      -      2     25   110K   246K
    ata-HGST_HUS726060ALA640_AR31051EJSAY0J        -      -      1     24   107K   244K
  raidz1                                        689G  21.2T      2    116  22.2K  1.28M
    ata-HGST_HUS726060ALA640_AR11021EH1XAPB        -      -      0     29  5.83K   330K
    ata-HGST_HUS726060ALA640_AR31021EH1RNNC        -      -      0     28  5.53K   327K
    ata-HGST_HUS726060ALA640_AR31021EH1TRKC        -      -      0     29  5.64K   329K
    ata-HGST_HUS726060ALA640_AR31051EJS7UEJ        -      -      0     29  5.20K   327K
logs                                               -      -      -      -      -      -
  3486798806301186697                              0  5.50G      0      0      0      0
---------------------------------------------  -----  -----  -----  -----  -----  -----
opool                                          4.18T  1.28T    477     43   278M  1.18M
  mirror                                       4.18T  1.28T    477     43   278M  1.18M
    scsi-35000cca2556d51f4                         -      -    156     21   139M   605K
    scsi-35000cca2556e97a8                         -      -    320     21   139M   603K
---------------------------------------------  -----  -----  -----  -----  -----  -----
qin                                             190G  5.25T      1     91  16.0K  1.40M
  mirror                                       95.2G  2.63T      0     44  8.77K   707K
    ata-TOSHIBA_DT01ACA300_Z252JW8AS               -      -      0     22  3.63K   354K
    ata-TOSHIBA_DT01ACA300_99QJHASCS               -      -      0     22  5.14K   354K
  mirror                                       95.2G  2.63T      0     47  7.21K   730K
    ata-TOSHIBA_DT01ACA300_99PGNAYCS               -      -      0     23  4.49K   365K
    ata-TOSHIBA_DT01ACA300_531RH5DGS               -      -      0     23  2.72K   365K
---------------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                          89.3G  49.7G      0     40  13.1K   637K
  ata-OCZ-AGILITY3_OCZ-B8LCS0WQ7Z7Q89B6-part3  89.3G  49.7G      0     40  13.1K   637K
---------------------------------------------  -----  -----  -----  -----  -----  -----

opool is back to scrubbing again… and most likely failing again… for the 5th or 6th time. Last time, after 14+ scrubs, it stopped acting up… but since it's been running a storagenode for 3 months now, the drive has begun complaining again.
SMART also says it's dying… so it's most likely dying…
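
For reference, the two views I keep checking on that drive, as a sketch (/dev/sdX stands in for whichever device the cranky SAS disk currently is):

# pool-side view: read/write/checksum error counters plus scrub progress
zpool status -v opool
# drive-side view: SMART health, grown defects / reallocated sectors, error log
smartctl -a /dev/sdX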

Proxmox is very rude to its install disk by way of logging; a USB stick will not survive more than a few months.

Also, about opool repeatedly scrubbing: it might be that the disk is dropping out and coming back often enough to throw it right into a scrub/resilver, but you've been fighting that uphill battle with that drive for a few months now.

Yeah, it's been acting up since the server was on a table where I used a hammer :smiley:
gently, and around that time bumped the table… it's been cranky ever since…

The two SAS drives were part of my original pool, but SAS and SATA don't play well together, I found out…
then they ended up as a mirror for my streaming server and performed quite well for a few months…
until I decided they were doing so well and scrubbing perfectly that it would be fine to drop a storage node on them…

It didn't like that one bit, so I guess it's going back to being mostly idle storage media; it seems to work fine for that… besides, for a mirror I don't need it to be perfect… just correct in case the other one throws a bit error, and it seems to be able to do that.

Bought the two SAS drives cheap on eBay from a private seller,
2 for the price of one… :smiley: he claimed that he had bought the wrong drives… but yeah… I got scammed… it kinda works though… when it's not being written to too much… lol

I started the scrubs on opool myself most of the time… I was basically just testing to see if ZFS would figure out a way to make it work eventually, or if it would be a waste of time to do the same process on some other stupid drive in the future.

Tried to pull the drive today… from opool… picked the wrong one… the system didn't seem to care though; ofc I wasn't sure, so it was only out for like a second to see if it was the right one. For some reason some of my activity LEDs aren't acting right, and I haven't been able to figure out how to send the command that makes them blink to tell me which disks to pull… as you can see, the pools reference the disks by ID/serial… so I could just turn the server off and check… but I figured it would be a fun test on an active array with no SLOG and an active storagenode… seemed fine lol
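
One low-tech way to find the right bay when the locate LEDs won't cooperate, as a sketch (the device here is just one of the opool disk IDs as an example): hammer that one disk with reads and watch which activity LED stays busy. If the ledmon package plays nice with the controller/backplane, it can blink the locate LED properly instead.

# keep one specific disk busy so its activity LED stays lit
dd if=/dev/disk/by-id/scsi-35000cca2556d51f4 of=/dev/null bs=1M status=progress
# or, if ledmon supports the enclosure, drive the locate LED directly
ledctl locate=/dev/sdX
ledctl locate_off=/dev/sdX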

So it's going to get a scrub, then I'm going to pull the correct disk… I hope… maybe tomorrow, and try the trick I did on the last one that was acting up, which seemed to be a problem with the disk PCB and caddies being a bad match… if that doesn't solve it, then it's back to being a mostly-read disk… it seems to be continual work and writes that make it cranky.

I'll keep that in mind. I really should solve the OS (rpool) issue, because I've got like 6-8 VMs plus the OS running off it… and it's not too happy about it any more… initially I tried to have the VMs on other pools… but migrating and changing stuff wasn't great for it… I should start looking at all that fancy replication stuff, I guess… I suppose that would solve the problem if the VMs were mostly just a backup (replica) on the OS pool, and then I could put them back on my main pool (bitlake), or maybe qin, which would have the best raw IOPS…
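
The plumbing behind that replication idea is just snapshots plus send/receive; a rough sketch (the dataset/zvol names are made up, and Proxmox also has its own storage replication feature built on the same mechanism):

# one-time: a parent dataset to hold the replicas on the OS pool
zfs create rpool/replica
# snapshot the VM disk on the fast pool and seed a standby copy
zfs snapshot qin/vm-101-disk-0@repl-1
zfs send qin/vm-101-disk-0@repl-1 | zfs receive rpool/replica/vm-101-disk-0
# afterwards only send the increments
zfs snapshot qin/vm-101-disk-0@repl-2
zfs send -i @repl-1 qin/vm-101-disk-0@repl-2 | zfs receive rpool/replica/vm-101-disk-0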

But next up, I think, is to find a way to make my L2ARC span all the pools instead of just one… if that's even possible. Since it's offline, it makes sense to work on that… after the obstinate SAS HDD…

And the L2ARC comes before the VM move because it would help support their IOPS.
It's a damn puzzle sometimes just to figure out how to move forward without messing up and creating more work for myself down the line.

Generally, on the same channel/port, no. When I build out JBODs and need to mix (which I try to avoid like the plague), I use interposers… but that is also due to the JBODs usually having backplanes that are expander-style rather than passthrough (i.e., 1 SFF-8087 serving 24 bays vs 6 SFF-8087 serving 4 bays each).

This may be due to the controller, IIRC. Some of the more purpose-built systems (HPE/Dell/IBM-Lenovo) like to use sideband ports to talk to the expander board out front and tell it which ports to blink. It's the same thing with the HPE trays that have the 4-part light rings (think the Xbox One's power ring): people won't get them to display the progressing clockwise light cycle without the right software and controller running together.

Partitioning it is about the only way. What I do is take an S3700/S3710 per pool if I want to go on the cheap. If I'm going for SMB, I bump up to a DC P4610 1.6TB and partition it into two 800GB partitions if the pools are small enough and the system only has 386GB or less of RAM (a single partition if it has 512GB or more of RAM). For enterprise I try to move to Ceph, with Optane holding the WAL/RocksDB and 10 or more nodes on a 40Gb or 100Gb back-end network. The idea with ZFS's L2ARC is to have something that is faster than primary storage, doesn't need to be as fast as RAM, and comes at a capacity that just isn't cost-effective to have as RAM.
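
Concretely, that per-pool split looks something like this, as a sketch (the CACHE-SSD device name is a placeholder; the partitioning step itself is left out):

# one cache partition per pool
zpool add bitlake cache /dev/disk/by-id/CACHE-SSD-part1
zpool add opool cache /dev/disk/by-id/CACHE-SSD-part2
zpool add qin cache /dev/disk/by-id/CACHE-SSD-part3
# cache vdevs can be removed again at any time if the split needs changing
zpool remove bitlake /dev/disk/by-id/CACHE-SSD-part1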

The way I found it described was that they weren't allowed to be in the same array… but I've got a couple of empty bays, and each of the SFF-8087 ports from an HBA supports 4 disks… which seems to be fairly common, so I put them there and made them their own pool just to try it out, and it worked like a charm… pretty sure where I found the answer they said separate arrays, but I guess that might also be a matter of interpretation; he may have meant it like you say it… I dunno…

I know it works, and the errors I got from having SAS and SATA mixed were just jumping around, giving me random errors on random disks… it was hell to troubleshoot; I ended up just trying stuff I hadn't thought about… basically took everything apart and tested basically every setting in the BIOS…
because after a while I realized it wasn't an isolated problem… it would cause read and write errors, but move two disks around and a 3rd might stop getting errors and they would jump to a 4th disk instead…

Quite interesting and tricky to solve, once I realized it wasn't that my 2nd-hand drives were all bad…
I actually ended up buying new drives, because I initially just thought… HA, that's what you get for buying 2nd-hand drives, you fool… :smiley:

I don't think there is anything elaborate about it… okay, maybe a little bit… I have had the backplanes, 3x 4-port, on 3 different controllers, all of them LSI though… 4 if I count the current dual-HBA card as 2, but essentially it's one controller per backplane it controls… they have most likely been switched around though, but they are the same generation of chip, so essentially the same.

I also think they worked fine before I installed Proxmox… when I installed Linux I lost my IPMI, the LEDs in the trays got all weird, and I couldn't easily go in through software and tell it to show me which disks to take out…

I like the concept of Linux, and it's nice that I can really troubleshoot and modify stuff… still kinda janky… but in the past Linux never survived me for more than a couple of weeks, so at least it's more robust now when people try to tear into it…
Not sure I had all the activity LEDs tested back then though… so one or two of them might simply be broken… there is also that weird SATA/SAS thing with the LEDs… for some reason many servers seem to use a different method for lighting the LEDs for SATA vs SAS… so, like, the power light won't be on with the SATA drives, but it will with the SAS ones… kinda weird… I've seen it in some server-related videos… never really got a good answer for why that seems to be a thing.

I like the DC P4610, it's a very nice drive; I was considering that when I ended up buying my IoMemory, but I kinda wanted it to be over PCIe to save ports, and it had some of the lowest latency I could find and was able to do parallel data streams… apparently it's made to be a swap drive… so I could just go that route, I guess… not sure how that would behave with ZFS though…

The only problem with that is that I'm running 512B-sector hardware on my main pool, because from my initial testing with Storj it looked a bit heavy on the IOPS and 4Kn was generally more expensive on the 2nd-hand market… and I couldn't find a great reason for doing 4Kn aside from slightly higher throughput at very high data transfer rates… which seems mildly irrelevant… turns out the reason people like 4Kn over 512B is that 512B takes 8 times the memory to manage the space allocation tables for something like a swap drive…
And though people claim it shouldn't matter, I have been unable to run my L2ARC and SLOG PCIe SSD in 4Kn, so I was forced to reformat it to 512B sectors, and since then it's worked like a charm, until it threw the drivers… because they were custom, no doubt… I kinda knew that sort of stuff would happen when I installed them… but it was all I could find for Debian… I bought it after checking it could do Linux… and it was like Windows, CentOS, RHEL… Debian… turned out it was like Debian 8 or 9, which almost ended up in me bailing on Proxmox and going to… the other major enterprise Linux that isn't exactly a hypervisor-type deal… the name escapes me right now. SUSE!!! ofc

It sounds really good; ofc one would want the enterprise version… and again, even with openSUSE or whatever it's called, I wasn't sure I could get drivers, and I was just starting to find my footing with Proxmox and Debian… the reason one place said Debian Buster was because there were open source drivers based on a competent rewrite of the originals, just with lots of older hardware compatibility removed, because nobody uses a Fusion ioDrive Duo or such anymore… even if their performance is amazing, the hundreds of watts of power draw is just insane for something a modern SSD can get close to for 5 or 10 watts.

So they cut all that out of the drivers, which makes good sense, and the drivers are great… they just forgot to make them survive a kernel update… whoops. But of all the mistakes they could have made, I'm very impressed… I'm not even using a card they tested the drivers on… and it worked flawlessly.
Maybe… I have had some weird output from the drive info in my netdata monitoring…

2000 years of latency or 0 seem to be the only states it will report… lol, and transferring 80 TB/s.
I suppose that could be related to the drivers, but it's been the only weirdness I've seen…
I've got 48GB of RAM and I'd just like the L2ARC to support that; it would make so much more sense for my 1.6TB FusionIo2 to be shared across all the pools, so I could use like 1TB of it for L2ARC.

I did consider Optane, but they were either too small, too expensive, or too slow at writing (because I wanted to be able to support a 10Gbit NIC, the L2ARC/SLOG device would need to do at least 1.2GB/s sustained, and preferably more).

With Optane, aside from the models near or past $1000, there are the 900P and 905P I think they are called:
their speeds are good and their IOPS are great… epic, not the best, but like 2nd place I think, and then they have their durability… not sure about the RAM versions of them… didn't look at those because they wouldn't work with my old hardware.
And then there is the whole issue of a PCIe 2.0 system running a PCIe 3.0 card;
not exactly the way one wants to go, and with the SSD not being cheap I didn't want to risk incompatibility. They say it will usually work, but sometimes it can cause problems… the manual for the SSD tells me to go into the BIOS and turn off my CPU power management, because it might be a bottleneck for the SSD if the CPU sleeps too deeply… lol, it's so fast that when it's running, the computation on the CPU is measurably faster, because it connects directly to the CPU and expands its caching abilities, or something like that.

And it will do sustained reads and writes of 1GB/s, and 100k+ IOPS sustained for both reads and writes.
I know it's old, but it was the Optane of its day… Optane is just a bit more reliable and gets lower latency…
but its capacity just isn't there.

Initially I wanted to do one big pool, but since we were told that we had to use separate storage for each node, it changed a bit… and running VMs on the big pool didn't work too great… but at the time I was rebooting a lot more, so that was also part of the problem.
I also want to move towards Ceph eventually… but that's so far away for my setup that I'm not thinking about it yet.
And running sync=always on ZFS is also rather IO heavy, but it seems to be worth it, imo.

Didn't really think it was going to end up being this big a project… but at least it's starting to work…

Just got pfSense up and running… it's in a VM with emulated NICs and works like a charm… I was a bit worried about the latency maybe being bad… but it seems fine… I get like 9-11 ms when doing speed tests… had some odd network speed limitations though… I seemed unable to get my full 1Gbit for some reason…
and I could not figure out why… I got like 300Mbit through my new switch… (much later)
turns out that trunks / STP are pretty important when making a multi-switch VLAN network. :smiley:

They really have too many acronyms in networking once they start reusing them… I thought STP was shielded twisted pair, but apparently not when it comes to VLANs.
I suppose that's why they call them trunks; ofc on my new switch one can also trunk ports together, which is not the same as VLAN trunking… O.o on a managed, VLAN-capable switch… makes perfect sense.
At least I found the issue… finally
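
For completeness, the host side of that setup on Proxmox boils down to a VLAN-aware bridge that the pfSense VM's emulated NICs hang off, roughly like this sketch (the interface name, addresses and VLAN range are made up; the switch port it plugs into has to be a tagged trunk):

# /etc/network/interfaces on the Proxmox host (sketch)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094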

This is for hardware RAID. I see no reason why SAS, SCSI, SATA and IDE drives could not be combined in the same ZFS pool. The performance would probably be lower because of the slower drives, but it should work OK otherwise.

I don't know, Linux works well on my servers and on a lot of servers belonging to other people.

It's also Spanning Tree Protocol. You do not really need it unless you want redundancy (having multiple switches connected in a loop and such).