well i'm at 1/3 full after 10 weeks. i only started out with 6tb dedicated to the node, but it was growing at a decent pace, so i decided to give it some room to grow… good to know it's only a recommendation…
i did catch that 8tb was a recommendation tho… i just figured that when it said max, it was the max…
will most likely split it up for logistical reasons anyways… but again… it's most likely all going to live in one big zfs pool… so from a redundancy standpoint, whether i run two nodes or 10 doesn't really make any difference: if the pool dies, they all die…
these last few weeks have been slow going… but it gives me time to experiment and test a bit.
just managed to copy 1 million files internally on the storage pool in less than a minute… got my hbas moved to only be on my northbridge, my slog and l2arc ssds have been split so they now each have their own 3gbit sata controller, and i switched the ssds from really old sata cables to 6gbit ones.
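if anyone wants to try the same kind of small-file test, here's a rough python sketch of how i'd time it… /tank/bench is just a made-up path, and the file count and size are knobs to turn (the default here is smaller than the full 1 million run):

```python
# rough small-file copy benchmark: a sketch, not a proper tool.
# /tank/bench is a hypothetical dataset mount, point it at your own pool.
import os
import shutil
import time

SRC = "/tank/bench/src"
DST = "/tank/bench/dst"
NUM_FILES = 100_000   # bump towards 1_000_000 for the full test
FILE_SIZE = 4096      # 4k per file, i.e. lots of tiny files

os.makedirs(SRC, exist_ok=True)
payload = os.urandom(FILE_SIZE)
for i in range(NUM_FILES):
    with open(f"{SRC}/f{i:07d}", "wb") as f:
        f.write(payload)

start = time.monotonic()
shutil.copytree(SRC, DST, dirs_exist_ok=True)
elapsed = time.monotonic() - start
print(f"copied {NUM_FILES} files in {elapsed:.1f}s "
      f"({NUM_FILES / elapsed:.0f} files/s)")
```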
woop woop, imma get a ticket… still running into some weird 50mb/s limit on network access to it though… but it seems like it must be the nfs server i use for sharing files from the pool to my vms
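next step is figuring out whether it's the wire or the nfs layer. the plan: time a big sequential write to the nfs mount vs straight onto the pool. a quick sketch, mount points are just examples:

```python
# sequential-write throughput check: compare the nfs mount against
# the pool directly. paths are examples, not my real mounts.
import os
import time

CHUNK = 4 * 1024 * 1024          # 4 MiB per write
TOTAL = 2 * 1024 * 1024 * 1024   # 2 GiB test file

def write_speed(path: str) -> float:
    buf = os.urandom(CHUNK)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # force it through client caches to the server/disk
    elapsed = time.monotonic() - start
    os.remove(path)
    return TOTAL / elapsed / 1e6  # rough MB/s

print(f"local pool: {write_speed('/tank/testfile'):.0f} MB/s")
print(f"nfs mount:  {write_speed('/mnt/nfs/testfile'):.0f} MB/s")
```

if the local number is way up there and the nfs one pins at ~50mb/s, the nfs stack is the suspect, not the pool or the disks.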
also managed nearly sustained reads of 1.14gbyte/s during my scrub last night
the storagenode is baremetal
well the rationale behind having 1 or 2 big pools rather than many smaller ones is that any one storagenode will then have the full bandwidth and io of the pool if needed; it also lets me focus on having some decent redundancy without it taking away too much of the capacity.
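the capacity side of that tradeoff is simple math, usable = (disks - parity) x disk size per vdev. a tiny sketch comparing a few example layouts (disk counts and sizes are made up, and it ignores zfs metadata/slop overhead):

```python
# rough usable-capacity math for a few zfs layouts. ignores metadata
# and slop overhead, just the parity tradeoff. all numbers are examples.
DISK_TB = 6

layouts = {
    "1x raidz2, 8 disks":  [(8, 2)],
    "4x 2-way mirrors":    [(2, 1)] * 4,
    "1x raidz3, 8 disks":  [(8, 3)],
}

for name, vdevs in layouts.items():
    usable = sum((n - p) * DISK_TB for n, p in vdevs)
    raw = sum(n for n, _ in vdevs) * DISK_TB
    print(f"{name}: {usable} TB usable of {raw} TB raw ({usable / raw:.0%})")
```

with those made-up 6tb disks, the wide raidz2 keeps 75% of the raw space while surviving any two failed disks; the mirrors only keep 50% and survive one failure per pair.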
and it will allow me to use the pool locally for other network-related stuff and experiments, like trying to run my workstation directly off the pool instead of having local hard drives.
long term i plan to build it into a cluster, so that no matter what breaks, the node will survive… maybe even get a second internet connection for redundancy, if this ends up being profitable.