xD well the rsync options like human-readable and progress will be an awesome addition… for now i’ve been relying on netdata to give me a gross estimate of how far i am… looks like i got less than 14 hours left… of my way, way too long rsync transfer, and i wasn’t running lz4, so i can’t speak to whether the 1M recordsize does much… but it would sort of make sense that larger segments of data can compress better, but i dunno… very green in all this zfs stuff…
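for reference, i think the flags i’d want for that are roughly these (paths made up, haven’t tested this exact line):

rsync -aHAXx -h --info=progress2 /tank/old-node/ /tank/new-node/

-h prints sizes human-readable and --info=progress2 gives one overall progress line with an ETA instead of per-file stats.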
been trying to figure out why my zfs has such a ridiculous transfer speed between datasets…
i could upload it to another computer on the network and transfer it back faster than it takes to move it between two datasets on the same array…
apparently it might have something to do with disk sector sizes, random io on my vdev of 5 drives, async, and blocksize (not to be confused with recordsize lol)… makes my head spin just trying to figure out what’s causing the slow transfer…
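if anyone wants to poke at the same settings, i believe these show them (pool/dataset names made up):

zpool get ashift tank                                      # sector size the vdevs were created with (2^ashift bytes)
zfs get recordsize,compression,compressratio tank/media    # per-dataset record size and how well it compresses
zdb -C tank | grep ashift                                  # alternate way to read ashift per vdev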
still, the server is running pretty great compared to what i would have expected… i’ve been moving a TB worth of data out of the pool at 40-50 MB/s, then i decided why not fire up my local mediaserver and tested transcoding a 1080p stream while storj, network transfers + my netdata monitoring all ran flawlessly… sure there is a bit of a delay while the ARC figures out what i’m doing… but it runs close to good… i think i’ve got my NUMA setup pretty good now, that took away my issue with streaming while putting load on the server’s network.
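for the NUMA part, these are the tools i’ve been leaning on (the placeholder at the end is whatever process you want bound to one node):

numactl --hardware                              # list NUMA nodes and the memory on each
numastat                                        # per-node counters… lots of numa_miss means cross-node traffic
numactl --cpunodebind=0 --membind=0 <command>   # run a process pinned to node 0’s cpus and memory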
But i’ve optimized so many things by now that i have no clue which ones were really the deciding factors.
i can see from my netdata that it’s my SSD (L2ARC / SLOG / OS drive) that is being stressed.
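the zfs-side view of the same thing (which device in the pool is actually getting hammered) should be visible with:

zpool iostat -v 5   # per-vdev ops and bandwidth every 5s… log and cache devices get their own rows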
i really need to migrate the OS away from that… kinda want to move it to the zfs pool… but that comes with its own can of worms… like the issue that the server’s cmos boot sequence likes to prefer one drive and doesn’t like to boot from others, so if that drive in the zfs pool dies, then boot sort of dies with it… afaik…
so the solution might be to use either an old hybrid drive i got or a 128GB SSD as an OS boot drive…
i think part of my issue with overtaxing the SSD is that because it’s two different partitions, some things end up doing double or triple duty: first OS swap (now disabled), then it moves the data to the zfs pool, which makes it go into the ZIL / SLOG write cache, and from there it goes into the ARC…
or something like that… thus i end up with like 6+ times the io on one device for a process designed to spread the io over multiple devices…
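if i end up pulling the SSD out of double duty, i believe it’d go something like this (device names made up, don’t copy-paste blindly):

swapoff -a                 # stop swapping to it (and comment the swap line out of /etc/fstab)
zpool remove tank sdX2     # detach the SLOG partition (log device)
zpool remove tank sdX3     # detach the L2ARC partition (cache device)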
but it works, the iowait isn’t insane, and the storagenode doesn’t seem to really be affected even with all my abuse… a bit surprising… only lost like 1.5% in success rate… but i’m pretty close to 75% now… makes me feel bad lol when i’m pretty sure i can get to 85%, maybe even touch 90%
but i’ve been getting 4 MB/s from storj pretty consistently most of the day… which is almost more than i wanted, because i’m quickly running my head into the wall of what the pool can contain… but i should just barely get below the max… with some 5% free for the storj db and transfers until i get the old primary node’s rsync completed and rechecked while offline… i did the math to make sure that happens tomorrow afternoon, so i can deal with it if it gets close enough to cause issues… xD
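to keep that ~5% actually free i’m thinking of the old reservation trick, something like this (size made up):

zfs create -o reservation=200G tank/spare   # empty dataset that just holds space hostage… shrink or destroy it if the pool gets too close to full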
scary that i actually enjoy this… going to go through my cmos / bios on the next reboot and use the original intel documentation on my CPUs to try and optimize the server some more… i doubt i’ll ever go back to consumer gear… this old machine is amazing.
a decade old and it still beats most consumer gear today… lol WTF… kinda makes me want to try a SPARC.
do these do what i think they do…
--delete -avHAXx
and why is avHAXx better than aHAX?
i suppose i could look that up myself…
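…okay, from the rsync man page, as far as i can tell:

-a         # archive mode, shorthand for -rlptgoD (recursive, symlinks, perms, times, group, owner, devices)
-v         # verbose output
-H         # preserve hard links (not included in -a)
-A         # preserve ACLs (implies -p)
-X         # preserve extended attributes
-x         # don’t cross filesystem boundaries
--delete   # remove files on the destination that no longer exist on the source

so avHAXx vs aHAX is just adding verbose output and the don’t-wander-off-this-filesystem safety net.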