apparently when somebody moves a thread, the replies that are being written at the time get directed at the mover of the thread @Alexey …
and i cannot figure out how to change it.
the forum acts a bit odd from time to time…
that one was actually directed at @littleskunk in changelog 1.4.2
i watched something quite interesting about optimization a while back… amazing stuff… i'll see if i can find it, because you should really see that…
mind blowing.
sounds good that you guys are looking into that, it's a real issue… i've got 9 drives… trying to keep up with 1 node… and performance drops if i look at it the wrong way… hold my byte… OH SO HEAVY
well doubling the piece size basically halves the io… but that's not always a good thing… like in our zfs discussions… i'm no expert in the disk storage area… but it seems to me like something is utterly wrong when i cannot squeeze through more pieces than i currently am.
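the piece-size point is really just arithmetic: at a fixed ingress rate, doubling the average piece size halves the number of write operations per second. a tiny sketch (the 5mb/s ingress figure is from this thread, but the piece sizes here are made-up round numbers, not actual storj piece sizes):

```shell
# assumed: ~5 MB/s ingress, comparing two hypothetical average piece sizes
ingress_kb=5120            # 5 MB/s expressed in KB/s
small_piece_kb=256         # hypothetical small piece
large_piece_kb=512         # double the piece size

echo "writes/s at ${small_piece_kb}KB pieces: $((ingress_kb / small_piece_kb))"
echo "writes/s at ${large_piece_kb}KB pieces: $((ingress_kb / large_piece_kb))"
```

which prints 20 writes/s for the small pieces and 10 for the doubled ones — fewer, larger writes are friendlier to spinning disks, up to the point where other trade-offs kick in.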
must be something with how it's ordered and written out to disk… to be honest it does feel a bit like there is no caching or structure to it… ofc that's not really what one starts thinking about when one starts to build stuff like this…
maybe it just throws data around way too randomly… i mean… we don't see a ton of egress… so that's not really killing the system, and i got a slog / write cache
so why can't i handle more than 5mb/s of ingress…
pretty sure i could saturate my 1gbit network connection with data transfers for weeks without it dropping much… even if i used the server for a ton of other stuff… which is what is so damn weird…
i got 9 hdds in 2 raidz1 vdevs, which load balances between them… a dedicated OS SSD with a SLOG (write cache), another dedicated 750gb ssd for l2arc (600gb allocated) which takes care of repeating jobs so they get 1-3ms latency, 48gb ddr3 ram, dual 4-core 8-thread xeon cpus… i optimized my recordsizes, which granted seems to help a bit… i optimized my zfs as best i can.
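for anyone wanting to poke at the same knobs, these are the stock zfs/openzfs commands for checking recordsize and watching the caches — just a reference sketch, and `tank/storagenode` is a made-up placeholder for whatever dataset the node actually writes to:

```shell
# "tank/storagenode" is a placeholder dataset name
zfs get recordsize tank/storagenode        # show the current recordsize
zfs set recordsize=128K tank/storagenode   # only affects newly written blocks
arcstat 5                                  # live ARC hit/miss stats every 5s
zpool iostat -v tank 5                     # per-vdev bandwidth and iops every 5s
```

note that changing recordsize never rewrites existing data, so the effect only shows up on blocks written after the change.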
i'm running docker on bare metal, and my storage is on the same bare-metal system…
granted i'm new to linux, but i've been working with related stuff for decades…
my vms run real nice now… never had a faster system lol, but i've been trying to optimize and get all of this working for two months.
and don't get me wrong, it works pretty well… but if this is what it takes to just barely keep up… xD
in my opinion it has to be how the data is dealt with somewhere… something that slows hdds down… which is basically random io… not sure what i can do about it, but at Q1T1 it's not much above 10-20mb/s sustained, if that… so really that's what i would assume it is… not enough sequential write operations, the program just lets the host system figure out how to write and read stuff, or something like that… i dunno… my guess…
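one way to sanity-check that Q1T1 number is a quick fio run against the same dataset. a minimal job-file sketch — the path, size and runtime are arbitrary assumptions; random 4k writes at iodepth=1 with a single job approximate the Q1T1 worst case for a spinning-disk pool:

```ini
; q1t1.fio — hypothetical job file, run with: fio q1t1.fio
[q1t1-randwrite]
rw=randwrite                  ; random writes, worst case for hdds
bs=4k                         ; small blocks to expose seek latency
iodepth=1                     ; queue depth 1
numjobs=1                     ; single thread -> Q1T1
size=1g                       ; test file size (arbitrary)
runtime=60
time_based
directory=/tank/storagenode   ; placeholder path, point it at the node dataset
```

comparing that against an `rw=write` (sequential) run of the same job makes it obvious how much of the ceiling is random-io penalty versus raw disk speed.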
how difficult can it be to download 5mb/s and write it sequentially… or maybe the database refuses to live in memory… but that should just move to my l2arc with enough time… then again the system did run fairly smooth the last time i got up to 3 days of run time… but i've had a lot of trouble lately… though i think that's over now… finally… maybe i can get some sleep then lol
your turn to have sleepless nights i guess lol