It’s 14 million files, or so, which makes them pretty small, but ofc they can always be smaller…
Got a program with like 1.2 million files that only takes up a few hundred MB…
OpenNMS is what I use for keeping an eye on my network gear and connections, but it’s a bit enterprise-level software, so don’t expect it to be consumer friendly.
I don’t think anything gets worse with large queue depths; a deeper queue lets the drive see more of where it needs to go next, so it can start to optimize with NCQ and such.
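A rough sketch of that intuition, purely back-of-envelope (the 12 ms service time and the 15%-per-doubling NCQ gain are assumptions I made up, not measurements):

```python
import math

# Crude model of why deeper queues help a spinning disk.
# Assumed numbers, hypothetical but typical-ish for a 7200 rpm HDD:
#   ~12 ms random service time at queue depth 1; NCQ reordering lets
#   the drive pick the nearest pending request, shortening the seek.

def effective_iops(queue_depth: int, base_ms: float = 12.0) -> float:
    # assumed: each doubling of the queue shaves ~15% off the service time
    service_ms = base_ms * (0.85 ** math.log2(max(queue_depth, 1)))
    return 1000.0 / service_ms

for qd in (1, 4, 16, 32):
    print(f"qd={qd:>2}: ~{effective_iops(qd):.0f} IOPS")
```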
I’m writing from 3x raidz1 of 3 disks each to 3 disks in one span… yes, I know… not recommended and it’s a little risky… but I’m sure it will be fine, so long as this doesn’t take ages… I’m predicting 9 days, and hopefully I’ll be fine for those 9 days.
Less than an hour until my scrub is done, then… oh, I just realized why I cannot go past 300-400 MB/s: the pool I’m writing to is only 3 drives, so 3x a single drive’s speed… which is maybe 300+ MB/s at best…
DOH
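For what it’s worth, the 9-day guess is just size over throughput; here’s the arithmetic as a sketch (the pool size and per-drive speed below are placeholders, not my real numbers):

```python
# Rough migration ETA: data size / sustained write throughput.
# Placeholder numbers; plug in your own.

TOTAL_TB = 24            # hypothetical amount of data to move
PER_DRIVE_MBPS = 110     # assumed sequential speed of one HDD
N_DRIVES = 3             # the 3-disk span being written to

ceiling = PER_DRIVE_MBPS * N_DRIVES        # ~330 MB/s bandwidth ceiling
days = TOTAL_TB * 1e6 / ceiling / 86400    # TB -> MB, then MB/(MB/s) -> days
print(f"write ceiling ~{ceiling} MB/s, best-case ETA {days:.1f} days")
```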
Because I’m using 3x raidz1 vdevs I get 3 times a single drive’s IOPS, but even so, and with the storagenode still running… IOPS is the limitation for my migration.
The max transfer speed I saw was 42 MB/s and the minimum was 18 MB/s.
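That spread is what you’d expect when the copy is IOPS bound rather than bandwidth bound: with millions of small files, throughput is roughly IOPS x average file size. A sketch with assumed numbers (per-vdev IOPS and file sizes are guesses; only the 18-42 MB/s range is what I actually saw):

```python
# When small files dominate: MB/s ~ IOPS * average file size.

VDEVS = 3                # 3x raidz1, each vdev ~ one drive's worth of IOPS
IOPS_PER_VDEV = 120      # assumed random IOPS of a 7200 rpm HDD

for avg_file_mb in (0.05, 0.10, 0.20):   # hypothetical average file sizes
    mbps = VDEVS * IOPS_PER_VDEV * avg_file_mb
    print(f"avg file {avg_file_mb:.2f} MB -> ~{mbps:.0f} MB/s")
# ~360 IOPS * 0.05-0.12 MB per file lands right in the 18-42 MB/s range
```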
@serger001
I like the idea, but wouldn’t that just make one pay all the time instead of only during a migration?
Files are split up on storage for a reason: so the system can manage them better and access them more easily, and to save on caching, RAM, and loading times…
Sure, in theory it should go much faster with 1 big file… maybe I’ll do my next storagenode as one of those… could be very useful.
In my new setup I will have a 6-drive pool of mirrors, which would give me 3 times the IOPS I currently have, and ofc that’s before counting reads… mirrors basically get read like single drives, which is basically optimal, and then write is ofc ½ because one writes to both disks in the pair…
But still, that’s like twice or more the IOPS of even a moderately sized raidz, and against a single 6 or 8 drive raidz… the same drives as mirrors are 6-8 times the read IOPS… ofc write is ONLY 3-4 times the IOPS… but you get the idea…
That would mean the 6 drives I will use for my mirror pool have… let’s call it 300% of a single HDD’s write IOPS, while my 2x raidz1 of 4 drives each (so 8 drives total) has 200%, because it’s only 2 vdevs… so 6 drives in mirrors vs 8 in raidz1s, and the mirrors still win with 50% better write IOPS, which is a lot when one is talking about copy speeds… when IOPS bound, anyway.
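To put numbers on that comparison, here’s the vdev rule of thumb as a sketch (the rule being that each vdev writes at roughly one drive’s IOPS, while mirror reads can fan out to every disk; percentages are relative to a single HDD):

```python
# Rule of thumb: write IOPS scale with vdev count (each vdev ~ 1 drive),
# while mirror reads can fan out to every disk in the pool.

def pool_iops(drives: int, drives_per_vdev: int, mirror: bool):
    vdevs = drives // drives_per_vdev
    write = vdevs                        # x single-drive write IOPS
    read = drives if mirror else vdevs   # mirrors read from all disks
    return write, read

for name, d, dpv, m in [("6-drive mirrors", 6, 2, True),
                        ("2x raidz1 of 4", 8, 4, False)]:
    w, r = pool_iops(d, dpv, m)
    print(f"{name}: write {w * 100}% / read {r * 100}% of one HDD")
# 6-drive mirrors: write 300% / read 600%
# 2x raidz1 of 4:  write 200% / read 200%
```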
So that should be interesting…