ZFS discussions

After lots of performance testing I ended up running a SLOG only (maybe with a better SSD that would change) and setting sync=always, so that incoming writes always go through my SSD and can be acknowledged with minimal delay. It seemed like running async (sync=disabled) actually hurt the data flow from the “satellites” / network…
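Roughly what that looks like in ZFS terms, as a sketch; the pool name tank, the dataset tank/storagenode and the device path are placeholders, not my actual layout:

# add a dedicated SLOG device to the pool
zpool add tank log /dev/disk/by-id/ata-EXAMPLE-SSD-part1

# force every write through the ZIL/SLOG so it can be acked from the SSD
zfs set sync=always tank/storagenode

# verify
zfs get sync tank/storagenode
zpool status tank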

This seems to have reduced my latency even further… slowly clawing my way up to near a 90% success rate on uploads… xD

========== AUDIT ==============
Critically failed:     0
Critical Fail Rate:    0.000%
Recoverable failed:    0
Recoverable Fail Rate: 0.000%
Successful:            138
Success Rate:          100.000%
========== DOWNLOAD ===========
Failed:                13
Fail Rate:             0.394%
Canceled:              5
Cancel Rate:           0.151%
Successful:            3283
Success Rate:          99.455%
========== UPLOAD =============
Rejected:              1
Acceptance Rate:       99.996%
---------- accepted -----------
Failed:                0
Fail Rate:             0.000%
Canceled:              3605
Cancel Rate:           14.742%
Successful:            20849
Success Rate:          85.258%
========== REPAIR DOWNLOAD ====
Failed:                0
Fail Rate:             0.000%
Canceled:              0
Cancel Rate:           0.000%
Successful:            3
Success Rate:          100.000%
========== REPAIR UPLOAD ======
Failed:                0
Fail Rate:             0.000%
Canceled:              70
Cancel Rate:           14.199%
Successful:            423
Success Rate:          85.801%
========== DELETE =============
Failed:                0
Fail Rate:             0.000%
Successful:            543
Success Rate:          100.000%

Short log; I had to rerun my run command this morning as my rsync of the storagenode finally finished, it only took since the 28th… lol. It didn't seem to affect the storagenode while it was running, so that was good… not by much anyway, though my success rates were slightly lower…
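For reference, the move was basically the usual two-pass rsync approach; the paths below are placeholders and the stop timeout is just the commonly recommended one, not necessarily what I used:

# first pass while the node keeps running (this is the part that takes days)
rsync -a --info=progress2 /tank/storagenode-old/ /tank/storagenode/

# stop the container, then a final pass to pick up whatever changed meanwhile
docker stop -t 300 storagenode
rsync -a --delete /tank/storagenode-old/ /tank/storagenode/

# then rerun the usual docker run command pointing at the new path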

So I should finally have my entire storagenode on ZLE (zero-length encoding) compression with a 256K recordsize, at a compression factor of 1.01x. The system seems to run better on ZLE than on LZ4, dunno why… I'm guessing LZ4's endless attempts at compressing encrypted data just aren't worth it.
So basically I sacrifice a slight bit of disk space (LZ4 needed about 2% less space), but I gain lower latency, which pushes up my success rates, or so I think… xD
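In ZFS terms that is just two properties on the node dataset (name is a placeholder); note they only apply to newly written data, which is why the rsync rewrite was needed in the first place:

# only compress runs of zeroes instead of trying to compress encrypted pieces
zfs set compression=zle tank/storagenode
zfs set recordsize=256K tank/storagenode

# check what it actually saves
zfs get compressratio tank/storagenode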

While I've been waiting for the damn rsync to finish, I made an epic cron storagenode log setup…

It might seem rather simple to veteran Linux users, but I had been looking for something like this for Docker, so I just made it myself… now I can kill my node without ever dropping a line from my log… xD and I can still run docker logs storagenode --follow
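The rough idea is something like this minimal sketch (not my exact script; the log path and schedule are placeholders, and the hourly --since window is approximate, a real version should track the last timestamp it saw):

#!/bin/bash
# /etc/cron.hourly/storagenode-log (sketch)
# append the last hour of container output to a dated file,
# so the log survives even if the container is removed and recreated
LOGDIR=/tank/logs/storagenode
mkdir -p "$LOGDIR"
docker logs --since 1h --timestamps storagenode >> "$LOGDIR/$(date +%F).log" 2>&1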

So all is right with the world… then I created a logs dataset and tried dedup (utterly disappointing), and ended up using gzip-9 on the log folder for a 9.8x log compression ratio with a 1M recordsize… the 1M recordsize didn't add much… but I'm so close to 10x, lol, if I could squeeze more out I would…
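Something along these lines, with the dataset name again being a placeholder:

# dedicated dataset for logs; plain text compresses really well with gzip-9
zfs create -o compression=gzip-9 -o recordsize=1M tank/logs

# this is where the 9.8x shows up
zfs get compressratio tank/logs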

Expand it for details on how it works; it's well documented.
And of course criticism is always welcome.

Oh yeah, and I got my HDD fixed; it turned out the problem was mainly with its buffer/cache, so I turned that off, and now it's been running for days with only 2 read errors… xD. I also pulled my OS drive, which I had forgotten to put on the UUID-based ZFS device naming, so I had to turn the machine off while everything was still running… ZFS didn't care though; it was running sync=standard at the time… so there is that caveat.
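For the record, assuming the cache in question is the drive's volatile write cache, turning it off looks roughly like this (device path is a placeholder, and the setting usually does not survive a power cycle, so it needs a udev rule or startup script to stick):

# disable the on-disk write cache on the flaky drive
hdparm -W 0 /dev/disk/by-id/ata-EXAMPLE-HDD

# confirm the current state
hdparm -W /dev/disk/by-id/ata-EXAMPLE-HDD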
Running async (sync=disabled) didn't seem to cause any damage; the only thing it does is make my web GUIs disconnect over extended periods.
So one could do that for performance reasons… and I did, to move the files faster… it would have taken over a week if I hadn't… copying between two datasets on the same array of drives is rough.
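I.e. something like this for the duration of the copy, switching it back afterwards (dataset name is a placeholder):

# accept losing the last few seconds of writes on power loss in exchange for speed
zfs set sync=disabled tank/storagenode
# …run the rsync…
zfs set sync=always tank/storagenode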