Machine freezing for short periods

Well, I run ZFS on a 5-disk RAIDZ array with an SSD L2ARC… today my SSD (only a regular SATA model) ran at 5-6% load most of the day, with an average response time of 3-4 ms, but sometimes it has sizable spikes of 10-100 ms backlog according to netdata.

Anything often used, such as a database, would in my case be loaded and saved to either RAM or the SSD; it takes maybe days before my cache is filled after a server reboot.

My regular drives had 2% avg load today, and of course that's split between 5 disks. I think I'm running with my HDDs' disk caches off; I haven't gotten around to swapping my RAID controller out for an HBA yet, so I just ran it straight through.

But really, 6-7% avg utilization over the better part of a day on my SSD,
and that's not to speak of what the RAM managed to deal with,
and that's only running 1 Storj node… O.o

Not that I'm complaining… I've been getting 3.5 to 4 MB/s of ingress today.
That's a nice start, and to fix the SSD performance issues I might just find an old Optane drive to throw in a spare PCIe port…

It seems to me like the main thing that gives a good success rate is low disk latency.
And if I'm at 100 ms SSD latency at times, then some uploads are over before my disk can even respond.
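To put rough numbers on that: here's a back-of-envelope sketch of transfer time vs. disk latency. The piece size and ingress rate are my own assumptions for illustration, not anything measured from Storj itself.

```python
def upload_duration_ms(piece_bytes: int, link_bytes_per_s: float) -> float:
    """Time to move a piece over the wire, in milliseconds."""
    return piece_bytes / link_bytes_per_s * 1000

# Hypothetical small piece: 16 KiB arriving at 4 MB/s (roughly my ingress today).
transfer_ms = upload_duration_ms(16 * 1024, 4_000_000)
print(f"transfer time: {transfer_ms:.1f} ms")

# If the SSD is sitting on a 100 ms backlog spike, the disk responds long
# after the transfer itself is done -- the race to store the piece is
# decided by latency, not bandwidth.
disk_latency_ms = 100
print(f"latency is {disk_latency_ms / transfer_ms:.0f}x the transfer time")
```

So even with modest bandwidth, a latency spike in the tens of milliseconds can dwarf the actual transfer; another node with a faster-responding disk wins the race.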

But yeah, performance optimization of Storj wouldn't be a bad thing… I think there is a lot of headroom to gain there. But I've really got no clue; I'm not a programmer, more of a network / systems builder.

I digress…
Long story short: I seriously doubt it's a database issue… :smiley:

Apparently I'm not the only one feeling a bit of pressure; my dual Xeon X5630s are only at 2% avg.
But I did disable ZFS compression, which is also why I only run RAIDZ1 (cheaper on the checksum / parity).
ZFS compression on Storj data is a waste of time; even gzip-9 (the highest possible compression) just gave my CPU a headache and didn't seem to save much capacity, if any… but I want to retest that in a better way.
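One way to retest that without touching the pool: Storj pieces are encrypted client-side, so they should look like random bytes, and random bytes don't compress. A minimal sketch using Python's gzip as a stand-in for ZFS gzip-9 (the 1 MiB size is arbitrary):

```python
import gzip
import os

# Stand-in for a Storj piece: encrypted data is statistically random,
# so 1 MiB of os.urandom output is a reasonable proxy.
piece = os.urandom(1024 * 1024)

compressed = gzip.compress(piece, compresslevel=9)  # gzip-9, highest level
ratio = len(compressed) / len(piece)

print(f"compressed/original ratio: {ratio:.4f}")
# Random data typically comes out at or slightly ABOVE 1.0 after gzip
# (header overhead plus incompressible blocks) -- all CPU cost, no savings.
```

Which matches what I saw: the CPU burns cycles and the on-disk size barely moves, so leaving compression off (or at most lz4, which bails out early on incompressible data) seems like the right call for Storj datasets.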

Wouldn't surprise me if they are all running software arrays with features they cannot support at any meaningful quality.