iowait / disk latency is usually the cause… it doesn’t take much; even 20% iowait on average can cause behavior like this. The best way I’ve found to identify problems is looking at individual disk latency.
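If you want to eyeball this yourself, here’s a minimal sketch that computes per-disk average latency from /proc/diskstats, assuming a Linux host with sdX-style device names (`iostat -x` from the sysstat package reports the same numbers as r_await/w_await):

```python
#!/usr/bin/env python3
"""Rough per-disk average latency from /proc/diskstats (Linux).

avg latency = total ms spent on I/O / number of completed I/Os
Counters are cumulative since boot, so this is a lifetime average;
sample twice and diff the counters for a current snapshot.
"""

with open("/proc/diskstats") as f:
    for line in f:
        fields = line.split()
        dev = fields[2]
        # naive filter: keep whole disks like sda/sdb, skip partitions
        # (adjust for nvme0n1-style names if you have them)
        if not dev.startswith("sd") or dev[-1].isdigit():
            continue
        reads, read_ms = int(fields[3]), int(fields[6])
        writes, write_ms = int(fields[7]), int(fields[10])
        r_lat = read_ms / reads if reads else 0.0
        w_lat = write_ms / writes if writes else 0.0
        print(f"{dev}: avg read {r_lat:.1f} ms, avg write {w_lat:.1f} ms")
```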
From what I’ve seen (I’ve been testing this out a bit), the RAM is allocated by the storagenode for caching, and it can take a while for the utilization to go back down…
And even though I’ve tried to improve it, the node will sometimes still demand a significant amount of memory. However, some people run with very low memory amounts, so it’s possible the storagenode finds another way to cope if it has no extra memory to use…
I wouldn’t recommend that, though that’s more personal preference; I can’t say whether it actually does any harm. I’ve been tracking this kind of behavior for 2-3 months and it still keeps happening. It may also be related to node activity, since it seems to come and go. For me it’s pretty irrelevant, as my server has plenty of memory to allocate if need be.
It does, however, seem closely linked to disk activity, and Storj recommends 1GB per node, even if it only uses 50-150MB for at least 95% of the operational time.
Not really a big surprise that the storagenode requires memory… are you running 512B sector sizes?
My SSD, which is designed as a memory swap drive, comes with some interesting facts about the difference between 512B and 4K sector sizes on HDDs/SSDs.
A table is required to keep track of the data, and with 512B sectors (which is what I use), using the drive for swap would require something like 24GB of RAM to track it, while with 4K sectors the required RAM would only be about 3GB…
Something similar might happen with the storagenode: if it keeps track of file locations on the HDD, those tables might also take 8 times the memory on 512B drives, which may explain why my storagenode at times goes above 1GB in utilization.
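The 8x factor is just the ratio of sector counts: a 512B-sector drive has eight times as many sectors to index as a 4K one. As a back-of-the-envelope check (the ~1.5TiB capacity and 8 bytes per table entry are my assumptions, picked because they roughly reproduce the 24GB/3GB figures):

```python
# per-sector tracking table size; capacity and entry size are assumptions,
# not anything measured from the storagenode
capacity = 1.5 * 2**40   # ~1.5 TiB drive (assumption)
entry = 8                # bytes per table entry (assumption)

for sector in (512, 4096):
    entries = capacity / sector
    print(f"{sector}B sectors: {entries:.2e} entries -> "
          f"~{entries * entry / 2**30:.0f} GiB of table")
# -> 512B: ~24 GiB, 4K: ~3 GiB; the ratio is always exactly 8x
```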
But the HDDs I’ve got only support 512B sectors.
Of course it’s not all disadvantage with running 512B, since the possible IOPS on small data blocks is much higher: for data blocks of 512B or less, a 4K drive still has to transfer a full 4K sector per operation, so it can only manage 1/8th of the IOPS.
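To put a number on that (the bandwidth figure below is an assumption just to anchor the ratio, and real HDD random I/O is seek-bound, so treat this as an upper bound on the difference):

```python
# for requests smaller than one sector the drive still moves a whole sector,
# so at a fixed internal bandwidth the op rate scales as 1/sector_size
bandwidth = 100 * 10**6  # bytes/s of sector transfers (assumed figure)
ops = {s: bandwidth // s for s in (512, 4096)}
print(f"512B: {ops[512]:,} ops/s, 4K: {ops[4096]:,} ops/s, "
      f"ratio {ops[512] / ops[4096]:.0f}x")
# seek time dominates random HDD I/O, so the real-world gap is smaller
```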
Not that I have any use for this, I think…
Anyway, it might be an interesting comparison to do.