Storagenode Memory Utilization Comparison

Not sure if a thread akin to this already exists…

Recently I had a bit of an issue with my node, and now that it’s basically back to normal, my memory usage is slightly on the higher end of what I’ve been used to… so I wanted to get an idea of whether this is due to higher activity levels at the moment…

and to see how different hardware, OSes and node sizes affect memory utilization.
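
For anyone who wants to add their own numbers, something like the snapshot below should grab them on a Docker-based node (the container name storagenode is just the common default, adjust it to your setup):

# one-shot memory snapshot of the storagenode container
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}" storagenode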

There is no significant IOwait… nothing I would consider significant, anyway…

Up to 400 MByte today according to netdata

No significant IOwait showing in Proxmox.

Ran a zpool iostat -w 600:
tank          total_wait     disk_wait    syncq_wait    asyncq_wait
latency      read  write   read  write   read  write   read  write  scrub   trim
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
1ns             0      0      0      0      0      0      0      0      0      0
3ns             0      0      0      0      0      0      0      0      0      0
7ns             0      0      0      0      0      0      0      0      0      0
15ns            0      0      0      0      0      0      0      0      0      0
31ns            0      0      0      0      0      0      0      0      0      0
63ns            0      0      0      0      0      0      0      0      0      0
127ns           0      0      0      0      0      0      0      0      0      0
255ns           0      0      0      0      0      0      0      0      0      0
511ns           0      0      0      0      0      0      0      0      0      0
1us             0      0      0      0      0      0      0      0      0      0
2us             0      0      0      0      0      8      0      4      0      0
4us             0      0      0      0      1     32      0     23      0      0
8us             0      0      0      0      0      1      0      5      0      0
16us            0      0      0      0      0      0      0      1      0      0
32us            0      0      0      0      0      0      0      2      0      0
65us            0      1      0      1      0      0      0      3      0      0
131us           0     14      0     31      0      0      0      7      0      0
262us           0     40      0     85      0      0      0     15      0      0
524us           0     34      0     70      0      0      0     21      0      0
1ms             0     40      0     16      0      0      0     28      0      0
2ms             0     36      0      8      0      0      0     25      0      0
4ms             0     15      0      2      0      0      0     11      0      0
8ms             0     10      0      4      0      0      0      7      0      0
16ms            1     14      1      7      0      0      0     11      0      0
33ms            0     12      0      4      0      0      0      9      0      0
67ms            0      9      0      1      0      0      0      7      0      0
134ms           0      4      0      0      0      0      0      3      0      0
268ms           0      0      0      0      0      0      0      0      0      0
536ms           0      0      0      0      0      0      0      0      0      0
1s              0      0      0      0      0      0      0      0      0      0
2s              0      0      0      0      0      0      0      0      0      0
4s              0      0      0      0      0      0      0      0      0      0
8s              0      0      0      0      0      0      0      0      0      0
17s             0      0      0      0      0      0      0      0      0      0
34s             0      0      0      0      0      0      0      0      0      0
68s             0      0      0      0      0      0      0      0      0      0
137s            0      0      0      0      0      0      0      0      0      0
--------------------------------------------------------------------------------

Got a few stragglers at 134ms.
I suppose those could be to blame… it doesn’t seem high enough to make a significant impact.
Of course it could be the reason I’m seeing new numbers in the storagenode memory usage, or it’s something to do with the extra latency created by my now dual-disk SLOG.

Or the storagenode cache size changes on a longer time scale and I’m still seeing some of the after-effects of the issue I had a few days ago.
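
To rule the SLOG in or out, zpool iostat can also break latency down per vdev; something along these lines should do it (pool name tank as above, interval and count picked arbitrarily):

# average per-vdev latencies, log devices listed separately
zpool iostat -v -l tank 10 3
# same latency histogram as above, but split per vdev
zpool iostat -w -v tank 600 1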

Don’t use dedup, lol. I had decided to try out dedup on my VM dataset, which shares the pool with the storagenode…

It was a long time coming; I think I had it running for 3 weeks before the issues really started to show up. I figured it was a good place to test because I knew I could reverse any ill effects by moving the fairly limited VM disks out of the pool and back in… though that didn’t seem to be required.
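
For reference, checking and reversing it looks roughly like this (tank/vms is just an example name for my VM dataset; already-written blocks stay deduplicated until they are rewritten, which is why moving the data out of the pool and back in is the actual fix):

# how much the pool is actually deduplicating, and how large the dedup table has grown
zpool list -o name,size,alloc,dedupratio tank
zpool status -D tank
# stop dedup for new writes on the VM dataset (example name)
zfs set dedup=off tank/vms
zfs get dedup tank/vms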

My storagenode memory usage has now finally returned to normal…
first time I’ve seen numbers this low in maybe a week, since I had all those latency issues.
Anyway, I wanted to make a bit of a record so that others might use it later…

From what I can tell it can take about a week until node memory usage drops down to normal levels again… or at least that’s how it seemed to go in my case…

If this is my last addition, then it will have remained low for an extended period…
might do a 1- or 3-month follow-up…

Figured it would make sense to have a place where SNOs could get an idea of what memory utilization to expect. I sure was happy that I had a good deal of spare memory, since I peaked at about 3 GB of cache usage on my storagenode.