Storage node memory consumption grows

I’ve had a storage node up and running for a week, but I’ve noticed via telegraf monitoring that memory consumption has grown day by day until it saturated the available capacity.

After stopping and restarting the node, everything went back to normal. Has anybody else experienced this?

Hi @sinaure
I think you should look deeper into memory consumption with:
ps -A --sort -rss -o comm,pmem,rss | head -n 20

Here is my output (uptime from the last automatic update):

COMMAND         %MEM   RSS
storagenode      2.9 122188
systemd-journal  1.9 82584
telegraf         0.7 33084
storagenode-upd  0.3 15888
systemd          0.2 10092
mc               0.2  8960
systemd          0.1  8360
sshd             0.1  8056
sshd             0.1  8000
systemd-logind   0.1  7192
pickup           0.1  7140
sshd             0.1  7116
qmgr             0.1  6864
tmux: server     0.1  6136
dhclient         0.1  5408
bash             0.1  4992
bash             0.1  4488
bash             0.1  4480
bash             0.1  4468
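If you want to track growth over time without a full telegraf dashboard, here is a minimal sketch (assuming a GNU/Linux system with procps `ps`; the process name `storagenode` matches the output above):

```shell
#!/bin/sh
# Minimal sketch: log the resident set size (RSS, in kB) of a named
# process with a timestamp, so memory growth is visible over time.
log_rss() {
    # first matching process; prints "<epoch seconds> <rss kB>"
    rss=$(ps -C "$1" -o rss= | head -n 1)
    echo "$(date +%s) ${rss:-0}"
}

# Example: append a sample every 60 seconds
# while true; do log_rss storagenode >> rss.log; sleep 60; done
```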

Here is my chart from the last 7 days:


Memory usage may spike a bit from time to time, but from what I’ve seen in tests, lacking memory just limits performance. I think the highest memory usage I’ve seen was around 3 GB on a single node, but normally it’s around 200 MB on average over months.

As long as you have something like 1 GB of free memory for the node, it shouldn’t be a problem. High memory usage can also be caused by disk latency: if the system cannot write incoming data to disk fast enough, the data is held in memory instead.
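If you suspect disk latency is behind the buffering, here is a rough sketch for timing synchronous writes on the storage path (assumes GNU `dd` with `oflag=dsync`; the example path is an assumption, use wherever your node writes):

```shell
#!/bin/sh
# Rough sketch: time synchronous 4 kB writes on a given path.
# Slow dsync writes suggest incoming pieces will back up in memory.
check_write_latency() {
    target="$1/latency-test.$$"
    # dd reports its timing summary on stderr; keep only that line
    dd if=/dev/zero of="$target" bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
    rm -f "$target"
}

# Example: check_write_latency /mnt/storagenode
```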


Could the NFS mount be causing this problem? I’ve read about poor performance with it.
Currently:

Server (where docker is running) is LAN connected to router
NAS is LAN connected to router
The server and NAS are not directly connected to each other via LAN

Here is how I set up the traffic between the NAS and the server:

We do not support network-attached storage; in general, the storagenode is incompatible with any network-attached storage, especially NFS and SMB.
The only working protocol is iSCSI, but even then your node will lose races for pieces to nodes with direct-attached drives.
You can take a look at other problems with NFS: Topics tagged nfs
Sooner or later your node will stop working.
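For reference, attaching an iSCSI LUN usually looks something like this with open-iscsi (a hedged sketch, not runnable as-is: the target IP and IQN are placeholders, and the commands need root plus a LUN exported by the NAS):

```shell
# Hedged sketch: discover and log in to an iSCSI target with
# open-iscsi, after which the LUN appears as a local block device.
# 192.168.1.10 and the IQN below are placeholders for your NAS.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2000-01.com.example:storagenode -p 192.168.1.10 --login
# The new device (e.g. /dev/sdX) can then be formatted and mounted
# like a direct-attached drive:
# mkfs.ext4 /dev/sdX && mount /dev/sdX /mnt/storagenode
```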


Thanks for the support. After switching to an iSCSI LUN mount, everything is working great. I have updated my tutorial accordingly.


You might be able to patch it by moving the databases to an SSD. If they aren’t on the network drive, it should in theory be less of a problem. Not saying it will fix anything…

But it’s something I might try. Then again, I’m crazy, so most likely don’t do that, lol. Just saying it could help; if nothing else, others will be interested in how well it works.
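For anyone wanting to try the database move, here is a hedged sketch of what it might look like with a Docker setup (the `storage2.database-dir` option and all paths here are assumptions for illustration; check the official storagenode documentation and back everything up first):

```shell
# Hedged sketch: relocate the node's sqlite databases to an SSD path.
# All paths are illustrative; verify option names against current docs
# and take a backup before touching the databases.
docker stop -t 300 storagenode
docker rm storagenode
mkdir -p /mnt/ssd/storagenode-db
cp /mnt/storagenode/storage/*.db /mnt/ssd/storagenode-db/
docker run -d --name storagenode \
  --mount type=bind,source=/mnt/storagenode,destination=/app/config \
  --mount type=bind,source=/mnt/ssd/storagenode-db,destination=/app/dbs \
  storjlabs/storagenode:latest \
  --storage2.database-dir=/app/dbs
```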