Container using lots of RAM

Hi everyone,

Just to make sure, is it normal that my storagenode Docker container needs more than 10GB of RAM? It’s increased by 2GB since yesterday. No wonder it crashes the whole server sometimes :laughing:


Synology lies about how much RAM a container is using. It’s a bug in the ancient version of Docker they are using.

Check the actual memory usage with top or htop.

The other way is to use the command

docker stats
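If it helps, `docker stats` can also print a one-shot snapshot instead of the live-updating table. The container name `storagenode` below is the one from this thread; adjust it to yours:

```shell
# One-shot snapshot of a single container instead of a live table
docker stats --no-stream storagenode

# Or just name and memory usage for all running containers
docker stats --no-stream --format "{{.Name}}: {{.MemUsage}}"
```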

Better :). Less than 1GB now

Thank you


you can add

> --memory=2g

into your docker command so the memory will be limited to 2GB which is quite enough to run a node.
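For context, a sketch of where that flag would sit in a typical run command. The image name and other flags here are illustrative only; keep your own ports, mounts, and environment variables:

```shell
# Hypothetical minimal example, not your exact command
docker run -d --name storagenode \
  --memory=2g \
  storjlabs/storagenode:latest
```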

Don’t do that.

If your array has performance issues, the node will get killed.


I have been doing this for years and have never had any issues with a memory limit.


I don’t know how it is for you, but for me, if a container gets killed it gets restarted instantly, so no problem. If you run other things on your machine besides Storj, then you should limit the container’s memory usage. The code isn’t perfect, so memory leaks can happen, and if one does, the lack of memory can hurt other processes running on the machine.
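One note on the “restarted instantly” part: that depends on the container’s restart policy (Docker’s default is no restart). If you rely on it, it can be set and checked like this, using the container name from this thread:

```shell
# Ensure the container comes back after an OOM kill or crash,
# but not after a manual `docker stop`
docker update --restart unless-stopped storagenode

# Verify which restart policy is in effect
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' storagenode
```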

This means you have never reached the limit, and hence it has no effect; you might as well remove it.

How is it not a problem? Your node gets killed, so you miss out on the portion of the high traffic that pushed it over the memory limit in the first place. And then once traffic rebounds, you kill it again. Great. No problem at all.

If your node uses a lot of RAM, you should not sweep the symptoms under the rug by killing (pun intended) the messenger; instead, you should improve the performance of your disk array.

Let the OS handle that; the OS manages memory in a more nuanced way than an axe at a 2GB threshold. Nodes are known to consume a lot of RAM when disk performance is inadequate.

The OS will handle that just fine. I could understand if you set the memory limit to, say, 150% of your RAM size, but 2GB is way too little.

You also likely are violating ToS by interfering with node operation.


Sometimes you are forced to use a limit, for example on a Raspberry Pi 3 with only 1GB of RAM; otherwise it will just hang under memory pressure and you would need to physically reset it.
So yes, I agree that it will reduce usage, but it’s a smaller evil than resetting a Raspberry Pi 3 when you are far away.
However, that’s not the case for a Synology NAS, and this parameter is not required there.


I run 2 nodes on a Synology 216+ with 1GB RAM. 2x 7TB, still filling, one almost full. It has never hung or restarted unexpectedly, and the web interface is responsive enough. Sure, I lose more races than other nodes, but it is still pretty rewarding. I don’t know how a Raspberry Pi does with low memory, but Synology handles it pretty well. I never used a memory limit.

The Raspberry Pi 3 just hangs forever. But that’s likely because its swap is either too small or disabled. On Synology, swap is probably enabled and has enough room to grow.
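For what it’s worth, on Raspberry Pi OS the swap size is controlled by dphys-swapfile; a sketch of enlarging it (the 2048 MB value is just an example):

```shell
# Check current swap
free -h

# Raise CONF_SWAPSIZE in /etc/dphys-swapfile, then rebuild the swap file
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo dphys-swapfile swapoff
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
```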
