One Node is using more RAM

Hey there,
one of my nodes is using 2.7 GB of RAM. I have no I/O wait and also no other problems.
The node is using 2.3 TB of space and is healthy.

How can I troubleshoot this?

It's usually disk latency or a lack of CPU…

The Docker container has been running for about 26 hours now and consumes about 3.6 GB of RAM.

I had a similar issue some days ago.

Which file system is the node on, and which OS are you using?

I'm using Unraid with XFS as the filesystem.
I'm running two additional nodes on the same system without any problems.
Maybe it's the disk :confused:
EDIT: now the RAM is down to 900 MB…

Find this in config.yaml:

# in-memory buffer for uploads
filestore.write-buffer-size: 4.0 MiB

Set it to 4.0 MiB or lower.

OK, I did that. In my config it was 128 KB, but

# filestore.write-buffer-size: 128 KB

You need to remove the # character at the beginning of the line for the setting to take effect.
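
For example, using the values already shown in this thread, the commented-out line

# filestore.write-buffer-size: 128 KB

would become

filestore.write-buffer-size: 4.0 MiB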

I did, I just wanted to show my old config :stuck_out_tongue:
EDIT:
After restarting the container, the RAM usage climbs up to 3.6 GB.
Maybe it's the filewalker?

Do you use Docker Desktop on Windows?

nope →

Stuff like the filewalker will also increase RAM usage. It does seem a bit high… but if it only happens once in a while I wouldn't worry too much about it, as long as you have plenty of RAM, of course.

XFS is fast but also old; you should avoid using it, imho… but yes, it does seem to give slightly better performance in some cases, though it's a bit less clear-cut once you dig into the details.

I doubt XFS is to blame, though…
Basically, RAM gets used when async ingress isn't written to disk immediately and starts to accumulate in memory, which, as mentioned before, happens due to a lack of disk speed or processing speed.

You can easily test these kinds of behaviors in a VM by limiting its resources…

Of course there can be other edge cases; the filewalker can also cause it, but that might just come down to it slowing down the disk's responsiveness.
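
As a toy Go sketch of that accumulation (my own illustration, not storagenode code): a simulated "disk" that drains pieces more slowly than the simulated "network" delivers them just leaves the in-flight data piling up in RAM.

```go
package example

import "time"

// ingressBacklog shows how pending pieces accumulate in memory when the
// writer (disk) is slower than the producer (network ingress).
func ingressBacklog() int {
	pending := make(chan []byte, 1024) // pieces waiting to be written

	// slow "disk": drains one piece every 50 ms
	go func() {
		for range pending {
			time.Sleep(50 * time.Millisecond)
		}
	}()

	// fast "network": delivers a 2 MiB piece every 5 ms
	for i := 0; i < 200; i++ {
		pending <- make([]byte, 2<<20)
		time.Sleep(5 * time.Millisecond)
	}

	return len(pending) // most pieces are still sitting in RAM
}
```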

6 TB node… not a very well-behaved one… and usually it's at a few hundred MB or less of RAM usage,
but the Proxmox monthly max graph tells a very different story…

and yet the avg graph tells another story.

And I know some people will say it's using too much and show their numbers from inside Docker, but Docker doesn't account for the host OS filesystem memory usage… so yes, I also get the 50–100 MB number inside Docker.


Considering the default value seems pretty low already (128 KiB), putting a higher value than that might actually make nodes use more RAM, right?

That would have been my guess too. But if @padso has no I/O wait or other issues (CPU usage etc.), then that's weird :thinking:
I never see such RAM usage on my RPi 4B hosting several nodes, even after tens of days.

Might depend on the nature of the problem. A small buffer → real I/O needed more often to dump small chunks of pieces → more I/O operations needed for each piece → triggering an IOPS bottleneck → more memory taken by concurrent uploads waiting on I/O. This is how my node hosted on btrfs reduced its memory usage after increasing the buffer size.

I find it unlikely in this specific case though:


Right. If set to 128 KiB, the node will need 128 KiB of RAM to buffer each piece being downloaded by the node. Note that this is for each piece: if your node is downloading 100 pieces => 128 * 100 = 12,800 KiB of RAM will be used.
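(By the same arithmetic, the suggested 4.0 MiB buffer would need 4 MiB * 100 = 400 MiB for the same 100 concurrent pieces, so a larger buffer trades RAM for fewer, larger writes.)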


The RAM is down to 1.5 GB, but I monitored the CPU load and it's hopping up and down. Really strange behavior…

If filestore.write-buffer-size were set to zero, would that just leave buffering to the filesystem or OS instead of the application layer?

Not really, unless the application is written specifically to support this case (with syscalls like splice()). It’s way easier to just assume there is always some form of an application buffer, read() into this buffer, then write() this buffer into the file.
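
As a rough Go sketch of that read-into-a-buffer-then-write pattern (my own illustration, not the storagenode's actual code; bufSize stands in for filestore.write-buffer-size):

```go
package example

import "io"

// writeWithAppBuffer copies src to dst through a fixed-size application
// buffer: read() fills the buffer, then write() flushes it out. One such
// buffer is held per concurrent transfer, which is where the per-piece
// RAM usage mentioned above comes from.
func writeWithAppBuffer(dst io.Writer, src io.Reader, bufSize int) (int64, error) {
	buf := make([]byte, bufSize) // e.g. 128 KiB or 4.0 MiB
	var written int64
	for {
		n, rerr := src.Read(buf) // read() into the application buffer
		if n > 0 {
			wn, werr := dst.Write(buf[:n]) // write() the buffer into the file
			written += int64(wn)
			if werr != nil {
				return written, werr
			}
		}
		if rerr == io.EOF {
			return written, nil
		}
		if rerr != nil {
			return written, rerr
		}
	}
}
```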

You should have used ext4 as the filesystem. unRAID can also format drives that way, just via the console.

I'm also running an unRAID server with a few nodes and disks on it, and I had a similar problem when the disks were formatted with NTFS. Now it looks very good, but the nodes are still small, and the problems only start once the nodes are >500 GB.

After a few days the RAM usage returned to a normal value. I don't know what happened here ^^

@Walter1
In your case, though, it was more likely NTFS than XFS. NTFS is simply an unfortunate choice on Linux/Unix.
