This makes the RAM buffer 4 MB for every connection you have, so the node downloads a piece and writes it to the HDD in one chunk. The default is 128 KiB, so it writes to a temp folder until the piece is complete and then copies it to its final location. But be careful: if you have a node with 100 connections, that is 400 MB of RAM reserved. So if you have a lot of nodes, do the math.
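For reference, here is roughly where that option goes (I'm assuming the flag is called filestore.write-buffer-size, which is what it's named in my config.yaml; the usual ports/env/mount options are left out of this sketch):

```sh
# sketch only: the usual -p / -e / --mount options are omitted
docker run -d --name storagenode \
  storjlabs/storagenode:latest \
  --filestore.write-buffer-size=4MiB

# memory math: one buffer per concurrent transfer
# 4 MiB x 100 connections = 400 MiB of RAM reserved for this node alone
```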
OK, it turns out that this command loaded the config from the local permanent storagenode folder, so all I had to do was edit it directly in /var/local/storagenode, stop and remove the container, and re-run it.
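In case it helps someone else, these are the steps (the folder is from my setup; the 300 second stop timeout is the usual recommendation so the node can shut down cleanly):

```sh
# edit the persisted config on the host
nano /var/local/storagenode/config.yaml

# stop and remove the old container (identity, data and config stay on the host)
docker stop -t 300 storagenode
docker rm storagenode

# then start the node again with your usual docker run command
```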
@archimede91
What you set in the docker run command supersedes the settings in config.yaml for the running container, but it doesn't change the config file. From the limited Linux knowledge I have, I see that you are checking the setting in config.yaml, which is not changed because you didn't edit it with Notepad++ or nano or something, so of course you see the default value there. But the running container uses the correct value, 4 MiB, because it takes it from the docker run command.
BTW, you don't have to edit anything in config.yaml for docker storagenodes. Just ignore it. Everything you want to set, put in the docker run command, as in the sketch below.
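Something like this (just a sketch based on the standard run command from the docs; the wallet, email, address, storage size and paths are placeholders, put your own values in):

```sh
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e WALLET="0xYourWalletAddress" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.dns.example.com:28967" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/path/to/identity,destination=/app/identity \
  --mount type=bind,source=/path/to/storage,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest \
  --filestore.write-buffer-size=4MiB
```

Everything after the image name is passed straight to the storagenode process, so that is where your extra options belong.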
There are two (technically three) ways of configuring your node for docker:
config.yaml
run parameters
If you specify something in the run parameters, it will be used. If not, it will fall back to the value in config.yaml. If the value is not set in config.yaml either, an internal default will be used.
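If you want to see which value actually wins for a running container, a quick check (assuming your container is named storagenode and /path/to/storage is whatever you mounted to /app/config):

```sh
# arguments passed after the image name (these take precedence)
docker inspect -f '{{.Args}}' storagenode

# what the persisted config.yaml says (used only when no run argument sets it)
grep write-buffer /path/to/storage/config.yaml
```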
No, it should not. No command-line option will change the config.yaml file. It is only generated during setup (or it could be updated during a migration to a new version).
The options provided as arguments will override the options from the config.yaml file at runtime only.
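For completeness, the file is normally created by the one-time SETUP run, roughly like this (the paths are placeholders for your own identity and storage locations):

```sh
# run once when the node is first created; this generates config.yaml
docker run --rm -e SETUP="true" \
  --mount type=bind,source=/path/to/identity,destination=/app/identity \
  --mount type=bind,source=/path/to/storage,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```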
I found that changing the write buffer size leads to some sort of memory leak in 1.76.2. Containers begin to consume RAM like crazy, eating 6-8 GB each. Setting it back to the default solves the issue.