Kernel support for server-side tcp fast open remains disabled

  1. This is a user ID and group ID; you may use --user $(id -u):$(id -g) instead (the shell will substitute your user ID and group ID automatically; see the sketch after this list)
  2. You do not need to use docker exec; it’s only if you want to :slight_smile: This command allows you to execute a command inside the docker container. It can be useful in some situations; for example, you can see a CLI dashboard:
docker exec -it storagenode ./dashboard.sh

(Ctrl-C to exit)
3. It’s not required, so I do not know why it is used.
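
A minimal sketch of point 1, assuming a typical single-user system where both IDs are 1000 (the omitted flags are the usual ones from your run command):

id -u    # prints your user ID, e.g. 1000
id -g    # prints your group ID, e.g. 1000
docker run -d --user $(id -u):$(id -g) ... storjlabs/storagenode:latest
# the shell expands the two $(...) substitutions, so this becomes --user 1000:1000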


@Alexey
For the OS I used:

sysctl -w net.ipv4.tcp_fastopen=3

For the docker run command I see that it’s missing -w. Is it OK without -w?

--sysctl net.ipv4.tcp_fastopen=3 \
  1. What’s with the --user 1000:1000 parameter? Do I have to use that too? What does it do?

This is just part of my config to run the docker container as that particular user - 1000:1000 is the uid and gid of the pi user on a Raspberry Pi.

@Alexey 's solution is more elegant, as it runs the container as the user you’re currently logged in as.

However, if you’re currently not setting the --user argument, please continue to not set it, as it may well break your installation: the file ownerships would most probably be incorrect. The same goes if you’re using a different value: keep it as it is.
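
If you’re unsure which value your files expect, here’s a hedged way to check (the path is just an illustration; use your actual storage location):

id                                     # your current uid and gid
ls -ln /path/to/storagenode/config     # numeric uid/gid that own the node’s files
# any --user value should match the owner shown by ls -ln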


I’m on a Synology NAS + Docker, input as root (sudo su). I modified the run command and it gives me an error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: write sysctl key net.ipv4.tcp_fastopen: open /proc/sys/net/ipv4/tcp_fastopen: no such file or directory: unknown.

Where is the problem? This is my command:

docker run -d --restart unless-stopped --stop-timeout 300 \
	--sysctl net.ipv4.tcp_fastopen=3 \
	-p 28967:28967/tcp \
	-p 28967:28967/udp \
	-p 14002:14002 \
	-e WALLET="....." \
	-e EMAIL="....." \
	-e ADDRESS=".....:28967" \
	-e STORAGE="7TB" \
	--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
	--mount type=bind,source="/volume1/Storj/",destination=/app/config \
	--log-opt max-size=10m \
	--log-opt max-file=5 \
	--name storagenode storjlabs/storagenode:latest \
	--server.address=":28967" \
	--console.address=":14002" \
	--log.level=error \
	--storage2.piece-scan-on-startup=false

I removed the node and restarted without that parameter, and then I ran the suggested exec command (I understand that it reads and shows the actual value of tcp_fastopen). It says 3… but also an error… ?!?!

 docker exec -it  storagenode cat /proc/sys/net/ipv4/tcp_fastopen

3
cat: /proc/sys/net/ipv4/tcp_fastopen: No such file or directory

This is a docker option, so it requires this format. See docker run --help
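
Side by side, the same kernel key in both tools (the host commands are a sketch; the -w flag only exists for the sysctl binary):

sysctl net.ipv4.tcp_fastopen                     # host: read the current value
sysctl -w net.ipv4.tcp_fastopen=3                # host: -w writes the value
docker run --sysctl net.ipv4.tcp_fastopen=3 ...  # docker: always key=value, no -w flag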

It seems Synology doesn’t support it in full.

But why does it show “3”? Does it mirror the value from the OS? Docker runs by default in bridge mode, not host mode. Maybe it uses the value set in the OS and doesn’t need it set in docker run.

The network connection doesn’t matter here; these are kernel values. For docker you can set them independently in some cases.
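
The net.ipv4.* keys are per network namespace, which is why docker can set them per container. A quick sanity check on a host where it works (this assumes the alpine image; any small image with cat would do):

docker run --rm --sysctl net.ipv4.tcp_fastopen=3 alpine cat /proc/sys/net/ipv4/tcp_fastopen
# should print 3 inside the container, independently of the host value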

Honestly, I do not know why it shows 3 and then prints this error.

Maybe @BrightSilence has an idea. He runs Synology also.

DSM (7.1.1-42962 Update 4) and Docker (20.10.3-1308). I’ve been on these versions for a few months.

After the NAS restart, the docker exec doesn’t show 3 anymore, only the error.

The log shows this at node start:

INFO server kernel support for tcp fast open unknown {"Process": "storagenode"}

It runs without the tcp_fastopen parameter in the docker run command. Maybe I have this problem because I run the node as root?

It? :joy:

I’m not at my system atm. I will check some things tomorrow. I have the setting set on the host machine using a scheduled script at boot. Haven’t set anything in the docker run, but I’ll try to verify if that’s needed. Did you set TCP fast open to 3 before running your Docker container after restart?
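
For reference, such a boot script can be as small as this (a sketch; on Synology you’d typically add it as a boot-triggered task in Task Scheduler):

#!/bin/sh
# sysctl changes don't survive a reboot, so re-apply TCP fast open at boot
sysctl -w net.ipv4.tcp_fastopen=3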

@BrightSilence
:rofl::rofl::rofl::man_facepalming:t2: I was doing too many things on a free Sunday. Sorry… I corrected the mistake. When you are a non-native English speaker, it happens. … and with the speed at which AI is developing, who knows who would be at the other end of the line in a few years :smile:.
Yes, I set the parameter in the boot script, also via sysctl, and restarted. I checked it with that command and it was showing 3.
Then stop container > remove > launch new container. And error… Now it is running without that parameter and it says that the state of tcp fastopen is unknown. I can’t tell if it’s working or not. That’s why there should be an indicator on the dashboard like QUIC has.
Thanks for any suggestions!

docker exec -it  storagenode cat /proc/sys/net/ipv4/tcp_fastopen
3
2023-04-03T18:44:20.577Z        INFO    server  existing kernel support for server-side tcp fast open detected  {"Process": "storagenode"}

Big thanks for the idea to include --sysctl in docker run :wink:

Now… Does it change anything other than this message?

Now… Does it change anything other than this message?

Not that I’ve noticed yet.

netstat -s | grep Fast

Should give some output when a TFO connection has been made. I’ve done some tests and my router does pass TFO through correctly so I can only assume my node hasn’t seen a TFO request just yet.
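
If netstat prints nothing, the raw kernel counters show the same information (assuming iproute2’s nstat is available):

nstat -az | grep -i fastopen
# counters like TcpExtTCPFastOpenPassive should increment once a TFO connection lands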

Perhaps it would be appropriate to put a notice on the dashboard for Fast Open, as there is for QUIC.

Thanks a lot for your command.

Can you explain why you put

	--storage2.piece-scan-on-startup=false

into the command? I have this in the config.yaml.

Can you also explain what

	--log-opt max-file=5 \

does?

Thanks and kind regards,

@Walter1
The first one, if it’s false, stops the File Walker from running. If it’s true, FW runs at node start, and after any updates and restarts.
See post “Tuning the filewalker”.
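For reference, the config.yaml equivalent should look like this (as far as I know, a flag passed on the docker run command line overrides the file):

# in config.yaml:
storage2.piece-scan-on-startup: false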
The second one refers to logs; it’s a log rotation mechanism. One line specifies the max size of one file, and the other the number of files that are created and retained. Once this count is reached, each new log file deletes the oldest one, keeping the number of log files at that value. A new log file is created when the max size is reached.
If I remember correctly, this only works if you didn’t change the location of the log files from the default.
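
You can check where docker keeps those rotated files (valid for the default json-file log driver):

docker inspect --format '{{.LogPath}}' storagenode
# prints something like /var/lib/docker/containers/<id>/<id>-json.log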
This is the expanded version:

Take notice that I keep the FW off for 2 nodes that are on a NAS with only 1GB RAM. For the others, I have 10GB and 18GB RAM and the FW is on.
There are also some options that refer to how data is buffered and written to disk, to reduce fragmentation. I keep those at their defaults on systems with low RAM. I use different Docker ports for each node, because I was getting QUIC misconfigured when I was using the same port. Now I never get the QUIC error.
