@Vortal and @SlavikCA you are awesome! I will try it and report the findings. @SlavikCA you should set net.core.rmem_max and net.ipv4.tcp_fastopen from a script that runs as root on startup, to avoid losing them on a DSM upgrade.
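For example, a minimal sketch of such a boot script (schedule it as a root task with a boot-up trigger in DSM's Task Scheduler; the rmem_max value here is just the one commonly suggested for QUIC, use whatever you already set):

#!/bin/sh
# Re-apply sysctl values that a DSM upgrade may reset
sysctl -w net.core.rmem_max=2500000
sysctl -w net.ipv4.tcp_fastopen=3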
@Alexey and @SlavikCA - So in the Docker run command, I don’t have to use the --sysctl net.ipv4.tcp_fastopen=3 flag after setting host network mode?
Can I use this port: --server.private-address="127.0.0.1:14021"?
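In other words, something like this? (Just a sketch; identity, wallet, and mount flags omitted, and assuming the usual storjlabs image.)

# With host networking the container shares the host's network namespace,
# so the host's net.ipv4.tcp_fastopen=3 applies directly and Docker will
# reject network-namespaced --sysctl flags anyway.
docker run -d --restart unless-stopped \
  --network host \
  --name storagenode \
  storjlabs/storagenode:latest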
Thanks a lot, guys; it works! I will leave one node without TCP fastopen to see the difference in traffic. I’ll update my run command post with the new modifications. After 31 min, I got this:
On my server I switched the containers to host networking and now I’m finally receiving fastopen connections too. Before, it wasn’t working, even though the node logs said it was configured.
What resources does TCP fastopen consume compared to normal TCP and QUIC? Does it take more RAM or CPU cycles on the router, or on the host? Is there anything we should keep an eye on?
Do those cookies pile up? Where? Do they ever get removed?
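From what I can tell (my assumption, not from any Storj docs): on Linux, the client side caches the cookie per destination in the kernel’s tcp_metrics table, which is size-bounded and evicted like any cache. You can inspect or clear it:

# show cached per-destination metrics; entries with a cookie list fo_cookie
ip tcp_metrics show
# flush the cache; cookies are simply re-negotiated on the next connection
sudo ip tcp_metrics flush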
They are well-known services. You can fuck up badly if you pick the wrong one, so normally you stay away from them. But nothing forbids using one for different requests.
You risk getting spammed with the wrong kind of incoming traffic for your app, but as long as the port isn’t otherwise used on the router, it works normally.
E.g., you can forward port 80 for VNC if you have no web server behind the router. But if you ever set up a web server, it will not work.
Yes, I read it correctly, thank you. I understood that there is a better chance of winning download races. My question was: if we activate it now, we have better luck… but if it is not activated, are we at a disadvantage?
We use the customer’s point of view on traffic, so a download (from the Storj network) is egress from your node, and an upload (to the Storj network) is ingress to your node.
TCP Fastopen affects both, because your node will respond faster, so it will win races more often.
When a customer wants to upload a file, their uplink contacts 110 nodes for each segment of the file and starts uploads in parallel; when the first 80 for each segment are completed, all remaining ones get cancelled. The same goes for downloads, but the uplink requests 39 nodes that hold pieces of each segment and starts downloads in parallel; as soon as the first 29 for each segment are completed, all remaining ones get cancelled. So the customer always uploads to and downloads from the nodes that are fastest from their location.
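To make the race concrete, here is a rough shell sketch of the pattern; transfer_piece is a hypothetical stand-in for uploading one erasure-coded piece (the real uplink does this natively, this is only an illustration):

#!/bin/bash
# start N transfers in parallel, keep the first K that finish,
# cancel the slow remainder (the "long tail")
N=110; K=80
pids=()
for i in $(seq 1 "$N"); do
  transfer_piece "$i" &   # hypothetical: upload piece $i to node $i
  pids+=($!)
done
finished=0
while [ "$finished" -lt "$K" ]; do
  wait -n                      # wait for any one transfer to complete
  finished=$((finished + 1))
done
kill "${pids[@]}" 2>/dev/null  # cancel everything still running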
I added net.ipv4.tcp_fastopen=3 in /etc/sysctl.conf.
Stopped the container, rebooted, removed the container, and reran the command with

--sysctl net.ipv4.tcp_fastopen=3

and I’m getting this result:
$ sudo sysctl -a | grep "net.ipv4.tcp_fastopen ="
net.ipv4.tcp_fastopen = 3
Are there any log commands I should run to verify it’s working properly?
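One way to check (my assumption, not an official procedure): the kernel keeps fastopen counters that should keep growing while the node handles traffic:

# TFO counters (TCPFastOpenActive, TCPFastOpenPassive, ...)
netstat -s | grep -i fastopen
# or, with iproute2:
nstat -az | grep -i fastopen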