That’s right, you should see the following message in your logs instead
INFO server existing kernel support for server-side tcp fast open detected {"process": "storagenode"}
Interestingly, I have just the one node running 1.79.4 on Debian 11 which has no information in the logs regarding fastopen. It’s like it’s not even checking to see if the ability is there or not.
Any idea why this might be?
I haven’t changed the log level on any of my machines, it’s the same for all of them…
The TCP fastopen log entries are visible only at the info log level, not at error or warn. See the discussions above.
The default is Info, I think. I just double checked and all my nodes are set to the same.
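For reference, one way to double check it (a sketch, assuming the standard docker setup where the config file is mounted at /app/config inside the container; adjust the path to your layout):
# docker exec -it storagenode grep log.level /app/config/config.yaml
If it prints a commented-out # log.level: info, the default of info is in effect; an explicit log.level: error or log.level: warn would hide the fastopen message.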
Odd indeed!
I have the same for my Synology:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: write sysctl key net.ipv4.tcp_fastopen: open /proc/sys/net/ipv4/tcp_fastopen: no such file or directory: unknown.
With my limited Linux skills I tried to search for that file at that path, and it is not there. Maybe the parameter needs adjusting with the correct path, but I can’t find my way around the CLI. Maybe @BrightSilence could take a shot at it.
You need to use the --sysctl net.ipv4.tcp_fastopen=3 option in the docker run command. But I guess you may also need to add the --privileged option…
So, I’ll try this way?
docker run -d --restart unless-stopped --stop-timeout 300 \
--privileged \
--sysctl net.ipv4.tcp_fastopen=3 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 14002:14002 \
...
@Alexey
I got the same error… no such file exists.
Maybe I have to create the tcp_fastopen file inside the container? How could I do that?
BTW, the kernel has the tcp_fastopen file with value 3, at this path:
/proc/sys/net/ipv4/tcp_fastopen
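For anyone who wants to verify their own host, either of these should work; a value of 3 means fast open is enabled for both outgoing (1) and incoming (2) connections:
# cat /proc/sys/net/ipv4/tcp_fastopen
3
# sysctl net.ipv4.tcp_fastopen
net.ipv4.tcp_fastopen = 3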
I ran the ls command on the host (as root) and inside docker, and I get the files below:
for kernel:
# ls /proc/sys/net/ipv4
conf ipfrag_time tcp_frto tcp_reordering
fwmark_reflect ip_local_port_range tcp_fwmark_accept tcp_retrans_collapse
icmp_echo_ignore_all ip_local_reserved_ports tcp_invalid_ratelimit tcp_retries1
icmp_echo_ignore_broadcasts ip_nonlocal_bind tcp_keepalive_intvl tcp_retries2
icmp_errors_use_inbound_ifaddr ip_no_pmtu_disc tcp_keepalive_probes tcp_rfc1337
icmp_ignore_bogus_error_responses neigh tcp_keepalive_time tcp_rmem
icmp_msgs_burst netfilter tcp_limit_output_bytes tcp_sack
icmp_msgs_per_sec ping_group_range tcp_low_latency tcp_slow_start_after_idle
icmp_ratelimit route tcp_max_orphans tcp_stdurg
icmp_ratemask tcp_abort_on_overflow tcp_max_reordering tcp_synack_retries
igmp_link_local_mcast_reports tcp_adv_win_scale tcp_max_syn_backlog tcp_syncookies
igmp_max_memberships tcp_allowed_congestion_control tcp_max_tw_buckets tcp_syn_retries
igmp_max_msf tcp_app_win tcp_mem tcp_thin_dupack
igmp_qrv tcp_autocorking tcp_min_rtt_wlen tcp_thin_linear_timeouts
inet_peer_maxttl tcp_available_congestion_control tcp_min_snd_mss tcp_timestamps
inet_peer_minttl tcp_base_mss tcp_min_tso_segs tcp_tso_win_divisor
inet_peer_threshold tcp_challenge_ack_limit tcp_moderate_rcvbuf tcp_tw_recycle
ip_default_ttl tcp_congestion_control tcp_mtu_probing tcp_tw_reuse
ip_dynaddr tcp_dsack tcp_no_metrics_save tcp_window_scaling
ip_early_demux tcp_early_retrans tcp_notsent_lowat tcp_wmem
ip_forward tcp_ecn tcp_orphan_retries tcp_workaround_signed_windows
ip_forward_use_pmtu tcp_ecn_fallback tcp_pacing_ca_ratio udp_mem
ipfrag_high_thresh tcp_fack tcp_pacing_ss_ratio udp_rmem_min
ipfrag_low_thresh tcp_fastopen tcp_probe_interval udp_wmem_min
ipfrag_max_dist tcp_fastopen_key tcp_probe_threshold vs
ipfrag_secret_interval tcp_fin_timeout tcp_recovery xfrm4_gc_thresh
and for docker:
# docker exec -it storagenode ls /proc/sys/net/ipv4
conf ip_local_port_range tcp_base_mss
fwmark_reflect ip_local_reserved_ports tcp_ecn
icmp_echo_ignore_all ip_no_pmtu_disc tcp_ecn_fallback
icmp_echo_ignore_broadcasts ip_nonlocal_bind tcp_fwmark_accept
icmp_errors_use_inbound_ifaddr ipfrag_high_thresh tcp_min_snd_mss
icmp_ignore_bogus_error_responses ipfrag_low_thresh tcp_mtu_probing
icmp_ratelimit ipfrag_time tcp_probe_interval
icmp_ratemask neigh tcp_probe_threshold
igmp_link_local_mcast_reports netfilter vs
ip_forward ping_group_range xfrm4_gc_thresh
ip_forward_use_pmtu route
So it seems that inside the container there is no tcp_fastopen file. What can be done? How can I create the file each time I stop, rm, and restart the node? Is it necessary?
No. It looks like it’s disabled for docker on Synology.
I got the same error on my NAS when I tried enabling it. It’s based on Debian Jessie so I just assumed it was because there is no support in the kernel for TCP fastopen…
I did not use --privileged in the docker command (just did the sysctl setting) and storagenode reports it has detected fastopen. Debian Bullseye on kernel 5.10.
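To be clear, by “the sysctl setting” I mean enabling it system-wide on the host, roughly like this (a sketch for a Debian-like system; the file name under /etc/sysctl.d is arbitrary):
# sysctl -w net.ipv4.tcp_fastopen=3
# echo 'net.ipv4.tcp_fastopen = 3' > /etc/sysctl.d/99-tcp-fastopen.conf
The first command applies it immediately, the second persists it across reboots. In my case that was enough for storagenode to detect it without --sysctl or --privileged on the container.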
Wondering if I have to enable it on VPN proxy server? I did, but is it needed?
I believe it should also be enabled for the VPN tunnel, if the VPN you use allows it.
Do I have to change the config in OpenVPN for that? Or is it default enabled? If yes, is that a client or server setting?
Honestly, I do not know.
I tried to quickly find an answer, but could only find how to enable it system-wide with sysctl.
So, you would need to experiment.
I enabled it system-wide. The logs are also saying it’s enabled. Is there a way to check if it’s working?
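One way to check (a sketch; nstat comes with iproute2 and reads the kernel’s fast open counters):
# nstat -az | grep TcpExtTCPFastOpen
If TcpExtTCPFastOpenPassive keeps increasing while the node is running, incoming connections are actually completing with fast open; TcpExtTCPFastOpenPassiveFail counts attempts that fell back to a normal handshake.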
I get the part about improving transfers for clients with TCP fastopen, but since not all nodes can support it, this could centralize traffic to an island of nodes that support it and have it enabled.
Yes and no. Nodes that support fast open will win slightly more races. But so will those closer to the customer.
Ultimately, making nodes faster is never a problem.