Upload Failed : 67440 16.10%
Upload Canceled : 560 00.13%
Upload Successful : 350785 83.76%
I agree but hopefully it will be fixed soon.
I still have 95/96% success on my Windows nodes (FTTH, 1 Gb).
Same here on Synology, where it also doesn’t work. It is a little lower than before, but not by much.
I’m seeing a lot of fails on my Docker node. Any ideas why this could be the case?
TCPFastOpenActiveFail: 9
TCPFastOpenPassive: 10893
TCPFastOpenPassiveFail: 14149
TCPFastOpenCookieReqd: 27
Error: unknown flag: --sysctl
Usage:
storagenode run [flags]
Testing without --privileged. So far it is working, as long as you add the
--sysctl net.ipv4.tcp_fastopen=3
parameter and make sure you put it before the storjlabs/storagenode:latest image name.
Of course it should be before the image name, since this is a Docker flag, not a storagenode one.
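To make the placement concrete, here is a minimal sketch (container name and restart policy are placeholders, not from the posts above):

```sh
# Minimal sketch: --sysctl is a docker run option, so it must appear before the
# image name; anything after the image name is passed to storagenode instead,
# which produces the "unknown flag: --sysctl" error quoted above.
docker run -d --name storagenode \
  --sysctl net.ipv4.tcp_fastopen=3 \
  storjlabs/storagenode:latest
```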
Last week, I shut down all my VPSes (except for one) and moved those nodes back into our DC.
Now all 105 nodes are sharing only 30 x /24 subnets in our DC, instead of 105 x /24 subnets in a mixed environment with subnets from our DC and the VPSes.
I’ve then pulled some TCPFastOpen stats from the hardware serving the 105 nodes:
| 2023-07-18T00:00:01 UTC | Count |
|---|---|
| TCPFastOpenActiveFail: | 699.394 |
| TCPFastOpenPassive: | 47.086.144 |
| TCPFastOpenPassiveFail: | 201.929 |
| TCPFastOpenListenOverflow: | 10.969 |
| TCPFastOpenCookieReqd: | 89.686 |

| 2023-07-19T00:00:01 UTC | Count |
|---|---|
| TCPFastOpenActiveFail: | 700.360 |
| TCPFastOpenPassive: | 49.989.756 |
| TCPFastOpenPassiveFail: | 201.959 |
| TCPFastOpenListenOverflow: | 10.969 |
| TCPFastOpenCookieReqd: | 90.509 |

| 24-hour stats | Count |
|---|---|
| TCPFastOpenActiveFail: | 966 |
| TCPFastOpenPassive: | 2.903.612 |
| TCPFastOpenPassiveFail: | 30 |
| TCPFastOpenListenOverflow: | 0 |
| TCPFastOpenCookieReqd: | 823 |
On another server with 501 nodes that are routed through just one VPS from Ionos (about 50% of those nodes are vetted), the stats are:
| 2023-07-18T00:00:01 UTC | Count |
|---|---|
| TCPFastOpenActiveFail: | 10.247 |
| TCPFastOpenPassive: | 10.811.641 |
| TCPFastOpenPassiveFail: | 222.281 |
| TCPFastOpenCookieReqd: | 6.613 |

| 2023-07-19T00:00:01 UTC | Count |
|---|---|
| TCPFastOpenActiveFail: | 10.297 |
| TCPFastOpenPassive: | 11.094.121 |
| TCPFastOpenPassiveFail: | 222.281 |
| TCPFastOpenCookieReqd: | 6.722 |

| 24-hour stats | Count |
|---|---|
| TCPFastOpenActiveFail: | 50 |
| TCPFastOpenPassive: | 282.480 |
| TCPFastOpenPassiveFail: | 0 |
| TCPFastOpenCookieReqd: | 109 |
Th3Van.dk
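For anyone wanting to reproduce these 24-hour figures, a small sketch (file names and paths are my own, hypothetical) that snapshots the counters daily and diffs two snapshots:

```sh
# Hypothetical daily snapshot (e.g. from cron at 00:00:01 UTC):
netstat -s | grep TCPFastOpen > /tmp/tfo_$(date -u +%F).txt

# Diff two snapshots to get per-counter deltas (dates are placeholders):
awk 'NR==FNR { prev[$1] = $2; next } { printf "%-28s %d\n", $1, $2 - prev[$1] }' \
  /tmp/tfo_2023-07-18.txt /tmp/tfo_2023-07-19.txt
```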
Can someone explain the meaning of these stats? What does each counter mean, and how should the values look if everything is working perfectly?
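Not from the thread, but for reference, my rough gloss of these counters as I understand the Linux kernel’s TcpExt statistics:

```sh
# Rough meaning of the TcpExt TFO counters (my summary; treat as approximate):
#   TCPFastOpenActive          outgoing connections that completed TFO successfully
#   TCPFastOpenActiveFail      outgoing TFO attempts that failed
#   TCPFastOpenPassive         incoming connections accepted with TFO
#   TCPFastOpenPassiveFail     incoming TFO attempts that failed
#   TCPFastOpenListenOverflow  incoming TFO requests dropped because the
#                              listener's TFO queue limit was exceeded
#   TCPFastOpenCookieReqd      incoming SYNs that requested a TFO cookie
# In a healthy setup the Fail/Overflow counters should stay near zero relative
# to TCPFastOpenPassive.
netstat -s | grep TCPFastOpen
```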
I have net.ipv4.tcp_fastopen=3 set on the host and in the container, and the storagenode log reports success for the Fast Open capability. However, I do not see those 4 values in netstat. Is there something else I missed?
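One possible explanation (my assumption): netstat -s only prints counters that are non-zero, so the TFO lines stay hidden until the first Fast Open event actually happens. nstat from iproute2 can print them regardless:

```sh
# Assumption: netstat -s omits zero-valued TcpExt counters. nstat -az prints
# all known counters including zeros, so the TFO lines should always show up.
nstat -az | grep -i FastOpen
```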
Same here with Ubuntu 22.
Do you use the Docker network? I have Fast Open working on Synology. I use the host network (on a Synology NAS), which is the only thing different about my setup from all the instructions online for getting it going.
Worth a try, but I’d have to look into how to run multiple nodes that way without port conflicts.
Yeah, I only run a single node, but in the configuration file you should be able to pick a different port.
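For multiple nodes on the host network, a sketch of what that could look like (port numbers are placeholders; the flags match the ones used later in this thread, and the same keys can go in each node’s config.yaml instead):

```sh
# Hypothetical second node sharing the host network: every listening port must
# be unique per node (identity/storage mounts and -e variables omitted).
docker run -d --network host --name storagenode2 storjlabs/storagenode \
  --server.address=":28968" \
  --console.address=":14003" \
  --server.private-address="127.0.0.1:7779"
```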
@BrightSilence
Maybe a stupid question, but I just realized: wouldn’t QUIC be faster than TCP even with Fast Open? I.e., why do we bother with optimizing the TCP path when UDP is already implemented and supported?
Pretty sure the gang had some dabblings with QUIC and then dropped it for some reason…
So are you saying to change bridge mode to host mode in Docker on the Syno? Can you post a step-by-step guide on how to modify the configs? My head is spinning from so many ports and Dockers.
1. Set up the NAS following the instructions for enabling TCP Fast Open.
2. When you build your node container, change the default port to a unique port, and use the HOST network.
3. Open the ports on the host network router (VLANs are also possible; Synology NAS typically have more than one gigabit interface, and other LAN IPs can be assigned and used).
4. Launch your Docker containers with the Fast Open flags, and enable Fast Open in config.yaml (I get an error on startup saying the config value keys don’t exist, but Fast Open is verified working).

That’s the best I can do. I don’t have definitive proof that this will work for everyone, but on my 1621+ it works.
It was interesting to read all these details: that’s why I’m running Storj, to get hands-on experience with these advanced topics.
So, based on the info above, on @snorkel’s messages in another thread,
and on UDP Buffer Sizes · quic-go/quic-go Wiki · GitHub,
here are my commands, which run Storj nodes on my Synology DS1621xs+ (DSM 7.2) with 64 GB RAM with TFO enabled:
Run as root:
echo "net.ipv4.tcp_fastopen=3" >> /etc/sysctl.conf   # persist across reboots (3 = TFO enabled for both client and server)
echo "net.core.rmem_max=2500000" >> /etc/sysctl.conf # persist the larger UDP receive buffer for QUIC
sysctl -w net.core.rmem_max=2500000                  # apply immediately, without a reboot
sysctl -w net.ipv4.tcp_fastopen=3                    # apply immediately, without a reboot
docker run -d --restart unless-stopped --stop-timeout 300 \
--network host \
-e WALLET="0x0*****" \
-e EMAIL="slav***" \
-e ADDRESS="****.com:30002" \
-e STORAGE="12TB" \
--mount type=bind,source="/volume4/storj2/Identity/storagenode",destination=/app/identity \
--mount type=bind,source="/volume4/storj2/data/",destination=/app/config \
--name storj2 \
storjlabs/storagenode \
--log.level info \
--server.address=":30002" \
--console.address=":30102" \
--server.private-address="127.0.0.1:141"
And I see in the log:
2023-08-01T00:02:17Z INFO server existing kernel support for server-side tcp fast open detected {"process": "storagenode"}
and after 20 minutes of node uptime, here is the output of netstat:
netstat -s | grep TCPFast
TCPFastOpenPassive: 300
TCPFastOpenCookieReqd: 261684
The Synology itself has been running for a few days; that’s why the TCPFastOpenCookieReqd value is much larger.
I’m still not sure how the TFO settings will survive a restart or OS upgrade. We’ll see.
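If a DSM upgrade were to reset /etc/sysctl.conf, one untested option (my assumption, not something verified in this thread) would be a Task Scheduler script triggered at boot that re-applies the values:

```sh
#!/bin/sh
# Hypothetical DSM Task Scheduler task ("Triggered" on boot-up, run as root):
# re-apply the sysctls in case an upgrade reset /etc/sysctl.conf.
sysctl -w net.ipv4.tcp_fastopen=3
sysctl -w net.core.rmem_max=2500000
```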
On my Synology, the default value of net.core.rmem_max is 212992. I’m still not sure what value is best; I went with the 2500000 from the quic-go wiki (roughly 12x the default). I’ve read that too big a value can slow things down, too.
I’m not normally using server.private-address, but because I use the HOST network, I have to specify that port on each node with a unique value to avoid port conflicts. It is used for the CLI dashboard and some commands, like calling a graceful exit on the correct node.
Reading about the Docker HOST network, people report that it’s about 20% faster than BRIDGE.
It’s interesting that Synology allows the use of ports lower than 1024.
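Presumably that works because the storagenode process inside the container runs as root; binding below 1024 normally needs root or the CAP_NET_BIND_SERVICE capability. A quick check from the host (ss is part of iproute2):

```sh
# Show which process is listening on the low private port used above (141):
ss -tlnp | grep ':141 '
```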