Please enable TCP fastopen on your storage nodes

I have the same problem; I asked the question, but no one answered me.

Do you have a working script to fetch such data?

Thanks and kind regards,

How exactly did you do the switching? I also have everything set up, with an Ubuntu server and a Debian server, and I am not receiving any TCP Fast Open packets yet.


Change this:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    ...
    --name storagenode storjlabs/storagenode:latest

to this:

docker run -d --restart unless-stopped --stop-timeout 300 \
    --network host \
    ...
    --name storagenode storjlabs/storagenode:latest \
    --server.address=":28967" \
    --server.private-address="127.0.0.1:7777" \
    --console.address="127.0.0.1:14002"

If you run multiple nodes on that server you have to update the ports accordingly.
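For example, a second node on the same host could look like this (just a sketch; the ports and the container name storagenode2 are placeholders, pick ones that are free on your machine):

docker run -d --restart unless-stopped --stop-timeout 300 \
    --network host \
    ...
    --name storagenode2 storjlabs/storagenode:latest \
    --server.address=":28968" \
    --server.private-address="127.0.0.1:7778" \
    --console.address="127.0.0.1:14003"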


I don’t know what server.private-address is used for. It just needed a port to work. Does anyone know?

It is used for the CLI dashboard and other commands from this list:

storagenode --help

So, if you switched your container network to use the host network and you have more than one node, you must give each node unique ports, see How to add an additional drive? - Storj Docs

If you changed this internal port, then to issue any storagenode command you need to provide the new port as an argument, e.g. --server.private-address=:7779:

docker exec -it storagenode ./storagenode exit-status --server.private-address=:7779 --config-dir config --identity-dir identity

or

docker exec -it storagenode ./dashboard.sh --server.private-address=:7779

Has anyone managed to enable it on TrueNAS SCALE ‘Cobia’?
Despite enabling it via sysctl, the container does not detect TCP Fast Open as active.

For TrueNAS SCALE it’s likely not possible, unless iX Systems configures the underlying Kubernetes to allow unsafe sysctls (Using sysctls in a Kubernetes Cluster | Kubernetes); they would then also need to update the chart to set net.ipv4.tcp_fastopen=3 in the pod’s context.

The other way would be to specify a host network for the pod. This can be done in the chart or manually at runtime in the Deployment resource; the first option would have to be implemented by iX Systems, the second you may try yourself with kubectl, but it will not be permanent. You also need to enable TCP Fast Open in the system configuration.
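A rough sketch of the non-permanent kubectl route, assuming the app’s Deployment is named storagenode in the ix-storagenode namespace (check the real names with k3s kubectl get deployments -A; TrueNAS SCALE exposes kubectl through k3s):

sysctl -w net.ipv4.tcp_fastopen=3

k3s kubectl patch deployment storagenode -n ix-storagenode --type merge \
    -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'

The first command enables TFO on the host; the patch puts the pod into the host’s network namespace so it picks that setting up. The patch is lost on the next app upgrade or redeploy.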

So for me the easiest solution would be to stop the app, move the contents of the data and identity volumes to your own custom paths, remove the app, and run your node with plain docker instead.
The migration is needed because removing the app also removes its volumes, so the node must be stopped while you move the data and identity to your own paths.
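In case it helps, a sketch of what the plain docker run might look like afterwards (the /mnt/tank/storj-node paths are only an example destination, the bind-mount targets follow the standard storagenode setup, and ... stands for the usual wallet/email/address/storage options):

docker run -d --restart unless-stopped --stop-timeout 300 \
    --network host \
    --mount type=bind,source=/mnt/tank/storj-node/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/tank/storj-node/data,destination=/app/config \
    ...
    --name storagenode storjlabs/storagenode:latest \
    --server.address=":28967" \
    --console.address="127.0.0.1:14002"

With the host network the container shares the host’s network namespace, so TCP Fast Open comes from the host’s net.ipv4.tcp_fastopen=3 setting and no --sysctl flag is needed here.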


Fortunately, iX has recommended creating a dedicated dataset from the beginning :slight_smile:

I have my own /mnt/tank/storj-node.
(My node is running on a home server with an FTTH connection. I win almost all races already.)


I got it working on Ubuntu 22. You do not need

--network host

you just need

sysctl -w net.ipv4.tcp_fastopen=3

and this in your docker run

--sysctl net.ipv4.tcp_fastopen=3
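For reference, a sketch of where that flag fits into the docker run from earlier in the thread (same placeholder ports and elided options as above):

docker run -d --restart unless-stopped --stop-timeout 300 \
    --sysctl net.ipv4.tcp_fastopen=3 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    ...
    --name storagenode storjlabs/storagenode:latest

Also note that sysctl -w on the host does not survive a reboot; putting net.ipv4.tcp_fastopen=3 into /etc/sysctl.conf (or a file under /etc/sysctl.d/) makes it permanent.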

To verify that it is working, you can run netstat directly inside the container:

docker exec -it storagenode bash
apt update
apt install net-tools -y
netstat -s | grep -i tcpfast

This will not show up on your main OS netstat. For that you can use --network host.


This will not show up on your main OS netstat.

How do I check if it is working correctly?

docker exec -it storagenode bash
apt update
apt install net-tools -y
netstat -s | grep -i tcpfast

From my post


Has anyone managed to quantify the traffic difference with and without TCP Fast Open?

Hard to say, since we don’t know whether most of us even have it turned on. My stats look roughly the same.

They say TCP Fast Open is not used at large scale by clients, and neither is QUIC. I wonder whether all these add-ons have any point if the clients don’t use them. They just add third-party snippets that can be hacked and used in supply-chain attacks.

I am sometimes a bit on the critical side; however, as for this, despite a significant increase in attack surface, I believe it is a move in the right direction. Maybe not all clients are using the new features (immediately), but new ones might choose Storj specifically because of those add-ons. I agree with you that for node operators it is a significant increase in risk exposure, particularly if you follow the advised route and run containers on the host network.

Why is “network host” mode riskier than “network bridge” mode?

I am far from considering myself a security expert. The way I understand it, if you are running on a container bridge you are isolating your application to that interface, which is one of the selling points of containers; by contrast, when running on the host network, every other port open on your host is also reachable by that application. I am not paranoid, but isn’t that significantly riskier? What do you think?

Because when you use the host network there is no network isolation anymore. If not all ports are closed on your host, or you expose some internal ports, they can be used for an attack.

When you use a bridge network, it’s similar to using NAT on your router: your local network becomes isolated from the internet. The host network is the same as a DMZ on your router.
So if you do not use firewalls, your host can be compromised, or at least someone could try to harm your node (if you forwarded internal ports, or use a DMZ/have working IPv6 without a firewall).
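If you do go the host-network route, it is worth checking what else is listening on the host before exposing it, for example (standard iproute2 tooling):

ss -tlnp
ss -ulnp

and then firewalling anything that should not be reachable from the internet.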