Very good advice, indeed. And that’s the first thing I did before even creating any cloud service. Best practice for all Cloud platforms.
But there should be plenty of room: Oracle offers 10 TB of free outbound data transfer each month.
Oh, I thought you used Oracle’s storage. My bad.
But even so, more than half the nodes are routed through VPSes. So half the Storj network is dependent on those two to four VPS providers.
They use those VPSes but don’t depend on them. If that route disappears, it’s trivial to fire up another in a matter of minutes.
Look, Storj satellites run on VPSes too. What happens if Google has an outage?
It’s no different from half the Internet traffic going through undersea cables. If a disgruntled whale chomps on one, there will be some inconvenience, but nothing alternative routes won’t solve.
I don’t want to pile on, but I can’t not comment on this: it is demonstrably not true. I see no difference in performance between nodes connected “directly” and via the Oracle datacenter. I can’t directly compare the same node — if it had direct access, I would not need Oracle to begin with. What I do know is that nodes started around the same time now store about the same amount of data and earn about the same amount of money.
I can also see how routing through Oracle could be faster, due to better routing from the Oracle datacenter than from your ISP, or even ISP shenanigans/traffic shaping. They cannot shape VPN traffic without enraging their customers.
This might be counterintuitive, yes. The thing is, if the VPS sits in a data center with good peering, then path 1 might actually be faster than a direct connection between two consumer ISPs, even if it’s nominally longer. The reason is that links are not equal. Consumer ISPs mostly care about good peering to large networks: they don’t want to pay for peering to just another consumer ISP, and they optimize their routing so that latency to well-known data centers is low. Because note: what’s the biggest use of consumer-to-consumer links in modern wide-area networks? Bandwidth-hogging P2P networks with data of questionable legality. If ISPs can trade off latency on those links for lower latency to mainstream media, streaming, gaming, etc., that results in a better experience for regular users. And so deprioritizing traffic is a thing.
By the way @arrogantrabbit , you mentioned earlier that it’s not mandatory to use host networking (network_mode: host) in Docker for the WireGuard tunnel to work with Storj. Can you explain a bit how to do that properly?
Does it require specific routing on the Docker host?
Now that everything is working well, I’d like to try that.
@arrogantrabbit , I know you said that QUIC errors can be ignored, but I don’t want to lose potential traffic.
So what changed since the last time everything worked perfectly?
I tried to put Wireguard client in a Docker container. Why? Because I host other Docker containers on my host and don’t need their traffic to go through the Wireguard tunnel.
It works, as I see traffic in the logs. And I can confirm the traffic goes through the WireGuard tunnel, since curl ipinfo.io from the Storj container returns the WireGuard server’s public IP.
But since I tried this new setup, I get a QUIC error (the Dashboard also shows “Misconfigured”):

```
2026-01-15T04:24:58Z WARN contact:service Your node is still considered to be online but encountered an error. {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}
```
What is really weird is that even after I restored the previous setup, the QUIC issue persists. So I can’t even go back to a fully operational setup.
Bonus question: how can I access the Dashboard when using the WireGuard container? I’ve added port redirection on the WireGuard container, as shown above, but it doesn’t work.
There is nothing to lose, see my first comment in this thread.
QUIC connectivity is unstable. Mitigation is described in the same comment. Did you try it?
This is where Linux wastes your time. On FreeBSD with jails it just works: jails can use separate FIBs, so the WireGuard default route applies only to tunnel-sourced traffic while LAN routes remain symmetric. Linux containers have a single routing table per network namespace, so AllowedIPs = 0.0.0.0/0 captures reply traffic to the LAN and breaks symmetry. That’s why it does not work. To fix it, you need to manually add explicit policy routing with a separate routing table for WireGuard inside the container.
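For the full-tunnel case, one common variant is simpler than a full separate table: since wg-quick already installs its own routing table and fwmark rules for AllowedIPs = 0.0.0.0/0, it can be enough to add an exception route for the LAN in the config’s PostUp hook. A minimal sketch, assuming the container’s LAN-facing interface is eth0 on a Docker bridge with gateway 172.18.0.1, a home LAN of 192.168.1.0/24, and placeholder keys and addresses — all of these are assumptions to adjust to your setup:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
# Send replies to LAN clients (e.g. dashboard visitors) back out the
# Docker bridge instead of into the tunnel.
# Subnet and gateway below are assumptions; adjust to your network.
PostUp = ip route add 192.168.1.0/24 via 172.18.0.1 dev eth0
PostDown = ip route del 192.168.1.0/24 via 172.18.0.1 dev eth0

[Peer]
PublicKey = <server-public-key>
Endpoint = <vps-public-ip>:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

wg-quick still handles the default-route side on its own; the extra route only exempts LAN-destined traffic from the tunnel.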
Side note: you absolutely do not need to send all traffic through the tunnel (thus avoiding this LAN routing issue altogether). Instead of 0.0.0.0/0 in AllowedIPs you would use the server’s WireGuard address/32. But then you would need an alternative way to run DDNS (e.g. from yet another container connected full-tunnel to the same server), or use a static external IP.
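The split-tunnel variant then only needs a narrower AllowedIPs in the peer section. A sketch with placeholder values (10.8.0.1 is assumed to be the server’s tunnel address; keys and endpoint are placeholders):

```ini
[Peer]
PublicKey = <server-public-key>
Endpoint = <vps-public-ip>:51820
# Only traffic to the server's tunnel address uses WireGuard;
# everything else (LAN, other containers) keeps normal routing.
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25
```

The VPS side then has to forward the node’s port from its public IP into the tunnel (e.g. with a DNAT rule), so inbound Storj traffic still arrives over WireGuard.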
Thanks.
Yes, I’ve already removed it. I just put what you described in your tutorial (very helpful!).
Thanks. At this point I will take the “static IP” option you mentioned (I’m using an ephemeral IP with OCI, but it should live for the lifetime of the instance).
So if I understand you correctly, I would just need to use this config file for the WireGuard client container (I commented out the old version):
So, I don’t understand: why do you want to make the private information from your dashboard public?
If you do want to access your dashboard in the local network, then you don’t need to route it via the tunnel and can use the usual http://local-IP:14002 from any device in your network.
If you want to access it outside of your local network then you can use this method:
That’s not what I want. I want to access my dashboard from my local network.
For sure, it works if I don’t use the WireGuard tunnel. But with the setup I described previously, all traffic for the Storj container goes through the tunnel, so I can’t access the Dashboard from my local network.
What I am asking is: what should I do to make the Storj container’s traffic go through the tunnel, except for the Dashboard traffic?
As you can see in my docker-compose file above, the storj5 container (which is at home) is configured to use the wireguard container’s network (network_mode: "service:wireguard"). But when I do so, I can no longer access the Dashboard GUI from my local network.
When I expose the storj5 ports with the following parameters, I get an error from Docker:

```yaml
ports:
  - "14002:14002"
```

```
Error response from daemon: conflicting options: port publishing and the container type network mode
```
That’s why I tried to set this parameter on the wireguard container instead (since the storj5 container uses the wireguard container’s network). But that doesn’t work either: Docker accepts it, but the Dashboard is still not accessible from my local network.
Hence my question: how can I fix this? I.e., how can I access storj5’s dashboard while its traffic goes through the wireguard container?
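For what it’s worth, Docker does require published ports to be declared on the container that owns the network namespace, so moving the mapping to the wireguard service is the right direction; the missing piece is usually that the replies to LAN clients are then routed into the tunnel, which is the policy-routing problem discussed earlier in this thread. A sketch of the compose layout (image names, paths, and ports are assumptions):

```yaml
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - "14002:14002"       # dashboard, published on the namespace owner
      - "28967:28967/udp"   # node port, if forwarded through the tunnel
    volumes:
      - ./wg0.conf:/config/wg_confs/wg0.conf   # path is an assumption

  storj5:
    image: storjlabs/storagenode:latest
    network_mode: "service:wireguard"
    # no "ports:" here — Docker rejects port publishing together with
    # a service-type network_mode
```

With this layout, http://local-IP:14002 reaches the dashboard only once the routing inside the wireguard container sends LAN-bound replies back out the bridge rather than through the tunnel.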
I hope it’s clearer.
Thanks for your help!
PS: in case it’s not clear, the wireguard container in my docker-compose is NOT the WireGuard server. It is the WireGuard client, which connects to the WireGuard server (hosted on Oracle Cloud) to create the tunnel.
PPS: I want to put the WireGuard client in a Docker container instead of on my host machine because I host other containers on this machine and don’t need/want their traffic to go through the tunnel.
In theory, yes: this port is closed for my instance on Oracle Cloud, so it’s not accessible on the public IP (which is good).
Basically, my need is to use the WireGuard tunnel for the Storj container only (not for the other containers on my machine), while still being able to access the Dashboard. I suppose there is a way to do that.
I’ve just discovered the “multinode dashboard”. Would it work with this setup?