Wireguard + VPS: need help for QUIC

Very good advice, indeed. And that’s the first thing I did before even creating any cloud service. Best practice for all cloud platforms.
But there should be plenty of room: Oracle offers 10 TB of free outbound data transfer each month.

Oh, I thought you used Oracle’s storage. My bad.
But even so, more than half the nodes are routed through VPSes. So half the Storj network is dependent on those few VPS services.


They use those VPSes but don’t depend on them. If that route disappears, it’s trivial to fire up another in a matter of minutes.

Look, Storj satellites run on VPSes. What happens if Google has an outage?

It’s no different from half the Internet’s traffic going through undersea cables. If a disgruntled whale chomps on one, there will be some inconvenience, but nothing alternative routes won’t solve.


It’s a business, not a project.
As SNOs, we are constantly referred to as Byzantine. Using a VPN just reinforces that sentiment.


I don’t want to pile on, but I can’t not comment on this: it is demonstrably not true. I see no difference in performance between nodes connected directly and nodes connected via the Oracle datacenter. I can’t directly compare the same node: if it had direct access, I would not need Oracle to begin with. What I do know is that nodes started around the same time now store about the same amount of data and earn about the same amount of money.

I can also see how routing through Oracle can be faster, due to better routing from the Oracle datacenter vs. your ISP, or maybe even ISP shenanigans/traffic shaping. They cannot shape VPN traffic without enraging their customers.

I can’t see routing through a VPS winning more races than a direct connection. Maybe I don’t see the whole picture, but:

  1. Storage node > ISP > VPS > Client > VPS > ISP > Storage node.
  2. Storage node > ISP > Client > ISP > Storage node.

I don’t see how 1 is better than 2.
(ISP meaning all the switches and routers of your ISP that connect you to the internet.)

This might be counterintuitive, yes. The thing is, if the VPS sits in a data center with good peering, then path 1 might actually be faster than a direct connection between two consumer ISPs, even if nominally it’s longer. The reason: links are not equal. Consumer ISPs mostly care about good peering to large networks: they don’t want to pay for peering to just another consumer ISP, and they will optimize their routing so that latency to well-known data centers is low. Note: what’s the biggest use of consumer-to-consumer links in modern wide-area networks? Bandwidth-hogging P2P networks with data of questionable legality. If ISPs can trade off latency on those links for latency to mainstream media, streaming, gaming, etc., that results in a better experience for regular users. And so deprioritizing that traffic is a thing.


By the way @arrogantrabbit, you mentioned earlier that it’s not mandatory to use network_mode: host in Docker for the WireGuard tunnel to work with Storj. Can you explain a little how to do that properly?

Does it require specific routing on the Docker host?

Now that everything is working well, I’d like to try that.

I have some issues with QUIC.

@arrogantrabbit, I know you said that errors with QUIC can be ignored, but I don’t want to lose potential traffic.

So what changed since the last time everything worked perfectly?
I tried to put the WireGuard client in a Docker container. Why? Because I host other Docker containers on my host and don’t want their traffic to go through the WireGuard tunnel.

So here is what I tried:

  1. Stop the wireguard client on the host machine
  2. Update docker-compose.yaml.
    New version:
services:

  storj5:
    build: .
    container_name: storj5
    restart: unless-stopped
    stop_grace_period: 300s
    user: "${UID}:${GID}"
    network_mode: "service:wireguard"
    environment:
      WALLET: "${WALLET}"
      EMAIL: "${EMAIL}"
      ADDRESS: "<Wireguard server public IP>:28967"
      STORAGE: "500GB"
    volumes:
      - /mnt/data/storj5_identity:/app/identity
      - /mnt/data/storj5_data:/app/config

  wireguard:
    image: linuxserver/wireguard:latest
    container_name: wireguard-client
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:  
      - "14002:14002"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Montreal
    volumes:
      - ./config:/config
      - /lib/modules:/lib/modules:ro
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
Previous version of docker-compose.yaml:
services:

  storj5:
    build: .
    container_name: storj5
    restart: unless-stopped
    stop_grace_period: 300s
    user: "${UID}:${GID}"
    network_mode: "host"
    environment:
      WALLET: "${WALLET}"
      EMAIL: "${EMAIL}"
      ADDRESS: "<Wireguard server public IP>:28967"
      STORAGE: "500GB"
    volumes:
      - /mnt/data/storj5_identity:/app/identity
      - /mnt/data/storj5_data:/app/config

It works: I see traffic in the logs. And I can confirm traffic goes through the WireGuard tunnel, since querying ipinfo.io from the Storj container returns the WireGuard server’s public IP.

But since I’ve tried this new setup, I get a QUIC error (the dashboard also shows “Misconfigured”):
2026-01-15T04:24:58Z WARN contact:service Your node is still considered to be online but encountered an error. {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Error": "contact: failed to ping storage node using QUIC, your node indicated error code: 0, rpc: quic: timeout: no recent network activity"}

What is really weird is that when I put the previous setup back, I still have the QUIC issue :confused:
So I can’t even go back to a fully operational setup.


Bonus question: how can I access the Dashboard when using the WireGuard container? I’ve added the port mapping on the WireGuard container, as shown above, but it doesn’t work.

There is nothing to lose, see my first comment in this thread.

QUIC connectivity is unstable. Mitigation is described in the same comment. Did you try it?

This is where Linux wastes your time. On FreeBSD it just works: jails can use separate FIBs, so the WireGuard default route applies only to tunnel-sourced traffic while LAN routes remain symmetric. A Linux container namespace, by default, resolves everything through a single routing table, so AllowedIPs = 0.0.0.0/0 captures reply traffic to the LAN and breaks symmetry. That’s why it does not work. To fix it, you need to manually add explicit policy routing, with a separate routing table for WireGuard, inside the container.

You can look up ip-rule(8) and ip-route(8), and also read this: Routing & Network Namespaces - WireGuard
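As a minimal sketch, that policy routing could live in the client’s wg-quick config. Everything here is illustrative, not tested against your setup: 10.10.0.2 as the client’s tunnel address, 200 as the table number, placeholder keys and endpoint; substitute your own values.

```ini
[Interface]
PrivateKey = <client private key>
Address = 10.10.0.2/32
# Install the routes derived from AllowedIPs into table 200 instead of main:
Table = 200
# Only packets sourced from the tunnel address consult that table,
# so replies to LAN clients still follow the main routing table:
PostUp = ip rule add from 10.10.0.2 lookup 200
PostDown = ip rule del from 10.10.0.2 lookup 200

[Peer]
PublicKey = <server public key>
AllowedIPs = 0.0.0.0/0
Endpoint = <server public IP>:51820
PersistentKeepalive = 25
```

With Table set explicitly, wg-quick skips its default full-tunnel rule tricks, and the ip rule decides which traffic uses the tunnel’s routing table.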

Side note: you absolutely do not need to send all traffic through the tunnel (which avoids this LAN routing issue altogether). Instead of 0.0.0.0/0 you would use the server’s WireGuard address/32 there. But then you would need an alternative way to use DDNS (e.g. yet another container connected full-tunnel to the same server), or a static external IP.

Those should be your node’s ports, not the dashboard’s. The dashboard should remain local.

Yes. Unfortunately, still the same warnings in the logs and still the “Misconfigured” mention in the dashboard.

Thanks.
I don’t understand. What do you recommend in order to access the Dashboard?

http://your-nodes-local-ip:14002/


Just noticed this:

If you still have this — remove it. Especially on Oracle.

Only undo the rules you created yourself in PreUp.

Described here: Wireguard + VPS: need help for QUIC - #50 by arrogantrabbit

Your ports: mapping in the compose file on the wireguard container is correct. But you also need to configure policy routing, or not use a full tunnel.

@Roberto

Thanks.
That’s what I did in my docker-compose.yaml.
Since the Storj node’s traffic goes through the wireguard container, I did it on that container.


@arrogantrabbit

Thanks.
Yes, I’ve already removed it. I just put what you described in your tutorial (very helpful!).


Thanks. At this point, I will take the “static IP” option you mention (I’m using an ephemeral IP with OCI, but it should live for the lifetime of the instance).
So if I understand you correctly, I would just need to use this config file for the WireGuard client container (I commented out the old value):

[Interface]
PrivateKey = <Wireguard client private key>
Address = 10.10.0.2
        
[Peer]
PublicKey = <Wireguard server public key>
#AllowedIPs = 0.0.0.0/0
AllowedIPs = 10.10.0.1/32
Endpoint = 148.116.81.49:51820
PersistentKeepalive = 25

Is that right?


But in this more recent thread, @Alexey also mentions that this should have a minor impact:

So, I don’t understand: why do you want to make the private information from your dashboard public?
If you just want to access your dashboard on the local network, then you don’t need to route it via the tunnel and can use the usual http://local-IP:14002 from any device on your network.
If you want to access it outside of your local network then you can use this method:

Or

on your other devices.

That’s not what I want. I want to access my dashboard from my local network.

For sure, it works if I don’t use Wireguard tunnel. But with the setup I described previously, all traffic for the Storj container goes through the tunnel, so I can’t access the Dashboard from my local network:

services:

  storj5:
    build: .
    container_name: storj5
    restart: unless-stopped
    stop_grace_period: 300s
    user: "${UID}:${GID}"
    network_mode: "service:wireguard"
    environment:
      WALLET: "${WALLET}"
      EMAIL: "${EMAIL}"
      ADDRESS: "<Wireguard server public IP>:28967"
      STORAGE: "500GB"
    volumes:
      - /mnt/data/storj5_identity:/app/identity
      - /mnt/data/storj5_data:/app/config

  wireguard:
    image: linuxserver/wireguard:latest
    container_name: wireguard-client
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:  
      - "14002:14002"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Montreal
    volumes:
      - ./config:/config
      - /lib/modules:/lib/modules:ro
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped

What I am asking is: what should I do to make the Storj container’s traffic go through the tunnel, except for the Dashboard traffic?
As you can see in my docker-compose file above, the storj5 container (which is at home) is configured to use the wireguard container’s network (network_mode: "service:wireguard"). But when I do so, I can’t access the Dashboard GUI anymore (from my local network).
When I expose the storj5 ports with the following parameters, I get an error from Docker:

    ports:  
      - "14002:14002"
 ✘ Container storj5   Error response from daemon: conflicting options: port publishing and the container type network mode

That’s why I tried to set this parameter to the wireguard container (since storj5 container uses wireguard container network). But it doesn’t work either.
When I do so, Docker is fine but the Dashboard is not accessible from my local network.

Hence my question: how can I fix this? i.e. How can I access the storj5’s dashboard while its traffic goes through the wireguard container?

I hope it’s clearer.

Thanks for your help!


PS: in case it’s not clear, the wireguard container in my docker-compose is NOT the WireGuard server. It is the WireGuard client, which connects to the WireGuard server (hosted on Oracle Cloud) in order to create the tunnel.

PPS: I want to run the WireGuard client in a Docker container instead of on my host machine because I host other containers on this machine and don’t need/want their traffic to go through the tunnel.

Of course. With a host network, port publishing is useless: there is no NAT, and every single container port is already a host port.

Since you route all traffic through the tunnel, 14002 will not be reachable on a local IP and port; it ends up published on the tunnel’s public IP instead.

In theory, yes, but this port is closed for my instance on Oracle Cloud, so it’s not accessible on the public IP (which is good).

Basically, I need the WireGuard tunnel to carry the Storj container’s traffic only (not that of the other containers on my machine), while still being able to access the Dashboard. I suppose there is a way to do that.
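One thing that might be worth trying (a sketch, not something tested here): add a static route in the wireguard container so replies to your LAN go back out the Docker bridge gateway instead of into the tunnel. Here 192.168.1.0/24 is an assumed LAN subnet and 172.18.0.1 an assumed Docker bridge gateway; replace both with your actual values (you can see the gateway with ip route inside the container).

```ini
[Interface]
PrivateKey = <client private key>
Address = 10.10.0.2
# Send replies destined for the LAN back out the Docker bridge, not the tunnel.
# Assumed subnet and gateway; adjust to your network:
PostUp = ip route add 192.168.1.0/24 via 172.18.0.1
PostDown = ip route del 192.168.1.0/24 via 172.18.0.1

[Peer]
PublicKey = <server public key>
AllowedIPs = 0.0.0.0/0
Endpoint = 148.116.81.49:51820
PersistentKeepalive = 25
```

If this works, the 14002:14002 mapping on the wireguard container should make the dashboard reachable at http://docker-host-LAN-IP:14002 while node traffic still flows through the tunnel, since wg-quick’s suppress_prefixlength rule lets the more specific LAN route win over the tunnel’s default route.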

I’ve just discovered the “multinode dashboard”. Would it work for this setup?