Wireguard + VPS: need help for QUIC

Good find.

You may need to use network_mode: "host", or specify the wg0 address in the docker ports section.
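
For example, in docker-compose that could look roughly like this (10.0.60.2 is just an example wg0 tunnel address; adjust to your setup):

services:
  storagenode:
    image: storjlabs/storagenode:latest
    # Option 1: share the host's network stack, no ports: section needed
    network_mode: "host"
    # Option 2 (instead of host mode): publish the node port only on the
    # WireGuard interface address
    # ports:
    #   - "10.0.60.2:28967:28967/tcp"
    #   - "10.0.60.2:28967:28967/udp"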

Thanks for your answer.
Before continuing to play with iptables and other networking config, can you tell me how I can “snapshot” the current routing rules and restore them in case I mess up?

Plot thickens :slight_smile:

When you are making changes with iptables, don’t make them persistent. Then rebooting the machine will restore the previous configuration. Make them persistent once everything works.
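
If you still want an explicit snapshot before experimenting, something along these lines should do (the file names are arbitrary):

ip route show > /tmp/routes.before         # human-readable copy of the routing table
ip rule show > /tmp/rules.before           # policy routing rules (wg-quick adds some)
sudo iptables-save > /tmp/iptables.before  # full firewall state
# to roll the firewall back later:
sudo iptables-restore < /tmp/iptables.before
# routes and rules are easiest to restore by rebooting or by wg-quick down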

Where did it come from? This disables routing table updates by wireguard.

Right, this would be expected because the routing table wasn’t being modified, so the rest is irrelevant.

That question would be applicable if you used firewalld. Disregard.

So, since you use docker, it introduces another can of worms. You could consider switching to podman to drastically reduce resource usage and complexity.

Good.

On your phone disconnect from WiFi, connect to Cellular, and access the node address:port from the browser at http.

But I think we almost know what the problem is already: you have another subnet made by docker, and wireguard had routing disabled.

So, this is the plan:

  1. From now on, stop the node, and don’t start it until everything works. Instead, start a local http server on the same port:
    docker run --rm --network host python:3-alpine \
        sh -c "echo 'It works!' > index.html && python -m http.server 28967"
    
  2. as @RecklessD indicated, run the container in host networking mode (it’s not required, it can totally work with port forwarding, but host mode is simpler, and we want to get this working with the fewest possible complications and moving parts)
  3. Remove everything from your wireguard config files that is not present in my tutorial. MTU, DNS, table, etc. Everything. Regardless of what the tool generated for you. Leave only keys and addresses.
  4. Start the server and client, and make sure they can ping each other on their respective Wireguard interfaces.
  5. Stop the server and client, add iptables PreUp/PostDown rules.
  6. Start the server
  7. Start the client
  8. Make sure pings still work.
  9. Make sure the client AllowedIPs are 0.0.0.0/0. This will send all traffic from your Raspberry Pi via the tunnel. Make sure it does (curl ipinfo.io should show the server’s IP).
  10. At this point, with the http server running on the storj port:
  11. curl the http server from the client, ensure it works.
  12. curl from the wireguard server, ensure it works.
  13. access the server from your phone’s data connection, not wifi, using the external IP of the wireguard server and the storj port in the browser, via HTTP. If that works, we are done: stop the http server and start the node (in host mode).

If some step does not work, provide the output of sudo iptables-save (it prints the config to the console) from both the client and the server, with wireguard running on both.

Thank you.

I’m going to follow your plan, but first, can you confirm a little detail (I’m a bit lost after reading all the suggestions and your tutorial)?

  • What “AllowedIPs” should I set in the wireguard client and wireguard server configs?
    Your tutorial suggests AllowedIPs = 10.0.60.2/32 for the server and AllowedIPs = 10.0.60.1/32 for the client. Though, in this thread, I saw you or someone else suggest 0.0.0.0/0.

Thanks!

AllowedIPs defines which destination prefixes are routed to a given WireGuard peer and how peers are selected for outbound packets.

WireGuard installs prefixes listed in AllowedIPs as routes via that peer, and for each outbound packet, WireGuard selects the peer whose AllowedIPs most specifically matches the destination.

So, if you have

AllowedIPs = 10.0.60.x/32

Then:

  • Only traffic destined for the peer’s tunnel address is routed into WireGuard.
  • All other traffic follows the host’s normal routing table.
  • Full-tunnel routing is not in effect, and therefore masquerading is unnecessary.

Whereas if you have

AllowedIPs = 0.0.0.0/0

then:
  • All IPv4 traffic is routed into WireGuard, aka full tunnel.
  • Masquerading is required on the server.
  • Client traffic cannot bypass the tunnel.
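
If you want to see the effect of either choice on the client, a quick check could look like this (assuming wg-quick and an interface named wg0):

sudo wg show wg0 allowed-ips   # which destination prefixes map to the peer
ip route show                  # split tunnel: only the peer's /32 appears here
ip rule show                   # full tunnel: wg-quick adds fwmark-based policy rules instead
curl ipinfo.io                 # full tunnel: should print the wireguard server's public IP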

From my blog post:

Note that we are setting AllowedIPs to 0.0.0.0/0. This is to ensure that all traffic originating from the node is routed through the VPN, particularly inadyn traffic that would update DDNS if you were using one. If you do not want this behavior, set only the server’s address in the AllowedIPs list. In that case, skip enabling masquerading in the steps below.

The goal is to route all node-originated traffic through the VPS, so this means full-tunnel routing.

The split-tunnel wireguard setup is discussed in the beginning to ensure the tunnel works, and then we switch to full tunnel.

For your setup you want

  • Client: AllowedIPs = 0.0.0.0/0 (send all traffic to the tunnel)
  • Server: AllowedIPs = <client tunnel IP>/32 (send through the tunnel only the traffic destined for the client at the other end of the tunnel)
  • Server: masquerading enabled (otherwise replies won’t come back to the client)
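
A minimal masquerading rule in the server’s [Interface] section could look like this (eth0 and the 10.10.0.0/24 tunnel subnet are assumptions; adjust to your setup):

# NAT everything arriving from the tunnel subnet before it leaves via the public interface
PostUp   = iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE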

Ok, so I scrupulously followed what you suggested and EVERYTHING worked. Until I started the real Storj container.

My issues are exactly like in this thread: Node System Clock out of sync .. but time is OK? - #22 by arrogantrabbit

This makes me think that my wireguard config is OK. But I really don’t know what’s happening.

Below are the details.


Setup

Wireguard server config:

Wireguard server config
[Interface]
PrivateKey = XXXX
Address = 10.10.0.1
ListenPort = 51820

# Allow WireGuard's own traffic to reach the server.
PreUp = iptables -I INPUT -p udp --dport 51820 -j ACCEPT
PostDown = iptables -D INPUT -p udp --dport 51820 -j ACCEPT

# Allow incoming Storj connections on the public interface BEFORE they are forwarded.
PreUp = iptables -I INPUT -p tcp --dport 28967 -j ACCEPT
PostDown = iptables -D INPUT -p tcp --dport 28967 -j ACCEPT
PreUp = iptables -I INPUT -p udp --dport 28967 -j ACCEPT
PostDown = iptables -D INPUT -p udp --dport 28967 -j ACCEPT

# Port forward incoming Storj traffic to the VPN client.
PreUp = iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 28967 -j DNAT --to-destination 10.10.0.2:28967
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 28967 -j DNAT --to-destination 10.10.0.2:28967
PreUp = iptables -t nat -I PREROUTING -i eth0 -p udp --dport 28967 -j DNAT --to-destination 10.10.0.2:28967
PostDown = iptables -t nat -D PREROUTING -i eth0 -p udp --dport 28967 -j DNAT --to-destination 10.10.0.2:28967

# Allow the now-forwarded traffic to pass from the public interface to the VPN interface.
PreUp = iptables -I FORWARD -i eth0 -o %i -m state --state RELATED,ESTABLISHED -j ACCEPT
PreUp = iptables -I FORWARD -i eth0 -o %i -p tcp -d 10.10.0.2 --dport 28967 -j ACCEPT
PreUp = iptables -I FORWARD -i eth0 -o %i -p udp -d 10.10.0.2 --dport 28967 -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -m state --state RELATED,ESTABLISHED -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -d 10.10.0.2 --dport 28967 -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p udp -d 10.10.0.2 --dport 28967 -j ACCEPT

# Allow outbound traffic from the VPN client out to the internet
PreUp = iptables -I FORWARD -i %i -o eth0 -j ACCEPT
PostDown = iptables -D FORWARD -i %i -o eth0 -j ACCEPT

# Perform NAT for traffic from the VPN client going to the internet
PreUp = iptables -t nat -I POSTROUTING -s 10.10.0.2/32 -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.10.0.2/32 -o eth0 -j MASQUERADE

[Peer]
PublicKey = KPAx2vIhCG6YU3X3ftrW76tvuj7zgigmsQZVzf1VfQY=
AllowedIPs = 10.10.0.2/32

Wireguard client config:

Wireguard client config
[Interface]
PrivateKey = YYYYY
Address = 10.10.0.2
        
[Peer]
PublicKey = hxixmr6nAnkDOoDCWxRMVDIgt7YrZDmTcynF+5palSE=
AllowedIPs = 0.0.0.0/0
Endpoint = 46.224.42.102:51820
PersistentKeepalive = 25

Storj node config (config.yaml):

Storj node config (config.yaml)
# how frequently bandwidth usage cache should be synced with the db
# bandwidth.interval: 1h0m0s

# how many expired pieces to delete in one batch. If <= 0, all expired pieces will be deleted in one batch. (ignored by flat file store)
# collector.expiration-batch-size: 1000

# how long should the collector wait before deleting expired pieces. Should not be less than 30 min since nodes are allowed to be 30 mins out of sync with the satellite.
# collector.expiration-grace-period: 1h0m0s

# how many per hour flat files can be deleted in one batch.
# collector.flat-file-batch-limit: 5

# how frequently expired pieces are collected
# collector.interval: 1h0m0s

# delete expired pieces in reverse order (recently expired first)
# collector.reverse-order: false

# use color in user interface
# color: false

# server address of the api gateway and frontend app
console.address: 0.0.0.0:14002

# path to static resources
# console.static-dir: ""

# timeout for the check-in request
# contact.check-in-timeout: 10m0s

# the public address of the node, useful for nodes behind NAT
contact.external-address: 46.224.42.102:28967
# contact.external-address: ""

# how frequently the node contact chore should run
# contact.interval: 1h0m0s

# coma separated key=value pairs, which will be self signed and used as tags
# contact.self-signed-tags: []

# protobuf serialized signed node tags in hex (base64) format
# contact.tags: ""

# Maximum Database Connection Lifetime, -1ns means the stdlib default
# db.conn_max_lifetime: 30m0s

# Maximum Amount of Idle Database connections, -1 means the stdlib default
# db.max_idle_conns: 1

# Maximum Amount of Open Database connections, -1 means the stdlib default
# db.max_open_conns: 5

# address to listen on for debug endpoints
# debug.addr: 127.0.0.1:0

# If set, a path to write a process trace SVG to
# debug.trace-out: ""

# open config in default editor
# edit-conf: false

# if true, force disk synchronization and atomic writes
# filestore.force-sync: false

# in-memory buffer for uploads
# filestore.write-buffer-size: 128.0 KiB

# how often to run the chore to check for satellites for the node to forget
# forget-satellite.chore-interval: 1m0s

# number of workers to handle forget satellite
# forget-satellite.num-workers: 1

# how often to run the chore to check for satellites for the node to exit.
# graceful-exit.chore-interval: 1m0s

# the minimum acceptable bytes that an exiting node can transfer per second to the new node
# graceful-exit.min-bytes-per-second: 5.00 KB

# the minimum duration for downloading a piece from storage nodes before timing out
# graceful-exit.min-download-timeout: 2m0s

# number of concurrent transfers per graceful exit worker
# graceful-exit.num-concurrent-transfers: 5

# number of workers to handle satellite exits
# graceful-exit.num-workers: 4

# if the log file is not this alive, compact it
# hashstore.compaction.alive-fraction: 0.25

# max size of a log file
# hashstore.compaction.max-log-size: "1073741824"

# controls if we collect records and sort them and rewrite them before the hashtbl
# hashstore.compaction.ordered-rewrite: true

# power to raise the rewrite probability to. >1 means must be closer to the alive fraction to be compacted, <1 means the opposite
# hashstore.compaction.probability-power: 2

# multiple of the hashtbl to rewrite in a single compaction
# hashstore.compaction.rewrite-multiple: 10

# if set, call mlock on any mmap/mremap'd data
# hashstore.hashtbl.mlock: true

# if set, uses mmap to do reads
# hashstore.hashtbl.mmap: false

# path to store log files in (by default, it's relative to the storage directory)'
# hashstore.logs-path: hashstore

# if set, call mlock on any mmap/mremap'd data
# hashstore.memtbl.mlock: true

# if set, uses mmap to do reads
# hashstore.memtbl.mmap: false

# number of open file handles to cache for reads
# hashstore.store.open-file-cache: 10

# if set, writes to the log file and table are fsync'd to disk
# hashstore.store.sync-writes: false

# default table kind to use (hashtbl or memtbl) during NEW compations
# hashstore.table-default-kind: HashTbl

# path to store tables in. Can be same as LogsPath, as subdirectories are used (by default, it's relative to the storage directory)
# hashstore.table-path: hashstore

# Enable additional details about the satellite connections via the HTTP healthcheck.
healthcheck.details: false

# Provide health endpoint (including suspension/audit failures) on main public port, but HTTP protocol.
healthcheck.enabled: true

# path to the certificate chain for this identity
identity.cert-path: identity/identity.cert

# path to the private key for this identity
identity.key-path: identity/identity.key

# if true, log function filename and line number
# log.caller: false

# custom level overrides for specific loggers in the format NAME1=ERROR,NAME2=WARN,... Only level increment is supported, and only for selected loggers!
# log.custom-level: ""

# if true, set logging to development mode
# log.development: false

# configures log encoding. can either be 'console', 'json', 'pretty', or 'gcloudlogging'.
# log.encoding: ""

# the minimum log level to log
log.level: info

# can be stdout, stderr, or a filename
log.output: "/app/config/node.log"

# if true, log stack traces
# log.stack: false

# address(es) to send telemetry to (comma-separated)
# metrics.addr: collectora.storj.io:9000

# application name for telemetry identification. Ignored for certain applications.
# metrics.app: storagenode

# application suffix. Ignored for certain applications.
# metrics.app-suffix: -release

# address(es) to send telemetry to (comma-separated IP:port or complex BQ definition, like bigquery:app=...,project=...,dataset=..., depends on the config/usage)
# metrics.event-addr: eventkitd.datasci.storj.io:9002

# size of the internal eventkit queue for UDP sending
# metrics.event-queue: 10000

# instance id prefix
# metrics.instance-prefix: ""

# how frequently to send up telemetry. Ignored for certain applications.
# metrics.interval: 1m0s

# path to log for oom notices
# monkit.hw.oomlog: /var/log/kern.log

# maximum duration to wait before requesting data
# nodestats.max-sleep: 5m0s

# how often to sync storage
# nodestats.storage-sync: 12h0m0s

# operator email address
operator.email: ""

# operator wallet address
operator.wallet: ""

# operator wallet features
operator.wallet-features: ""

# move pieces to trash upon deletion. Warning: if set to false, you risk disqualification for failed audits if a satellite database is restored from backup.
# pieces.delete-to-trash: true

# use flat files for the piece expiration store instead of a sqlite database
# pieces.enable-flat-expiration-store: true

# run garbage collection and used-space calculation filewalkers as a separate subprocess with lower IO priority
# pieces.enable-lazy-filewalker: true

# optional type of file stat cache. Might be useful for slow disk and limited memory. Available options: badger (EXPERIMENTAL)
# pieces.file-stat-cache: ""

# use and remove piece expirations from the sqlite database _also_ when the flat expiration store is enabled
# pieces.flat-expiration-include-sq-lite: true

# number of concurrent file handles to use for the flat expiration store
# pieces.flat-expiration-store-file-handles: 1000

# maximum time to buffer writes to the flat expiration store before flushing
# pieces.flat-expiration-store-max-buffer-time: 5m0s

# where to store flat piece expiration files, relative to the data directory
# pieces.flat-expiration-store-path: piece_expirations

# how often to empty check the trash, and delete old files
# pieces.trash-chore-interval: 24h0m0s

# deprecated
# pieces.write-prealloc-size: 4.0 MiB

# whether or not preflight check for database is enabled.
# preflight.database-check: true

# whether or not preflight check for local system clock is enabled on the satellite side. When disabling this feature, your storagenode may not setup correctly.
# preflight.local-time-check: true

# store reputation stats in cache
# reputation.cache: true

# how often to sync reputation
# reputation.interval: 4h0m0s

# maximum duration to wait before requesting data
# reputation.max-sleep: 5m0s

# path to the cache directory for retain requests.
# retain.cache-path: config/retain

# how many concurrent retain requests can be processed at the same time.
# retain.concurrency: 1

# allows for small differences in the satellite and storagenode clocks
# retain.max-time-skew: 72h0m0s

# allows configuration to enable, disable, or test retain requests from the satellite. Options: (disabled/enabled/debug)
# retain.status: enabled

# public address to listen on
server.address: :28967

# whether to debounce incoming messages
# server.debouncing-enabled: true

# if true, client leaves may contain the most recent certificate revocation for the current certificate
# server.extensions.revocation: true

# if true, client leaves must contain a valid "signed certificate extension" (NB: verified against certs in the peer ca whitelist; i.e. if true, a whitelist must be provided)
# server.extensions.whitelist-signed-leaf: false

# path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
# server.peer-ca-whitelist-path: ""

# identity version(s) the server will be allowed to talk to
# server.peer-id-versions: latest

# private address to listen on
server.private-address: 127.0.0.1:7778

# url for revocation database (e.g. bolt://some.db OR redis://127.0.0.1:6378?db=2&password=abc123)
# server.revocation-dburl: bolt://config/revocations.db

# enable support for tcp fast open
# server.tcp-fast-open: true

# the size of the tcp fast open queue
# server.tcp-fast-open-queue: 256

# if true, uses peer ca whitelist checking
# server.use-peer-ca-whitelist: true

# total allocated disk space in bytes
storage.allocated-disk-space: 2.00 TB

# path to store data in
# storage.path: config/storage

# how often the space used cache is synced to persistent storage
# storage2.cache-sync-interval: 1h0m0s

# directory to store databases. if empty, uses data path
# storage2.database-dir: ""

# how soon before expiration date should things be considered expired
# storage2.expiration-grace-period: 48h0m0s

# how many concurrent requests are allowed, before uploads are rejected. 0 represents unlimited.
# storage2.max-concurrent-requests: 0

# amount of memory allowed for used serials store - once surpassed, serials will be dropped at random
# storage2.max-used-serials-size: 1.00 MB

# a client upload speed should not be lower than MinUploadSpeed in bytes-per-second (E.g: 1Mb), otherwise, it will be flagged as slow-connection and potentially be closed
# storage2.min-upload-speed: 0 B

# if the portion defined by the total number of alive connection per MaxConcurrentRequest reaches this threshold, a slow upload client will no longer be monitored and flagged
# storage2.min-upload-speed-congestion-threshold: 0.8

# if MinUploadSpeed is configured, after a period of time after the client initiated the upload, the server will flag unusually slow upload client
# storage2.min-upload-speed-grace-duration: 10s

# how frequently to report storage stats to the satellite
# storage2.monitor.interval: 1h0m0s

# how much bandwidth a node at minimum has to advertise (deprecated)
# storage2.monitor.minimum-bandwidth: 0 B

# how much disk space a node at minimum has to advertise
# storage2.monitor.minimum-disk-space: 500.00 GB

# how frequently to verify the location and readability of the storage directory
# storage2.monitor.verify-dir-readable-interval: 1m0s

# how long to wait for a storage directory readability verification to complete
# storage2.monitor.verify-dir-readable-timeout: 1m0s

# if the storage directory verification check fails, log a warning instead of killing the node
# storage2.monitor.verify-dir-warn-only: false

# how frequently to verify writability of storage directory
# storage2.monitor.verify-dir-writable-interval: 5m0s

# how long to wait for a storage directory writability verification to complete
# storage2.monitor.verify-dir-writable-timeout: 1m0s

# how long after OrderLimit creation date are OrderLimits no longer accepted
# storage2.order-limit-grace-period: 1h0m0s

# length of time to archive orders before deletion
# storage2.orders.archive-ttl: 168h0m0s

# duration between archive cleanups
# storage2.orders.cleanup-interval: 5m0s

# maximum duration to wait before trying to send orders
# storage2.orders.max-sleep: 30s

# path to store order limit files in
# storage2.orders.path: config/orders

# timeout for dialing satellite during sending orders
# storage2.orders.sender-dial-timeout: 1m0s

# duration between sending
# storage2.orders.sender-interval: 1h0m0s

# timeout for sending
# storage2.orders.sender-timeout: 1h0m0s

# if set to true, all pieces disk usage is recalculated on startup
# storage2.piece-scan-on-startup: true

# how long to spend waiting for a stream operation before canceling
# storage2.stream-operation-timeout: 30m0s

# file path where trust lists should be cached
# storage2.trust.cache-path: config/trust-cache.json

# list of trust exclusions
# storage2.trust.exclusions: ""

# how often the trust pool should be refreshed
# storage2.trust.refresh-interval: 6h0m0s

# list of trust sources
# storage2.trust.sources: https://static.storj.io/dcs-satellites

# how many pieces to buffer
# storage2migration.buffer-size: 1

# constant delay between migration of two pieces. 0 means no delay
# storage2migration.delay: 0s

# whether to also delete expired pieces; has no effect if expired are migrated
# storage2migration.delete-expired: true

# how long to wait between pooling satellites for active migration
# storage2migration.interval: 10m0s

# whether to add jitter to the delay; has no effect if delay is 0
# storage2migration.jitter: true

# whether to also migrate expired pieces
# storage2migration.migrate-expired: true

# whether to also migrate pieces for satellites outside currently set
# storage2migration.migrate-regardless: false

# if true, whether to suppress central control of migration initiation
# storage2migration.suppress-central-migration: false

# address for jaeger agent
# tracing.agent-addr: agent.tracing.datasci.storj.io:5775

# application name for tracing identification
# tracing.app: storagenode

# application suffix
# tracing.app-suffix: -release

# buffer size for collector batch packet size
# tracing.buffer-size: 0

# whether tracing collector is enabled
# tracing.enabled: true

# the possible hostnames that trace-host designated traces can be sent to
# tracing.host-regex: \.storj\.tools:[0-9]+$

# how frequently to flush traces to tracing agent
# tracing.interval: 0s

# buffer size for collector queue size
# tracing.queue-size: 0

# how frequent to sample traces
# tracing.sample: 0

# Interval to check the version
# version.check-interval: 15m0s

# Request timeout for version checks
# version.request-timeout: 1m0s

# Define the run mode for the version checker. Options (once,periodic,disable)
# version.run-mode: periodic

# server address to check its version against
version.server-address: https://version.storj.io

Docker config:

Docker compose

docker-compose.yml
services:

  storj5:
    build: .
    container_name: storj5
    restart: unless-stopped
    stop_grace_period: 300s
    user: "${UID}:${GID}"
    network_mode: "host"
    environment:
      WALLET: "${WALLET}"
      EMAIL: "${EMAIL}"
      ADDRESS: "46.224.42.102:28967"
      STORAGE: "500GB"
    volumes:
      - /mnt/data/storj5_identity:/app/identity
      - /mnt/data/storj5_data:/app/config

Dockerfile (to have curl and ping tools)

Dockerfile
FROM storjlabs/storagenode:latest

RUN apt update && \
    apt install -y curl iputils-ping && \
    apt clean

Results

When I start the webserver container, I can reach the webserver on port 28967, from an unrelated network (mobile tethering):

jeremy@XPS-15-9520:~$ curl 46.224.42.102:28967
It works

Then, I immediately stop the webserver container and start the Storj node:
docker compose up -d --build

And here are the logs:

Storj node logs
2026-01-10T22:09:37Z	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2026-01-10T22:09:37Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2026-01-10T22:09:37Z	INFO	Operator email	{"Process": "storagenode", "Address": "jrmy.blanchard@gmail.com"}
2026-01-10T22:09:37Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xD5bB9395dB87015F8240D768ad9575987CEDAF3F"}
2026-01-10T22:09:37Z	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2026-01-10T22:09:37Z	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2026-01-10T22:10:07Z	ERROR	version	failed to get process version info	{"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": dial tcp 34.173.164.90:443: i/o timeout", "errorVerbose": "version checker client: Get \"https://version.storj.io\": dial tcp 34.173.164.90:443: i/o timeout\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tmain.cmdRun:91\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tmain.main:34\n\truntime.main:283"}
2026-01-10T22:10:07Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "12mTiMdhndiXdp2AQPvUuGGq6DNCXUa89yGfNSXvw6E5ex9ng2V"}
2026-01-10T22:10:07Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "12mTiMdhndiXdp2AQPvUuGGq6DNCXUa89yGfNSXvw6E5ex9ng2V"}
2026-01-10T22:10:07Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 62}
2026-01-10T22:10:08Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2026-01-10T22:12:24Z	ERROR	preflight:localtime	unable to get satellite system time	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "rpc: tcp connector failed: rpc: dial tcp 35.207.121.91:7777: connect: connection timed out", "errorVerbose": "rpc: tcp connector failed: rpc: dial tcp 35.207.121.91:7777: connect: connection timed out\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2026-01-10T22:12:24Z	ERROR	preflight:localtime	unable to get satellite system time	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "rpc: tcp connector failed: rpc: dial tcp 34.2.157.232:7777: connect: connection timed out", "errorVerbose": "rpc: tcp connector failed: rpc: dial tcp 34.2.157.232:7777: connect: connection timed out\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2026-01-10T22:12:24Z	ERROR	preflight:localtime	unable to get satellite system time	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "rpc: tcp connector failed: rpc: dial tcp 35.212.10.183:7777: connect: connection timed out", "errorVerbose": "rpc: tcp connector failed: rpc: dial tcp 35.212.10.183:7777: connect: connection timed out\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2026-01-10T22:12:24Z	ERROR	preflight:localtime	unable to get satellite system time	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "rpc: tcp connector failed: rpc: dial tcp 35.215.108.32:7777: connect: connection timed out", "errorVerbose": "rpc: tcp connector failed: rpc: dial tcp 35.215.108.32:7777: connect: connection timed out\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2026-01-10T22:12:24Z	ERROR	Failed preflight check.	{"Process": "storagenode", "error": "system clock is out of sync: system clock is out of sync with all trusted satellites", "errorVerbose": "system clock is out of sync: system clock is out of sync with all trusted satellites\n\tstorj.io/storj/storagenode/preflight.(*LocalTime).Check:96\n\tstorj.io/storj/storagenode.(*Peer).Run:1100\n\tmain.cmdRun:127\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tmain.main:34\n\truntime.main:283"}
2026-01-10T22:12:25Z	ERROR	failure during run	{"Process": "storagenode", "error": "system clock is out of sync: system clock is out of sync with all trusted satellites", "errorVerbose": "system clock is out of sync: system clock is out of sync with all trusted satellites\n\tstorj.io/storj/storagenode/preflight.(*LocalTime).Check:96\n\tstorj.io/storj/storagenode.(*Peer).Run:1100\n\tmain.cmdRun:127\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tmain.main:34\n\truntime.main:283"}
2026-01-10T22:12:25Z	FATAL	Unrecoverable error	{"Process": "storagenode", "error": "system clock is out of sync: system clock is out of sync with all trusted satellites", "errorVerbose": "system clock is out of sync: system clock is out of sync with all trusted satellites\n\tstorj.io/storj/storagenode/preflight.(*LocalTime).Check:96\n\tstorj.io/storj/storagenode.(*Peer).Run:1100\n\tmain.cmdRun:127\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tmain.main:34\n\truntime.main:283"}

And here is what happens while the Storj node is running with the wireguard tunnel up.
The wireguard client has the right public IP (the wireguard server’s IP):

jeremy@ubuntulab:~/apps/storj$ curl ipinfo.io
{
  "ip": "46.224.42.102",
  "hostname": "static.102.42.224.46.clients.your-server.de",
  "city": "Falkenstein",
  "region": "Saxony",
  "country": "DE",
  "loc": "50.4779,12.3713",
  "org": "AS24940 Hetzner Online GmbH",
  "postal": "08520",
  "timezone": "Europe/Berlin",
  "readme": "https://ipinfo.io/missingauth"
}

And the Storj container traffic goes through the tunnel too:

jeremy@ubuntulab:~/apps/storj$ sudo docker exec storj5 curl ipinfo.io
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
{ 0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  "ip": "46.224.42.102",
  "hostname": "static.102.42.224.46.clients.your-server.de",
  "city": "Falkenstein",
  "region": "Saxony",
  "country": "DE",
  "loc": "50.4779,12.3713",
  "org": "AS24940 Hetzner Online GmbH",
  "postal": "08520",
  "timezone": "Europe/Berlin",
  "readme": "https://ipinfo.io/missingauth"
}100   319  100   319    0     0    963      0 --:--:-- --:--:-- --:--:--   963

I haven’t read this whole thread, but the clock-out-of-sync log entry will also happen if the wireguard server / VPS itself can’t contact the satellites on port 7777. Some VPS providers have blocked outbound connections to 7777.

(so it’s not that the time is incorrect: it’s that the node can’t connect to a satellite to even know what the reference time should be)

Interesting!
Is there any way I can confirm that the outbound connections to 7777 are blocked?

On Linux systems it’s easy to use netcat/nc. Grab the satellite IPs/ports from your log, like this:

nc -zv 35.215.108.32 7777

It should say something like this if something is connectable on 7777:

Connection to 35.215.108.32 7777 port [tcp/*] succeeded!

If your WG server can’t connect normally itself… then obviously it won’t be able to tunnel it through your VPN either.
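
To avoid waiting for a long TCP timeout, you can give nc an explicit timeout and check the result, roughly like this:

nc -zv -w 5 35.215.108.32 7777 && echo reachable || echo "blocked or filtered"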

When I try nc -zv 34.173.164.90 443 and nc -zv 35.215.108.32 7777 from the wireguard server (without the tunnel open, but that shouldn’t impact anything), I don’t get anything (I suppose I cancelled the command before it timed out).

Though, from the Storj machine (without the tunnel), it succeeds. I don’t know what is blocking this traffic.

Is it possible that the IP of my VPS (or an IP range) is blocked by the Storj Labs team to prevent the use of a VPN?

If you use Hetzner, then yes, it’s possible. GCP is blocking some IP ranges belonging to Hetzner due to repeated malicious activity.
The only way to bypass it is to change the IP or the VPS.

At that point I would suggest using a VPN provider with a port forwarding option; however, it may work worse in terms of node selection.

Thanks.

Would it make sense to add specific routes on the host machine, telling it to bypass the wireguard tunnel for these IPs (34.173.164.90 and 35.215.108.32)?

Not really, because audit and repair workers would also likely be unable to connect to your node. The same would apply to customers coming from GCP.

If you have a public IP on your host you should not use VPN at all.


Thanks.

So, I decided to use OCI (Oracle Cloud) and it works!

Thank you so much all for your help!
It was really interesting and I learned so much!

ps: I don’t know which post I should tag as the solution, so I’ll tag this last one.


So a lot of you guys use Oracle Cloud and a few other clouds, and concentrate a lot of nodes on these VPSes, going against the decentralised purpose of the project. Besides the fact that it’s supposed to use your personal spare capacity, not some rented cloud storage…
Not helpful for the project, even though many of you say “I like this project and I want to support it, even if I don’t get paid, or the payment is very low”. You just rented someone else’s space in the same place where a few thousand others did! How is that helping the project? :unamused_face:
But the silver lining is that you get such low ingress compared to nodes on personal IPs/storage. I don’t know where I’m going with this, but I think it’s just surprising to me.

I’m moving to a new place where I can’t do port forwarding.
I’m not using Oracle’s storage. I’m using Oracle to create a tunnel.


How so? The packets just go via a different route. Does not affect decentralization in any way.

The purpose of the project is to find a use for underutilized capacity. That also remains unaffected.


Availability is indeed at risk. But storage on its own—not really, it’s not like Oracle Cloud stores data. Assuming the node operator will promptly set up a tunnel through a different provider, it should be fine. A more sophisticated operator could even do some sort of an automated failover, which would restore availability.
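
As a rough illustration of such a failover (the interface names and the backup config wg1.conf are hypothetical; the check target is the server’s tunnel address):

#!/bin/sh
# Run periodically (e.g. from cron). If the current WireGuard server's tunnel
# address stops answering pings, switch to a backup tunnel on another VPS.
if ! ping -c 3 -W 2 10.10.0.1 > /dev/null 2>&1; then
    wg-quick down wg0 2>/dev/null
    wg-quick up wg1
fi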


Beware of the traffic fees if you have a CC linked and you go over the quota. It can be very, very unpleasant :joy:.
You can also set up notifications there to let you know when you’re approaching the limits you set, so if you have a CC linked, be sure to do that.
Better to bring the node down than to bring the bank account down :grinning_face_with_smiling_eyes:.