ERROR contact:service ping satellite failed?

Sure. And the journal prints errors like these on container start.

systemd-udevd[4611]: veth9c4b829: Failed to get link config: No such device

Well…

Is there a docker0 interface listed on the host machine?

ip a | grep docker

If so, what does docker say is connected to its bridge?

docker network inspect bridge

Started from scratch. OS, everything… Running 1 node now.

Yes.

[
    {
        "Name": "bridge",
        "Id": "33d0deb4977d57fd9fd5372b2651d8e59785da5477965a0efc770e1f9397a381",
        "Created": "2022-01-12T05:43:25.698917804Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "336bde67faa28fe5fe09073bb3d4acb1178749010ee560bc2bae4b7cce966909": {
                "Name": "node1",
                "EndpointID": "d45bf0e1bd761b5d276d9cb30840fb81b1faf2c29251ca49387218da09e9ea27",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Will add more nodes again today.

What bothers me today is netplan complaining about default route consistency. I just installed Ubuntu.
Netplan as configured during install:

# This is the network config written by 'subiquity'
network:
  ethernets:
    eno1:
      addresses:
      - 11.22.33.210/24
      - 1a2a:3a4a:5a6a:16::601/64
      gateway4: 11.22.33.1
      gateway6: 1a2a:3a4a:5a6a:16::1
      nameservers:
        addresses:
        - 1.1.1.1
        - 1.0.0.1
        - 8.8.8.8
        - 8.8.4.4
        - 2606:4700:4700::1111
        - 2606:4700:4700::1001
        - 2001:4860:4860::8888
        - 2001:4860:4860::8844
      accept-ra: no
    eno2:
      addresses:
      - 22.33.44.210/24
      - 1a2a:3a4a:5a6a:17::602/64
      gateway4: 22.33.44.1
      gateway6: 1a2a:3a4a:5a6a:17::1
      nameservers:
        addresses:
        - 1.1.1.1
        - 1.0.0.1
        - 8.8.8.8
        - 8.8.4.4
        - 2606:4700:4700::1111
        - 2606:4700:4700::1001
        - 2001:4860:4860::8888
        - 2001:4860:4860::8844
      accept-ra: no
    eno3:
      addresses:
      - 33.44.55.210/24
      - 1a2a:3a4a:5a6a:18::603/64
      gateway4: 33.44.55.1
      gateway6: 1a2a:3a4a:5a6a:18::1
      nameservers:
        addresses:
        - 1.1.1.1
        - 1.0.0.1
        - 8.8.8.8
        - 8.8.4.4
        - 2606:4700:4700::1111
        - 2606:4700:4700::1001
        - 2001:4860:4860::8888
        - 2001:4860:4860::8844
      accept-ra: no
    eno4:
      addresses:
      - 44.55.66.210/24
      - 1a2a:3a4a:5a6a:19::604/64
      gateway4: 44.55.66.1
      gateway6: 1a2a:3a4a:5a6a:19::1
      nameservers:
        addresses:
        - 1.1.1.1
        - 1.0.0.1
        - 8.8.8.8
        - 8.8.4.4
        - 2606:4700:4700::1111
        - 2606:4700:4700::1001
        - 2001:4860:4860::8888
        - 2001:4860:4860::8844
      accept-ra: no
  version: 2

Ran this

echo 101 eno1-route >>/etc/iproute2/rt_tables
echo 102 eno2-route >>/etc/iproute2/rt_tables
echo 103 eno3-route >>/etc/iproute2/rt_tables
echo 104 eno4-route >>/etc/iproute2/rt_tables

rc.local

#!/bin/bash 

# Increasing The Transmit Queue Length
/sbin/ifconfig eno1 txqueuelen 10000
/sbin/ifconfig eno2 txqueuelen 10000
/sbin/ifconfig eno3 txqueuelen 10000
/sbin/ifconfig eno4 txqueuelen 10000
/sbin/ifconfig lo txqueuelen 10000
#routes
ip route add default via 11.22.33.1 dev eno1 table eno1-route
ip rule add from 11.22.33.210 lookup eno1-route
ip route add default via 22.33.44.1 dev eno2 table eno2-route
ip rule add from 22.33.44.210 lookup eno2-route
ip route add default via 33.44.55.1 dev eno3 table eno3-route
ip rule add from 33.44.55.210 lookup eno3-route
ip route add default via 44.55.66.1 dev eno4 table eno4-route
ip rule add from 44.55.66.210 lookup eno4-route
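The same txqueuelen change can also be made with iproute2 instead of the deprecated ifconfig, for example:

# equivalent of the ifconfig lines above, one per interface
ip link set dev eno1 txqueuelen 10000
ip link set dev lo txqueuelen 10000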

Getting this error when applying netplan.

** (generate:5742): WARNING **: 05:13:22.079: Problem encountered while validating default route consistency.Please set up multiple routing tables and use `routing-policy` instead.
Error: Conflicting default route declarations for IPv4 (table: main, metric: default), first declared in eno2 but also in eno4

All IPs, both IPv4 and IPv6, seem pingable after reboot. Ports are open. What is going on? I already have different routing tables. Conflicting default route declarations!?
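For reference, the warning is about the main routing table: every gateway4:/gateway6: line installs its own default route there, so netplan sees four competing defaults per family (the ip route list output below shows exactly that for IPv4). The per-interface tables only exist because rc.local adds them afterwards, outside netplan's view. What the warning asks for is to declare those defaults and source rules in netplan itself with routes: and routing-policy:, leaving at most one plain default in the main table. A minimal, untested sketch of what the eno2 stanza could look like with its gateway4: line replaced (IPv4 only, nameservers omitted for brevity; table 102 matches the rt_tables entry above, and eno3/eno4 would be analogous):

    eno2:
      addresses:
      - 22.33.44.210/24
      - 1a2a:3a4a:5a6a:17::602/64
      routes:
      - to: 0.0.0.0/0
        via: 22.33.44.1
        table: 102
      routing-policy:
      - from: 22.33.44.210/32
        table: 102
      accept-ra: no

netplan try can then validate and apply the change with an automatic rollback timeout.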

ip route list returns

default via 22.33.44.1 dev eno2 proto static 
default via 33.44.55.1 dev eno3 proto static 
default via 11.22.33.1 dev eno1 proto static 
default via 44.55.66.1 dev eno4 proto static 
44.55.66.0/24 dev eno4 proto kernel scope link src 44.55.66.210 
11.22.33.0/24 dev eno1 proto kernel scope link src 11.22.33.210 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
22.33.44.0/24 dev eno2 proto kernel scope link src 22.33.44.210 
33.44.55.0/24 dev eno3 proto kernel scope link src 33.44.55.210
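Note that ip route list only shows the main table, so the per-interface defaults that rc.local installs do not appear in that output. They can be checked explicitly, for example:

ip rule show
ip route show table eno1-route
ip route show table eno4-route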

Currently generating identity for node2 and waiting for it, so I can set it up… Then 3 and 4…


Update…

Node 1 - live
Node 2 - live
Node 3 - generating identity
Node 4 - live

Let’s hope 3 will start, too, when the CPUs finish generating the identity… :smiley: Stuck at difficulty 35 all day now. The rest took several minutes to generate. Strange luck.

UPDATE

Node 3 fails

ERROR contact:service ping satellite failed {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 8, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:138\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

I start it with…

docker run -d --restart unless-stopped --stop-timeout 300 -p 28969:28967/tcp -p 28969:28967/udp -p 33.44.55.66:14002:14002 -e WALLET="XXX" -e EMAIL="mail@domain.com" -e ADDRESS="node3.domain.com:28969" -e STORAGE="13TB" --mount type=bind,source="/root/.local/share/storj/identity/node3",destination=/app/identity --mount type=bind,source="/path/to/STORJ/node3",destination=/app/config --name S06n3 storjlabs/storagenode:latest

At the same time…
nc -vz node3.domain.com 28969
Connection to node3.domain.com 28969 port [tcp/*] succeeded!

Pretty much like the other 3, but this one fails. OK… Weird. I see only 3 vethXXXXXXX devices.
:rofl:
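A quick way to see which container is missing its veth pair is to compare what docker thinks is attached to the bridge with what the host actually has, for example:

# containers docker has on the default bridge, with their addresses
docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
# veth interfaces that actually exist on the host
ip -br link show type veth
# whether the node3 container is running at all
docker ps -a --filter name=S06n3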

UPDATE: server reboot… A random subset of nodes starts: only 4, or only 1 and 4, or 1, 2 and 4… I begin to dislike docker… A lot!

Docker sucks balls… Configured the nodes as systemd services and they all work. What I believe the problem is: docker fails to route the node's private address to its public address. It also appears port 7777 had to be open so the nodes could sync clocks with the satellites.
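For anyone wanting to replicate the systemd setup, a rough, untested sketch of one per-node unit (node3 shown). The binary path and flags are assumptions about the standalone storagenode binary, not something posted in this thread; the data and identity paths reuse the ones from the docker command above.

# sketch: write /etc/systemd/system/storagenode3.service and enable it
cat > /etc/systemd/system/storagenode3.service <<'EOF'
[Unit]
Description=Storj storage node 3
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/storagenode run --config-dir /path/to/STORJ/node3 --identity-dir /root/.local/share/storj/identity/node3
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now storagenode3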


I haven’t seen any behavior from docker like that… the clock not being correct will certainly get you though… been there a couple of times for various reasons, lol.
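A quick check that the host clock is actually in sync, before blaming anything else:

# "System clock synchronized" should say yes
timedatectl status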


Same, but I never liked docker to begin with…

Nobody forces you to use docker; there are binaries for installing the node directly on a machine.
For most of us, docker just makes life a lot easier.


Sure. I am saying it does not route properly in a scenario where one has multiple nodes on the same machine. When configured as a service, I just went to the storage node's config file and set a private address that won't change. Docker changes it however it wants, routes however it wants, screws up iptables, and ha-ha… Not my thing. I am not going to manually chase after docker's crap on each reboot. :crazy_face:

Maybe it’s because I don’t have any iptables rules running on the host… so it would make sense docker doesn’t get in the way of that.

On the contrary. It should, because for each container it adds its own virtual ethernet device with the veth driver and applies iptables rules to it.
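Those rules are easy to inspect on the host, for example:

# NAT rules docker creates for published ports
iptables -t nat -L DOCKER -n -v
# per-container forwarding rules
iptables -L DOCKER -n -v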

The problem isn’t docker then…

I suspect the problem has to do with the timing of the NIC cards and the naming thereof on boot… which is why systemd worked.
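If that is the cause, one way to test it with docker is to make the docker service wait until the network is fully configured before starting containers. A sketch, assuming systemd-networkd/netplan manages the NICs:

# add a drop-in so docker starts only after the network is really online
systemctl edit docker.service
# in the editor that opens, add:
#   [Unit]
#   Wants=network-online.target
#   After=network-online.target
# and make sure the wait-online service is enabled
systemctl enable systemd-networkd-wait-online.service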

Sure. This is why, when not using docker, all nodes on the system work at the same time, through a firewall and across reboots. That is what I wanted and tried for 3 days to achieve with docker per the documentation, except on the same system/machine, and it did not work. Right…