Watchtower killing my node

Each time a new update comes out, my node goes offline until I manually start it again. This has already caused a 10% uptime loss, and I was forced to disable watchtower.

This node is a little bit experimental: it runs its networking through another container. Why does watchtower vanish from Docker (I can’t see it in `docker ps -a`)? Shouldn’t it work properly regardless of the startup command?

OS is Ubuntu 18.04 LTS, watchtower image is `latest`.

And here are some logs:

What should I do to keep automatic updates on?
Or is the only possible solution to move to docker-compose?

I have never seen this happen with watchtower. It says you have something conflicting, so it’s unable to create the node. What did your docker start command look like?

What does your docker run command look like for the storagenode (with private information omitted)?

Here is my docker run command:

sudo docker run \
    --net=container:storj6_net \
    -d --restart unless-stopped --stop-timeout 300 \
    -e WALLET="******************************************" \
    -e EMAIL="***********@****.**" \
    -e ADDRESS="***.***.***.*:*****" \
    -e STORAGE="2.5TB" \
    --mount type=bind,source="/home/danyasworld/.local/share/storj/identity/storj6",destination=/app/identity \
    --mount type=bind,source="/media/storj6",destination=/app/config \
    --name storj6 storjlabs/storagenode:latest

I’m going to take a hunch and say this is the reason you’re having issues: watchtower has no idea how to handle this command when restarting the node.

I guess so, but the real question is why, and how to deal with it?

Well, why don’t you run a VPN on the system instead? I’m assuming that’s why you’re creating the bridge.

The VPN on the other side is too small, and one 2.5TB node fully utilizes it. This machine also runs 6 more nodes, so if I use a VPN on the system directly, they will all share one VPN connection and the tiny VPN will be a bottleneck. Right now my nodes are split between different VPNs, but the problems with watchtower annoy me.

Well, you do know that kind of defeats the purpose of running a node if you’re running all your nodes on the same system, each with its own VPN, and lying to the network that you have nodes in different locations, right? If that is the case, it kind of is your problem, since the way you’re doing it is not supported. And you will need to update your nodes manually every time.

My network is behind NAT and is unreachable from the internet. Also, all the VPNs go to one datacenter with cheap VDSes ($0.75 each), so they are all in one IP subnet. So they aren’t in different locations, neither for the network nor physically.

No, I understand, but you’re kind of going the wrong way about it… taking advantage of it and doing it that way, you’re trying to make 6 times the amount you would normally get. You say that, but I kind of know the real reason, or you wouldn’t be doing it with more than one node.

I know that total profit doesn’t depend on a node’s location. I also know that it affects ingress speed: the more IPs from different subnets, the more ingress and the faster the profit. That way totally defeats the purpose of running a node. But if my nodes are all in one subnet, they also split the ingress between them. So if you think I’m trying to gain more profit, you are wrong. For more profit you need more drives; that’s pretty clear.

It’s time to change provider :smiley: .

Hey, I’m not here to judge; it’s on you how you want to run your nodes. I’m just pointing out the obvious, since you said you didn’t want to run a VPN on the system because each of your nodes is already using one.

But I can’t say I support VPNs of any kind, for this reason: I have to run all my nodes on 1 subnet, while people who use VPNs get to hide their true locations, run many more nodes, and try to take advantage of it.

That’s exactly the issue, but since storjlabs/watchtower is 250+ commits behind the upstream project, they don’t have the PR that fixed this issue incorporated in their image.

@DanyaSWorlD, consider using containrrr/watchtower for your nodes routed through VPN. Your nodes will check for an update at a specified frequency rather than at a random time within a 72-hour window, so you’ll slightly hinder the goal of randomizing node updates for Docker SNOs, but it’s better than having a successful upgrade fail to restart properly.

If you do this, please set a low update check frequency, like once every 2 days or so. It’s always preferred to use the Storj-provided one, as they may make future changes required to keep things working.
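For reference, a minimal sketch of running the upstream image with roughly a 2-day check interval. The container name `watchtower` and the exact interval are illustrative, not prescribed by this thread; adjust both to your setup, and list only the storagenode containers you want watched:

```shell
# Illustrative sketch: run the upstream containrrr/watchtower image,
# checking for updates every 2 days (172800 seconds).
# Naming containers as trailing arguments (here: storj6) restricts
# watchtower to updating only those containers.
docker run -d \
    --name watchtower \
    --restart unless-stopped \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower \
    --interval 172800 \
    storj6
```

Mounting the Docker socket is what lets watchtower stop, pull, and recreate the watched containers.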

It would be best if Storj just merges this fix into their version.

I know I’ll get flamed for this, but this is exactly what I do, using the upstream watchtower image with a low update check frequency.

I haven’t gotten over the bad experience I had with storjlabs/watchtower on my ARM box. It’s probably fixed by now, but I haven’t run it since.

The only functional difference in the storjlabs/watchtower image is that it hardcodes the check interval to a random time between 12 and 72 hours.
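A rough shell illustration of that behavior (this is not the actual storjlabs code, just a sketch of picking a random delay in the described 12-to-72-hour range):

```shell
# Illustrative only: pick a random check interval between
# 12 hours (43200 s) and 72 hours (259200 s), the range the
# storjlabs/watchtower image is described as hardcoding.
interval=$(shuf -i 43200-259200 -n 1)
echo "$interval"
```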

Apart from that change, the storjlabs/watchtower image hasn’t been updated for almost a year. My view is that by using the containrrr/watchtower image with a 2-3 day check frequency, I’m meeting the intent of the rollout process while using a tool that’s better supported and updated. The issue raised by the OP is one example of a benefit of using the upstream image.
