Weird behavior: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Hi everyone, new user here, but I was on Rocket.Chat before…
I believe I have been running my node since May '19, and everything went fine until now. I have no idea how to fix my issue.
I run my node on Debian 9.9.0. I am not very Linux savvy, but I try.
I have Watchtower installed and use SSH to check the dashboard.
When I came back today, I noticed that my SSH dashboard session had been terminated, as it would be when Watchtower applied an update. However, that was not it. When I try to start the dashboard, it throws:

```
root@flux:/var/lib/docker/overlay2# docker exec -it storagenode /app/dashboard.sh
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```

top shows that the dockerd, containerd and storagenode processes are running.
I tried to reboot and got a timeout while it tried to unmount a whole bunch of overlay2/…/merged folders.
I had never seen those before. After the reboot the same daemons are running, but I still get the "Cannot connect to the Docker daemon" error. It seems that for some reason some folders/mount points got messed up. Look at everything in df -h:

overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/22971e9dbc2adb453fc278d027500904608a53e4e77bad8bb3a37b21adb9ef98/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/24a0641cd8707c01a4bbb7252640186ab3ec0ac0f647d95d92555642faa92e47/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/21fb9c1a9b73a5368f87a9ce21b91407b479e8bdb7f84de23e2bbf9ae1a13115/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/f9e3444713604491ecfa9c5048feb33dffe8edf0c811b72c0541c8a8ba040b24/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/d6fce5a133b792ff0dd0bde89867dcd140953f1a0da85b1784819a5a993402ce/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/9b6a3bbf0d88a6c05c2752dde9a6121845abaa6b12534990d8c94eec76d73078/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/16f8d74e9c89ac5789c0f70534d60c192b5b0de6b5e70411f192e75dd7e28ae5/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/7f1c9571038d3364763cdf2d6eec82dd9a59d1ef3eaf7fc0b1d1aad1f86ef9a6/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/cfbbeae0f607ddf4e0533bb71f0b1297cd3db3e46f84b4297fb3b0b5c003f9f6/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/6c476e7d470be8784d90e345b141457c6c4740bd68b1b630302cab2df6ad190a/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/aedc507df031f70e2fedfa1048abc67b3742a9e7f3f4a7c84f7f0493ee4bbc23/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/fced116e1a8bbaf8045d8f890f00347c197714d5451d83c74b8ac25070b9ba38/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/e73f86f8f45392ffba5bbe059e560f982f8c9edacc53045893fd33a9bfc518ce/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/21bf1ebb0aaca0b17ab738f34ec11452fc724bf4731589a4a416101b363b970a/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/a61d769916a2fc305ce16a0cbba6370206d687b92b6602e71a265026891e8c26/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/50381dfc3fdbcd0de001a18f20ab0edadd6cdfaaf36a8db44f982549f3c43beb/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/e64db6e3cc52c20a72fbd803221cc8841d57b1bc0325ecce6da015934c7fa7e1/merged
shm              64M     0   64M   0% /var/lib/docker/containers/49318e9ed43bf6e3b57e5575d52e5ee635f072a6add64af7233fc91cae396749/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/9f1843d115f2c8dc8b8e4155e24498adbde4cd68250c418f908cdc30b5ffc943/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/a69a4d5ed0e928837523cd15dcf1eccf44caca6f6c03d4114446f6a1a96b8c41/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/ac1b4f396a304dbf415bee7545cadb7a68cb8ba7943b8f23b0ec91a7fff4a3f4/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/c8953d47c04302e64d300a639970e2e3bdabe90f9646e0a10f681d2a0af51ffd/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/d272082f5fb7806195c48e20e693b8b7046f4b4d3f307ab8a13d5e0a1c1156a5/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/6fb673dfe203ddb79421f0763cc8a7492545808c2a2938ccde66fb85f4549c99/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/2b8154f5d1e563c28452d024b01f35884752e396a76633e928075abb60c2762d/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/34fd25b9c6e44461aaafc6d7b4fcaa64e3c5d70da16bf2cf66bac06b8e07789c/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/ed4274e1a30263fb23bb0f2111740b871149da044296058920bf4af1155f631e/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/6d92fc27b7dcc5d8565b6cb60ac87b9ec86fcc7d1efa25faff7fa781ae5f9246/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/fbe001588940eb7fddaa3a94a1d055d40839294dae6e9d144aa77ac1840d1cdc/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b4be0d9fbba6d8fcc29fdefd2a751c0582a75e5ab63c0059782685723d16a7f8/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b5f56240031931e9473d28c0d49562f8331b0e409773acfa2320d7a59cf169e9/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/66e40e73704b15e30a45ce213e45aa8a366458b43c8ee5434954abd239e187df/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/5424b2606fd0e28c9856a3e983099855f269717c0cd447ca21a50698e8856207/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/40b54d46969c068a032e76c2c695d9b3c62afad6927ad707d2c4f3d38de975c2/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/db5f1e90c38c53bba192cf3414c647dfe6f16df2859cd28a2f49d76dfdc3798f/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/f503815f1730de60b70fcd616f665c40ed0910e984e5208779816a94806b8940/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/58f242f8dbd53b28bccd57996a871aef9186089346f6f6c4c845cc7646fa4db0/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/01f15e27969fcdba59539123a6be5604907c041d9d4abc1cb08d7ee7ce19f3e2/mounts/shm

overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/3c8cb4e4caa1f736dc98ee0ce7aefeb91fc3d850a22698c4afa84aaae94cd0d7/merged
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/678ddb39ea8686d718b54832375dba76dc183218eb5fccf5223da5df9a2835c8/merged
shm              64M     0   64M   0% /var/lib/docker/containers/00fb5c4d0dfb064aeac89dac3d788304654cca073eb9d76706d2aec2d47cb466/mounts/shm
overlay          35G   26G  7.6G  78% /var/lib/docker/overlay2/d3b4055d37dc24d2409e6d9da54d7cc5576f8185a38a3dbb1c5cd1ab476c5438/merged
shm              64M     0   64M   0% /var/lib/docker/containers/56a9ffade6199aee3f2bbaf3f3c8101a911051ecfd8d289b3d47823405337547/mounts/shm

I hope the formatting comes out right… I had to remove a lot so I could post, but there are about 900 of those lines.
So whenever I restart the computer now, it gets stuck trying to unmount some containers. Like I said, there are about 900 of those, and I wonder if this is normal behavior?
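For what it's worth, the leftover mounts can be counted straight from the kernel's mount table even when the docker CLI cannot reach the daemon. A small sketch (the grep pattern is an assumption based on the df output above):

```shell
# Count Docker-related overlay and shm mounts from /proc/mounts.
# This only reads the kernel mount table, so it works without the docker CLI.
mounts_file=/proc/mounts
if [ -r "$mounts_file" ]; then
  count=$(grep -cE ' overlay |/mounts/shm' "$mounts_file")
else
  count=0   # not on Linux, or /proc not mounted
fi
echo "docker-related mounts still present: $count"
```

A healthy node host usually shows one overlay mount and one shm mount per running container, so hundreds of them suggests stale state left behind by a wedged daemon.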
I could not upgrade Docker through apt upgrade; it would just remain stuck until I killed the running storagenode, dockerd and containerd processes. Here is the output of dpkg:

Job for docker.service canceled.
invoke-rc.d: initscript docker, action "start" failed.
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: deactivating (stop-sigterm)
     Docs: https://docs.docker.com
 Main PID: 25742 (dockerd)
    Tasks: 759
   Memory: 285.3M
      CPU: 27min 54.031s
   CGroup: /system.slice/docker.service
           └─25742 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Feb 23 21:12:06 flux dockerd[25742]: time="2020-02-23T21:12:06.084498455-05:00" level=info msg="ignoring event" module=libcontain…askDelete"
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.357226020-05:00" level=info msg="ignoring event" module=libcontain…askDelete"
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.602432900-05:00" level=error msg="stream copy error: read /proc/se…dy closed"
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.602513829-05:00" level=error msg="stream copy error: read /proc/se…dy closed"
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.602594562-05:00" level=info msg="blockingPicker: the picked transp…odule=grpc
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.602813537-05:00" level=error msg="failed to get event" error="rpc …space=moby
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.603582879-05:00" level=error msg="stream copy error: reading from …osed fifo"
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.604241019-05:00" level=error msg="stream copy error: reading from …osed fifo"
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.604270810-05:00" level=error msg="failed to get event" error="rpc …ugins.moby
Feb 23 21:12:07 flux dockerd[25742]: time="2020-02-23T21:12:07.647995507-05:00" level=info msg="Processing signal 'terminated'"
Hint: Some lines were ellipsized, use -l to show in full.
dpkg: error processing package docker-ce (--configure):
 subprocess installed post-installation script returned error exit status 1

I let it try for about 30 minutes before killing the PIDs.

I think storagenode is working, but I have no way of knowing, as I cannot launch the dashboard over SSH or in a browser.

I need help, peeps! I don't want to lose my node!

Do you use -v or --mount in your docker run command?

```
docker run -d --restart unless-stopped -p 28967:28967 \
    -e WALLET="" \
    -e EMAIL="" \
    -e ADDRESS=":28967" \
    -e BANDWIDTH="11TB" \
    -e STORAGE="10TB" \
    -v "/home/flow/.local/share/storj/identity/storagenode":/app/identity \
    -v "/mnt/16TB/Storj":/app/config \
    --name storagenode storjlabs/storagenode:alpha
```

This is my run command, minus wallet and email.
I do use -v for /mnt…

I still have storagenode running as a process; does this mean that it is running?
How can I verify that if I can't run any docker commands?
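(As an aside: the daemon's reachability can be probed even when docker commands fail. The CLI talks to dockerd over a unix socket, so a dockerd process can appear in top while the socket is missing or dead. A minimal check, assuming the default Debian socket path:)

```shell
# Check whether the Docker control socket exists. A present socket plus a
# working 'docker info' means the daemon is reachable; a missing socket
# explains "Cannot connect to the Docker daemon" even if dockerd is in top.
check_socket() {
  if [ -S "$1" ]; then
    echo "present"   # next step: docker info
  else
    echo "missing"   # next step: systemctl status docker
  fi
}
check_socket /var/run/docker.sock
```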

That -v is the issue here. The docker run command has since been updated:

```
docker run -d --restart unless-stopped -p 28967:28967 -p 127.0.0.1:14002:14002 \
    -e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
    -e EMAIL="user@example.com" \
    -e ADDRESS="domain.ddns.net:28967" \
    -e BANDWIDTH="20TB" \
    -e STORAGE="2TB" \
    --mount type=bind,source="",destination=/app/identity \
    --mount type=bind,source="",destination=/app/config \
    --name storagenode storjlabs/storagenode:beta
```


Oh OK, I will change that, but I guess this would fix the folders/mount issue?

How would I prevent it from starting on boot?

What do you want to prevent?

Well, right now, even from a fresh reboot, I cannot enter any docker commands…

You should immediately stop your node, remove the container, and update the docker run command. You are risking your data by using the -v option.

```
docker stop -t 300 storagenode
docker rm storagenode
```

Then enter your updated docker run command with the --mount option.

Yes, but I cannot run ANY docker commands!
I figured that if I prevented Docker from starting automatically, I could update it and its run command.

See, the daemons and containers are working, even Watchtower, but I cannot enter docker commands; the answer is always:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Tasks: 469 total,   2 running, 467 sleeping,   0 stopped,   0 zombie
%Cpu(s): 30.6 us, 14.7 sy,  0.0 ni, 42.3 id, 12.2 wa,  0.0 hi,  0.2 si,  0.0 st
KiB Mem : 16466128 total,  5812148 free,  3360252 used,  7293728 buff/cache
KiB Swap:  1007612 total,  1007612 free,        0 used. 12579848 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  525 root      20   0  806524 268400  14060 R  89.1  1.6  51:37.80 NetworkManager
    1 root      20   0  146212  14128   5320 S  40.9  0.1  40:33.64 systemd
 6688 flow      20   0   71332  12704   5608 S  38.0  0.1  36:48.14 systemd
  694 lightdm   20   0   71288  12680   5592 S  37.0  0.1  36:43.41 systemd
  727 root      20   0 7694092 146312  46056 S  12.2  0.9  14:50.02 dockerd
  560 root      20   0 5652708  67040  28732 S   4.6  0.4   5:14.24 containerd
 4342 root      20   0  140504  43992  19532 S   1.0  0.3   4:56.37 storagenode
  306 root      20   0   78976  21780  21236 S   2.0  0.1   2:36.51 systemd-journal
  518 message+  20   0   45488   4352   3492 S   1.7  0.0   1:56.62 dbus-daemon
  335 root      20   0   46820   4440   2852 S   1.3  0.0   1:22.49 systemd-udevd
  513 root      20   0   46420   4852   4276 S   1.0  0.0   0:56.47 systemd-logind
 8436 flow      20   0   45460   4364   3172 R   1.0  0.0   0:43.87 top
13539 root      20   0 1172604 895888  10528 S   0.3  5.4   0:34.79 packagekitd
  515 root      20   0  250116   4100   2584 S   0.0  0.0   0:30.55 rsyslogd
  514 avahi     20   0   48160   4400   3104 S   0.3  0.0   0:27.85 avahi-daemon
  198 root      20   0       0      0      0 D   0.7  0.0   0:16.57 kworker/u16:8
  230 root       0 -20       0      0      0 S   0.0  0.0   0:16.45 kworker/6:1H
  278 root      20   0       0      0      0 S   0.3  0.0   0:15.04 jbd2/sdc3-8
    7 root      20   0       0      0      0 S   0.3  0.0   0:13.07 rcu_sched
  527 root      20   0  422608   9016   7228 S   0.3  0.1   0:11.54 ModemManager
 8718 root      20   0       0      0      0 S   0.0  0.0   0:09.34 kworker/u16:0
  203 root      20   0       0      0      0 S   0.0  0.0   0:05.84 kworker/3:2
25156 root      20   0       0      0      0 S   0.0  0.0   0:05.31 kworker/6:4
20961 root      20   0       0      0      0 S   0.0  0.0   0:05.23 kworker/1:0
16705 root      20   0       0      0      0 S   0.3  0.0   0:05.05 kworker/u16:1
   46 root      20   0       0      0      0 S   0.0  0.0   0:04.98 ksoftirqd/6
14532 root      20   0       0      0      0 S   0.0  0.0   0:04.49 kworker/4:1
28419 root      20   0       0      0      0 S   0.3  0.0   0:03.39 kworker/0:2
 3704 root      20   0       0      0      0 S   0.0  0.0   0:03.38 kworker/u16:2
12420 root      20   0       0      0      0 S   0.0  0.0   0:03.26 kworker/4:2
  699 lightdm   20   0  639056  74792  48896 S   0.0  0.5   0:02.95 lightdm-gtk-gre
   16 root      20   0       0      0      0 S   0.0  0.0   0:02.93 ksoftirqd/1
16811 root      20   0       0      0      0 S   0.0  0.0   0:02.85 kworker/1:3
31170 root      20   0       0      0      0 S   0.0  0.0   0:02.15 kworker/7:1
    3 root      20   0       0      0      0 S   0.0  0.0   0:02.05 ksoftirqd/0
   22 root      20   0       0      0      0 S   0.0  0.0   0:01.96 ksoftirqd/2
   34 root      20   0       0      0      0 S   0.0  0.0   0:01.93 ksoftirqd/4
24784 root      20   0       0      0      0 S   0.0  0.0   0:01.86 kworker/5:1
14004 root      20   0       0      0      0 S   0.0  0.0   0:01.84 kworker/6:0
  577 root      20   0  389140  53160  35636 S   0.0  0.3   0:01.72 Xorg
   28 root      20   0       0      0      0 S   0.0  0.0   0:01.64 ksoftirqd/3
25183 root      20   0       0      0      0 S   0.3  0.0   0:01.60 kworker/7:2
   40 root      20   0       0      0      0 S   0.0  0.0   0:01.58 ksoftirqd/5
   52 root      20   0       0      0      0 S   0.0  0.0   0:01.54 ksoftirqd/7
23532 root      20   0       0      0      0 S   0.3  0.0   0:01.47 kworker/5:3
17444 root      20   0       0      0      0 S   0.0  0.0   0:01.26 kworker/3:3
29659 root      20   0       0      0      0 S   0.0  0.0   0:01.25 kworker/7:4
  549 vnstat    20   0    7336    896    824 S   0.0  0.0   0:01.23 vnstatd
13428 root      20   0       0      0      0 S   0.0  0.0   0:01.21 kworker/0:3
13625 root      20   0       0      0      0 S   0.3  0.0   0:00.99 kworker/2:4
  237 root       0 -20       0      0      0 S   0.0  0.0   0:00.91 kworker/5:1H
17206 root      20   0       0      0      0 S   0.0  0.0   0:00.90 kworker/2:1
 7003 root      20   0       0      0      0 S   0.0  0.0   0:00.70 kworker/5:4
 7062 flow      20   0  101448   4940   3900 S   0.0  0.0   0:00.70 sshd
 3302 root      20   0       0      0      0 S   0.3  0.0   0:00.61 kworker/4:3
  232 root       0 -20       0      0      0 S   0.0  0.0   0:00.51 kworker/0:1H
19050 root      20   0       0      0      0 S   0.0  0.0   0:00.48 kworker/3:1
  231 root       0 -20       0      0      0 S   0.0  0.0   0:00.47 kworker/7:1H

Try reinstalling Docker.

I am trying to update it, but it remains at 0% until I kill storagenode, containerd and then dockerd.
But then it fails because it needs containerd or dockerd to be running. At least when I kill everything and then apt-get upgrade to update docker-ce, dockerd shows up in top along with containerd.

I can't seem to update it, but maybe I could prevent it from launching, then upgrade?
Could I uninstall it without losing data?

Yes, the node's data is stored outside Docker, so removing or upgrading Docker does not affect the data. In your case it might affect the data, though, since you kept using the -v flag. When Docker isn't able to locate the mounted drive, it starts downloading data into the overlay storage inside Docker. When you restart Docker or the PC, this data is lost, which results in failed audits.
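The difference can be illustrated without Docker at all: with -v, a missing host path is silently created as an empty directory and the container starts anyway, while --mount type=bind refuses to start. A rough sketch of the two behaviors using plain directories (the path and messages here are hypothetical):

```shell
# Simulate what each flag does when the bind source is missing,
# e.g. after a data drive failed to mount at boot.
src="/tmp/demo-storj-src"   # stands in for a path like /mnt/16TB/Storj
rm -rf "$src"

# -v semantics: create the missing source and start the container;
# the node then writes into an empty directory on the OS disk.
[ -d "$src" ] || mkdir -p "$src"
v_result="started"

# --mount type=bind semantics: refuse to start when the source is missing.
rm -rf "$src"
if [ -d "$src" ]; then
  mount_result="started"
else
  mount_result="bind source path does not exist"
fi

echo "-v:      $v_result"
echo "--mount: $mount_result"
```

That refusal is exactly what you want for a storage node: better a container that will not start than one that quietly fills an empty folder and fails audits.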

Do you have free disk space? Check with, for example:

```
df -h /dev/sda1
```

It seems to me that Docker does not start due to a lack of disk space…


Based on what he pasted earlier, there is ~20% free space on the disk that docker is using for container storage, but 0% free space in the shared memory volume (usually /dev/shm). This might be the problem – 64MB capacity for the shm volume seems very low. How much RAM does this system have?

Alright, so I uninstalled Docker and rm -rf'd the /var/lib/docker folder.
I rebooted, reinstalled, adjusted a few double quotes that were not right, and my dashboard shows online!
Uptime shows **98.8%** and audits **99.1%**; hopefully I didn't lose my node!

It now has 16h uptime.

@cdhowie Yes, the 20% is used space, and /dev/shm has 0% used space.
System memory is 16 GB, as shown in the top output above.

I was able to use the new run command with --mount instead of -v. Does this in itself mean the problem shouldn't happen again?

I can't even post before 24 hours have passed; I went over the daily limit for a new (old) user… I wish the admins would see this and maybe help remediate it. In an emergency situation where I needed support, I could not have a discussion with anyone after 22 thread replies, 2 threads created and 2 messages sent to someone; those are the limits for new users. I get it, we have to protect the forum from spamming, but when this hinders the support one can get, it may have to be reviewed. Just my opinion!
