Failed to start container (only on OS startup)

Sorry for the bother; my Linux (Ubuntu Noble) skills are mediocre. My container fails to start automatically with the OS due to the following error. Strangely, if I run `docker start storagenode` manually, it starts just fine. My mount is present in fstab.

When I run `docker container inspect storagenode`, I get:

"ExitCode": 127,
"Error": "failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/mnt/exos16tb/storj/storagenode1/identity/storagenode" to rootfs at "/app/identity": create mount destination for /app/identity mount: bind mount source stat: stat /mnt/exos16tb/storj/storagenode1/identity/storagenode: no such file or directory: unknown",

fstab:
/dev/sda1 /mnt/exos16tb auto nosuid,nodev,nofail,x-gvfs-show 0 0
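For reference, an fstab line has six whitespace-separated fields: device, mount point, filesystem type, options, dump, and fsck pass number. The `nofail` option here is worth noting: it lets the boot continue even if the drive is not ready yet. A small sketch that splits the exact line above (the variable names are just for illustration):

```shell
# Split the fstab entry from this post into its six fields (POSIX sh).
# fstab fields never contain literal spaces, so word splitting is safe here.
line="/dev/sda1 /mnt/exos16tb auto nosuid,nodev,nofail,x-gvfs-show 0 0"
set -- $line
device=$1; mountpoint=$2; fstype=$3; options=$4; dump=$5; passno=$6

echo "$mountpoint"   # /mnt/exos16tb
echo "$options"      # nosuid,nodev,nofail,x-gvfs-show
```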

Hello @Tempest,
Welcome back!

Please show the result of the command:

df --si -T

Thanks in advance, Alexey.

Filesystem Type Size Used Avail Use% Mounted on
tmpfs tmpfs 817M 2.3M 815M 1% /run
rpool/ROOT/ubuntu_vu93vu zfs 213G 3.8G 209G 2% /
tmpfs tmpfs 4.1G 0 4.1G 0% /dev/shm
tmpfs tmpfs 5.3M 8.2k 5.3M 1% /run/lock
efivarfs efivarfs 197k 94k 99k 49% /sys/firmware/efi/efivars
rpool/USERDATA/jpcpro_lt0kbl zfs 210G 323M 209G 1% /home/jpcpro
rpool/USERDATA/root_lt0kbl zfs 209G 263k 209G 1% /root
rpool/ROOT/ubuntu_vu93vu/var/lib zfs 232G 23G 209G 10% /var/lib
rpool/ROOT/ubuntu_vu93vu/var/games zfs 209G 132k 209G 1% /var/games
rpool/ROOT/ubuntu_vu93vu/usr/local zfs 209G 263k 209G 1% /usr/local
rpool/ROOT/ubuntu_vu93vu/srv zfs 209G 132k 209G 1% /srv
rpool/ROOT/ubuntu_vu93vu/var/log zfs 210G 209M 209G 1% /var/log
rpool/ROOT/ubuntu_vu93vu/var/mail zfs 209G 132k 209G 1% /var/mail
rpool/ROOT/ubuntu_vu93vu/var/snap zfs 209G 3.1M 209G 1% /var/snap
rpool/ROOT/ubuntu_vu93vu/var/www zfs 209G 132k 209G 1% /var/www
rpool/ROOT/ubuntu_vu93vu/var/spool zfs 209G 132k 209G 1% /var/spool
rpool/ROOT/ubuntu_vu93vu/var/lib/AccountsService zfs 209G 132k 209G 1% /var/lib/AccountsService
rpool/ROOT/ubuntu_vu93vu/var/lib/NetworkManager zfs 209G 263k 209G 1% /var/lib/NetworkManager
rpool/ROOT/ubuntu_vu93vu/var/lib/apt zfs 209G 120M 209G 1% /var/lib/apt
rpool/ROOT/ubuntu_vu93vu/var/lib/dpkg zfs 209G 48M 209G 1% /var/lib/dpkg
bpool/BOOT/ubuntu_vu93vu zfs 1.9G 202M 1.7G 11% /boot
/dev/nvme0n1p1 vfat 536M 16M 521M 3% /boot/efi
tmpfs tmpfs 817M 103k 817M 1% /run/user/128
rpool/ROOT/ubuntu_vu93vu/var/lib/1e8e8de80be1d12a3f1e45e9c72f1acda18f6a6ce96675e08722748767e63263 zfs 210G 165M 209G 1% /var/lib/docker/zfs/graph/1e8e8de80be1d12a3f1e45e9c72f1acda18f6a6ce96675e08722748767e63263
/dev/sda1 xfs 16T 7.1T 9.0T 44% /mnt/exos16tb
rpool/ROOT/ubuntu_vu93vu/var/lib/31faff44a4bf85ef2ca6ceaadc42c6f801808b5bc992ccc86dc3ce4ae23ddbcc zfs 210G 183M 209G 1% /var/lib/docker/zfs/graph/31faff44a4bf85ef2ca6ceaadc42c6f801808b5bc992ccc86dc3ce4ae23ddbcc
tmpfs tmpfs 817M 91k 817M 1% /run/user/1000

Seems it’s mounted. But perhaps it’s mounted after the docker daemon has started. It sounds like you use NoRAID; such setups mount drives only when the system is fully booted and the user logs in, which is after the docker daemon starts. Or perhaps something changed after the OS upgrade.
You need to fix that. Please try this:

  1. List your mount units using systemctl list-unit-files | grep ".mount". For example, a mount point at /mnt/data would correspond to mnt-data.mount.
  2. Modify the docker service with sudo systemctl edit docker.service. This creates an override file instead of modifying the original systemd unit directly.
  • Under the [Unit] section of the override file, add Requires= and After= entries for your specific mount unit. Replace your-mount-unit.mount with the actual name identified in step 1.
[Unit]
Requires=your-mount-unit.mount
After=your-mount-unit.mount
  3. Save the override file, then reload the systemd daemon to apply the new configuration:
sudo systemctl daemon-reload
  4. Restart the Docker service so that the new dependencies take effect:
sudo systemctl restart docker
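If it is not obvious which unit name belongs to /mnt/exos16tb, systemd derives it mechanically from the path: drop the leading slash, turn the remaining slashes into dashes, and append .mount (`systemd-escape -p --suffix=mount /mnt/exos16tb` prints the authoritative answer on a systemd machine). A minimal pure-shell sketch of that naming rule, assuming a path without dashes or other characters that need escaping (the helper name mount_unit_name is just for illustration):

```shell
# Sketch: derive the systemd mount-unit name for a simple mount point.
# The real escaping also encodes dashes and non-ASCII bytes as \xNN,
# so prefer `systemd-escape -p --suffix=mount PATH` when available.
mount_unit_name() {
  path="${1#/}"                                              # drop the leading "/"
  printf '%s.mount\n' "$(printf '%s' "$path" | tr '/' '-')"  # "/" -> "-", add suffix
}

mount_unit_name /mnt/exos16tb   # prints: mnt-exos16tb.mount
mount_unit_name /mnt/data       # prints: mnt-data.mount
```

For the drive in this thread the override would therefore read Requires=mnt-exos16tb.mount and After=mnt-exos16tb.mount.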

On your system you may need to replace docker with containerd in the commands above.

By the way, do you use -v or the docker-compose analogue,

    volumes:
      - /mnt/exos16tb/storj/storagenode1/identity:/app/identity
      - /mnt/exos16tb/storj/storagenode1/config:/app/config

instead of the --mount option? Because if yes, this may produce this behavior too.
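To illustrate why the flag choice matters (a command-line sketch using the paths from this thread, not meant to be run verbatim): with `--mount type=bind`, Docker refuses to create the container when the host path is missing, while `-v` silently creates the missing host directory, empty and root-owned, so the node could start without its identity files.

```shell
# Sketch only -- image name and other options elided with "...".
# Strict form: fails fast if the source path does not exist yet.
docker run --mount type=bind,source=/mnt/exos16tb/storj/storagenode1/identity,destination=/app/identity ...

# Permissive form: a missing host directory is silently created
# (empty, root-owned) instead of raising an error.
docker run -v /mnt/exos16tb/storj/storagenode1/identity:/app/identity ...
```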


That worked using the override file, thank you. Impressive Linux knowledge, as always, helping everyone here to great lengths. To answer your question: I use the --mount flag in my docker run command line.
