Migrated node after a week of downtime, now it won't start

If you look at one of my earlier posts, you’ll see that config.yaml hasn’t been touched since October. Also, when I look at it, almost everything is commented out or left at defaults. I’m guessing it was created initially by the Docker container and I never touched it.

When I `mv revocations.db revocations.db.old`, it looks like revocations.db is NOT recreated. I’m going to try again after `chmod -R 777`, per mus6677’s suggestion.
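That recreation check can be scripted; here is a minimal sketch against a scratch directory (paths are illustrative — on the real node the file lives in the storage directory, and the node would be restarted between the rename and the check):

```shell
# Sketch: rename a file aside and check whether the service recreated it.
# In this standalone run nothing restarts the node, so the file stays missing.
workdir="$(mktemp -d)"
cd "$workdir"
touch revocations.db
mv revocations.db revocations.db.old
# (restart the node here on the real host)
if [ -e revocations.db ]; then
    echo "recreated"
else
    echo "not recreated"
fi
```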

update: no change

Can you try running docker directly, without compose, and see if it works?

The owner of the mount point is media, but all folders and data inside it have root as the owner. You should either change the owner to media and configure Docker to run without sudo, or use sudo to start the node.
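A quick way to spot that mismatch is to list owners at and just below the mount point. A sketch against a scratch directory standing in for /data (`stat -c` is GNU coreutils; the `media` user and `/data` path are from this thread):

```shell
# Demonstrate the ownership check on a scratch directory standing in for /data.
d="$(mktemp -d)"
mkdir "$d/storage"
# Print "<owner> <path>" for the mount point and everything one level below it.
find "$d" -maxdepth 1 -exec stat -c '%U %n' {} +
# On the real host the equivalent check would be:
#   find /data -maxdepth 1 -exec stat -c '%U %n' {} +
# and the suggested fix, run once as root:
#   chown -R media:media /data
```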

Please also rename or remove the config.yaml file, add the variable SETUP=true to the environment: section of your docker-compose.yaml, and start the node once; it should exit immediately. Then remove this variable and try to start again.
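For reference, a hedged sketch of what that environment: section could look like (the service name and surrounding structure are placeholders, not taken from this thread):

```yaml
services:
  storagenode:
    image: storjlabs/storagenode
    environment:
      - SETUP=true   # remove this line again after the one-off setup run
```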

No change. Still getting the disk I/O error: no such device. The new config.yaml file was created as -rw------- owned by root.

This is strange. Are your old and new disks at the /data mount point just plain hard drives, or something more complex?

The old setup was a LizardFS configuration on a single node, composed of four local chunkservers, one for each disk.

The new setup uses the same four drives, now combined with mergerfs. The filesystem on the drives is XFS.

Please show the result of this command:

$ df -HT
```
~$ df -HT
Filesystem     Type           Size  Used Avail Use% Mounted on
udev           devtmpfs       6.3G     0  6.3G   0% /dev
tmpfs          tmpfs          1.3G  9.2M  1.3G   1% /run
/dev/sda2      btrfs          1.1T  208G  775G  22% /
tmpfs          tmpfs          6.3G  8.5M  6.3G   1% /dev/shm
tmpfs          tmpfs          5.3M  4.1k  5.3M   1% /run/lock
tmpfs          tmpfs          6.3G     0  6.3G   0% /sys/fs/cgroup
/dev/loop1     squashfs        16M   16M     0 100% /snap/ponysay/73
/dev/loop2     squashfs        16M   16M     0 100% /snap/ponysay/79
/dev/loop3     squashfs       103M  103M     0 100% /snap/core/10577
/dev/loop4     squashfs        59M   59M     0 100% /snap/core18/1932
/dev/loop0     squashfs        59M   59M     0 100% /snap/core18/1944
/dev/loop5     squashfs        65M   65M     0 100% /snap/core20/875
/dev/loop6     squashfs        65M   65M     0 100% /snap/core20/904
/dev/sdb       xfs             10T  5.4T  4.7T  54% /mnt/data-2
/dev/sdc       xfs             12T  6.7T  5.4T  56% /mnt/data-3
/dev/sde       xfs             14T  7.9T  6.2T  56% /mnt/data-1
/dev/sdd       xfs             12T  6.6T  5.5T  55% /mnt/data-4
tmpfs          tmpfs          1.3G     0  1.3G   0% /run/user/1000
/dev/loop8     squashfs       132k  132k     0 100% /snap/lolcat-c/1
1:2:3:4        fuse.mergerfs   48T   27T   22T  56% /data
/dev/loop9     squashfs       103M  103M     0 100% /snap/core/10583
```

Try to check your mount (if you use sudo to run docker-compose, then run the following command with sudo too):

```
docker run -it --rm --entrypoint sh --mount type=bind,source=/data,destination=/app/config storjlabs/storagenode
```

When you see a prompt, run:

```
echo "test" > /app/config/storage/test.txt
cat /app/config/storage/test.txt
rm /app/config/storage/test.txt
```
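Outside of docker, the same write/read/delete probe can be sketched against any directory (here a scratch directory stands in for the bind-mounted /app/config):

```shell
CONFIG="$(mktemp -d)"             # stand-in for /app/config inside the container
mkdir -p "$CONFIG/storage"
echo "test" > "$CONFIG/storage/test.txt"
cat "$CONFIG/storage/test.txt"    # prints: test
rm "$CONFIG/storage/test.txt"
```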

Please post the results.

This is simple on mobile too - add a new line with three backticks ``` before the code block and new line with three backticks after the code block. I edited your post to show an example.

```
sudo docker run -it --rm --entrypoint sh --mount type=bind,source=/data,destination=/app/config storjlabs/storagenode
/app # echo "test" > /app/config/storage/test.txt
sh: can't create /app/config/storage/test.txt: nonexistent directory
```

edit: My data is at /data/Storj, not /data; when I fixed that, it worked fine.

Honestly, I do not like it. However, it’s your choice to accept a high disqualification risk.
And I’m afraid SQLite does not like it either.

Please, check all databases with this guide:

FWIW, I was unable to use mergerFS for Storj in a quick test a while back, although I use it regularly for other bulk data.

It’s likely due to the SQLite issue that impacts other apps like Radarr/Sonarr. See this FAQ on the Servarr wiki:

If you are using mergerFS, you need to remove direct_io, as SQLite uses mmap, which isn’t supported by direct_io, as explained in the mergerFS docs here
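As an illustration only (option names per the mergerfs documentation; the paths and branch glob are assumptions based on the `df -HT` output above), an /etc/fstab line without direct_io, using a page-cache mode that supports mmap:

```
# illustrative fstab entry; cache.files=partial enables mmap-compatible caching
/mnt/data-*  /data  fuse.mergerfs  cache.files=partial,dropcacheonclose=true  0 0
```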

Consider moving the dbs and orders directories to a dedicated “app data” location, or just use single-disk filesystems for your Storj node.