DB error after node restart

I’m setting up a new node on a cloud VM. Everything works fine after setup, but if the machine has to be restarted for any reason (updates, etc.), the node can’t come back up and shows error messages about the DB:

...
ERROR failure during run {"Process": "storagenode", "error": "Error opening database on storagenode: stat config/storage/temp: no such file or directory", "errorVerbose": "Error opening database on storagenode: stat config/storage/temp: no such file or directory\n\tmain.cmdRun:67\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:267"}
...

In fact, the temp folder is no longer there after a reboot.

Even if I clear the configuration and run the sudo docker run --rm -e SETUP="true" ... and docker run -d --restart ... commands again, I can’t repair the node; I have to redo the configuration from scratch.

What am I missing?

Hello @TowerBR,
Welcome back!

From the error I would assume that your disk is unmounted after the reboot and you only have an empty mount point left.
Make sure that you set up a static mount for the disk in /etc/fstab: How do I setup static mount via /etc/fstab for Linux? - Storj Docs
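
For example, a minimal /etc/fstab entry could look like this (the UUID, filesystem, and mount point are placeholders; use blkid to find your disk’s UUID):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/storj ext4 defaults 0 2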

Hello @Alexey ,

There is a “small” :grimacing: detail about the setup: on this new node I’m testing the use of S3 storage mounted on the VM using rclone. The VM and the storage are in the same datacenter.

rclone is configured via /etc/systemd/system/rclonemount.service, so the mount is re-established at boot. The /mnt/storj folder is mounted to the S3 storage.
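
For reference, the unit looks roughly like this (the remote name s3remote and the bucket are placeholders):

[Unit]
Description=rclone mount of S3 storage
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount s3remote:bucket /mnt/storj --allow-other
ExecStop=/bin/fusermount -u /mnt/storj
Restart=on-failure

[Install]
WantedBy=multi-user.target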

What I don’t understand is that the /mnt/storj/storage/trash and /mnt/storj/storage/blobs folders and the db files in /mnt/storj/storage are still there; only the temp folder is gone.

Network filesystems are not supported. It will not work, and that’s expected.
Object storages especially - they are not filesystems at all.
However, it may sort of work if you configure your rclone mount with a full vfs cache (see the sketch below). But the cache will use your local storage, i.e. it can grow as large as the whole node. I do not think that is what you want. And you will likely hit other random issues, because it’s not a supported setup.
The only working network storage protocol is iSCSI.
You may try NFS; it may work if the server and the client are on the same host, but maybe not - it’s not supported either way.
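
A rough sketch of such a mount (the remote, mount point, and cache limits are placeholders; I have not tested this setup):

rclone mount s3remote:bucket /mnt/storj \
  --allow-other \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 24h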

Yes, I knew that network filesystems are not supported, but I thought that by using rclone “in the middle of the stack” as a “filesystem” I might find a workaround.

My goal was to have the flexibility to manage storage (size, location, etc.) without having to change the disk of the machine running the node.

For that, people usually use LVM or ZFS. In your exact case iSCSI should work too, because you would provide a virtual disk on your storage server, so you can manage it there.
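
For illustration, a rough LVM sketch (the device /dev/sdb, the names, and the sizes are placeholders):

sudo pvcreate /dev/sdb
sudo vgcreate storj-vg /dev/sdb
sudo lvcreate -L 900G -n storj-lv storj-vg
sudo mkfs.ext4 /dev/storj-vg/storj-lv
# grow later without touching the node's mount point (-r resizes the filesystem too)
sudo lvextend -L +500G -r /dev/storj-vg/storj-lv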

But if you want to use an S3 bucket as storage (why not a Storj bucket, by the way?), you may also try cuno. It’s POSIX compliant, so it may work (never tested it), or use an rclone mount --vfs-cache-mode=full command to mount it. The local cache would then be on the VM with the node.

However, I do not believe this weird combination would make sense in the long term.

Thanks for the suggestions, I’m going to test these other options (LVM, ZFS, cuno); in fact I’m already testing iSCSI.

Because I’m configuring this new cloud node to continue providing storage, not consuming it. If I used Storj itself for this, it would be like a dog chasing its tail, wouldn’t it?

I do not know; the whole idea looks way off to me: you’re using the wrong tool to implement it…
Especially when you will likely pay more for the final solution than you’d get in payouts…
Please check:

This is not supported, as @Alexey stated. But with very careful engineering it can be made to work - look into my old posts for some details. In any case, with rclone, prepare to have a lot of RAM for vfs directory caches, or expect your rclone to be extra slow. Have fun!
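
For example, you can trade RAM for speed with the directory cache flags, something like this (remote, mount point, and values are illustrative):

rclone mount s3remote:bucket /mnt/storj \
  --vfs-cache-mode full \
  --dir-cache-time 168h \
  --attr-timeout 10s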

This reminds me of Bitcoin mining with pencil and paper.

I have some unused space available in cloud storage, so I thought it would be similar to using space on my computer and making it available to a node. This, I believe, is the essence of Storj: making underutilized resources available as storage for other users. I just changed “local” to “cloud” (or at least I’m trying :laughing:). Let’s say I’m setting this up as a “technological curiosity”…