If the connection is lost, will the saved data be lost?

I don’t know what this conversation is about, so I’ll just skip it…

For Linux, bash, docker

First you need to create a RAM disk (the mount point must exist first):

mkdir -p /mnt/ram-dbs
mount -o size=2G -t tmpfs none /mnt/ram-dbs

You may add it to the /etc/fstab as well
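For example, a tmpfs line like this in /etc/fstab would recreate the same RAM disk on boot (size and mount point as above):

```
tmpfs  /mnt/ram-dbs  tmpfs  size=2G  0  0
```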
Then you need this bash script

#!/usr/bin/env bash

rsync -Pa /mnt/storj/storagenode/databases/ /mnt/ram-dbs/
docker run -it ....\
...
--mount type=bind,source=/mnt/ram-dbs,destination=/app/DBs \
--name storagenode \
storjlabs/storagenode:latest \
--storage2.database-dir=/app/DBs

rsync -Pa /mnt/ram-dbs/ /mnt/storj/storagenode/databases/

Please note the absence of the -d option in the docker run command: the container will run in the foreground, so the script can rsync the databases back after the container terminates.

However, it will sync the databases back to the disk only after the container is stopped or deleted.
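If the script is interrupted (e.g. with Ctrl+C) before the final rsync, the copy-back never happens. A more defensive variant (a sketch, not from the post above) uses a bash EXIT trap; the function name and parameterized paths are purely illustrative — in the script above they are the two hard-coded directories:

```shell
#!/usr/bin/env bash
# Sketch: run the node with an EXIT trap so the reverse rsync fires even if
# the script is interrupted, not only after a clean exit.
run_node_with_ram_dbs() {
  local disk_dbs=$1 ram_dbs=$2; shift 2
  rsync -Pa "$disk_dbs"/ "$ram_dbs"/                # copy-in before start
  trap "rsync -Pa '$ram_dbs'/ '$disk_dbs'/" EXIT    # copy-out on any shell exit
  "$@"                                              # e.g. the foreground `docker run ...`
}
```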

come back…??!

The lost capacity was restored at a tremendous speed. I don’t know what caused it, but I’m glad it’s back.

Either the database is unlocked, or the filewalker finished its work and updated the databases.


I don’t know what the Ether database is, but I’m glad it got the job done and didn’t seem to have any problems. :slight_smile:

This will work but it’s fragile and not portable.

Instead, I would leverage an existing supervisord configuration:

  1. Create another image, FROM the original storagenode image, and COPY in the new custom pre-start and post-stop scripts, which would mount the tmpfs and copy in the DBs, and copy out the DBs and unmount the tmpfs, correspondingly (essentially copy-paste from your comment)
  2. Modify program:storagenode.command in the storagenode section of supervisord.conf to run pre-start and then exec /app/storagenode (or patch /app/storagenode itself – maybe even easier).
  3. For post-shut down stage, see eventlistener:processes-exit-eventlistener section in the config file; patch /bin/stop-supervisor and call post-stop there.
  4. Modify the storagenode config to override the databases’ path.
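Step 1 above might look roughly like this (a sketch only; the file names pre-start.sh and post-stop.sh are hypothetical, and the exact paths of the files patched in steps 2–3 depend on the upstream image):

```dockerfile
# Hypothetical Dockerfile extending the upstream image
FROM storjlabs/storagenode:latest

# pre-start.sh: mount tmpfs and copy the DBs in;
# post-stop.sh: copy the DBs out and unmount the tmpfs
COPY pre-start.sh post-stop.sh /bin/
RUN chmod +x /bin/pre-start.sh /bin/post-stop.sh

# Steps 2-3: COPY (or sed-patch) the edited supervisord.conf and
# /bin/stop-supervisor over the originals in the same way.
```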

This approach will work correctly with docker stop, which unapologetically sends SIGTERM to the main process. It will also handle storagenode updates correctly: it will copy the databases back to disk when the storagenode is killed, and the restarted storagenode will continue using the databases.

The disadvantage is that you would need to rebuild the image when the upstream image updates, but that is not going to happen too often.


fixed the typo in the word “Either”.


If it were easy to add pre- and post-scripts by modifying the entrypoint script, I would suggest doing so, but I do not see how it could be done, because the entrypoint is used only to download binaries and then either set up a node or configure and run supervisord.
If that were possible, you could simply replace it with --mount instead of forking the whole thing with a custom Dockerfile.

P.S. On second thought…
If you insert the sync to tmpfs

cp -v /app/dbs/*.db /app/ram-dbs/

before

and the reverse sync

cp -v /app/ram-dbs/*.db /app/dbs/

before

it may work. You need to:

  1. Create a separate folder for the databases on your disk (e.g. /mnt/storj/storagenode/databases) and move the databases there
  2. Modify the entry point script as described above and place it somewhere (e.g. /mnt/storj/entrypoint)
  3. Modify your docker run command like this:
docker run -it -d ...\
...
--mount type=bind,source=/mnt/storj/entrypoint,destination=/entrypoint,readonly \
--mount type=bind,source=/mnt/storj/storagenode/databases,destination=/app/dbs \
--mount type=tmpfs,destination=/app/ram-dbs \
...
--name storagenode \
storjlabs/storagenode:latest \
--storage2.database-dir=/app/ram-dbs

However, this approach will not sync the DBs back to the disk until you remove the container, so in the case of an update they will remain in RAM.
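One way to mitigate that (a sketch, not from the post): wrap the process so that the SIGTERM sent by docker stop is forwarded and the copy-out runs whenever the process exits. The function name and parameterized paths are illustrative only; in the container they would be /app/dbs and /app/ram-dbs around the original entrypoint:

```shell
#!/usr/bin/env bash
# Sketch: signal-aware wrapper for the copy-in/copy-out. The wrapped process is
# started in the background so that this shell can catch SIGTERM, forward it,
# and still run the copy-out afterwards.
run_with_db_copy() {
  local db_dir=$1 ram_dir=$2; shift 2
  cp "$db_dir"/*.db "$ram_dir"/                   # copy-in before start
  "$@" &                                          # start the real process
  local pid=$!
  trap 'kill -TERM "$pid" 2>/dev/null' TERM INT   # forward docker stop's SIGTERM
  wait "$pid"; local status=$?
  trap - TERM INT
  cp "$ram_dir"/*.db "$db_dir"/                   # copy-out after it exits
  return "$status"
}
```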
