I’ve been a SNO (storage node operator) for over a year now, and when I saw the hosted S3 Gateway offering, I decided to move my personal and professional projects to the Storj network as much as possible.
The first server I moved was my personal server hosting email (mailcow), nextcloud, and gitlab runners. The whole setup is just one big docker-compose file with local volumes, so it is easy to start, stop, and migrate to a new server if needed. For this server, I am using restic to create hourly and manual backups. These backups contain all docker volumes and other persistent data. Initially restic could back up data to Storj successfully, but it could not prune old data due to a bug in the Storj S3 gateway. Storj support was very responsive in diagnosing the issue, and after two weeks or so they rolled out a new version of the S3 gateway that resolved it (side note: running restic prune is much faster on the Storj network than on the minio instance I was running before). Below is the snippet of the docker-compose file that automates the whole backup process, if anyone is interested:
restic:
  image: lobaro/restic-backup-docker:latest
  restart: always
  volumes:
    - /opt:/data:ro
  environment:
    - RESTIC_REPOSITORY=s3:https://gateway.us1.storjshare.io/YOUR_BUCKET
    - RESTIC_PASSWORD=PWD_TO_ENCRYPT
    - AWS_ACCESS_KEY_ID=S3_ACCESS_KEY
    - AWS_SECRET_ACCESS_KEY=S3_SECRET_KEY
    - BACKUP_CRON=0 * * * *
    - RESTIC_TAG=personal_server
    - HOSTNAME=server.domain.com
    - RESTIC_FORGET_ARGS=--keep-hourly 24 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 5
    - RESTIC_JOB_ARGS=--host=server.domain.com --exclude=/data/gitlab-ci-cache/** --exclude=/data/docker/overlay2/** --exclude=/data/docker/containers/** --exclude=/data/docker/image/** --exclude=/data/docker/volumes/runner-*-cache-*
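
For a restore, a minimal sketch would look something like the following, run on the same host (the container already carries the repository settings from the environment above; /tmp/restore is just an example target, since /data is mounted read-only):

# list available snapshots in the Storj-backed repository:
docker-compose exec restic restic snapshots
# restore the newest snapshot into a scratch directory inside the container
# (/data is read-only, so the target path here is just an example):
docker-compose exec restic restic restore latest --target /tmp/restore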
After having success with the personal server, I decided to back up one of my side projects (mailsnag.com) to the Storj network. One requirement I had was to make the backup as close to real-time as possible: in case of total server failure, I wanted to recover all data except for the last couple of minutes. I was able to achieve that using wal-g and minio mirror. I am using wal-g to make hourly incremental backups of the postgres database and also to stream WAL files directly to Storj S3. In case of total server failure, I need to download the last hourly backup and replay up to an hour's worth of WAL files to recover the server. As for files, I have minio mirroring them all to a Storj S3 bucket. Again, the whole setup is one docker-compose file, which makes it easy to migrate or restore the server on a new host. Below is the snippet of the docker-compose file that automates the whole backup process, if anyone is interested:
wal-g:
  deploy:
    resources:
      limits:
        cpus: "${DOCKER_WAL_G_CPUS:-0}"
        memory: "${DOCKER_WAL_G_MEMORY:-0}"
  env_file:
    - ".env.wal-g"
  image: "bitnami/wal-g:1"
  user: root
  command: "wal-receive"
  restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
  stop_grace_period: "30s"
  depends_on:
    - "postgres"
  volumes:
    - "postgres:/var/lib/postgresql/data:ro"

mc:
  deploy:
    resources:
      limits:
        cpus: "${DOCKER_MC_CPUS:-0}"
        memory: "${DOCKER_MC_MEMORY:-0}"
  env_file:
    - ".env.mc"
  image: "minio/mc:latest"
  user: root
  command: "mirror --remove --watch /storage ms3/mailsnag/storage"
  restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
  stop_grace_period: "30s"
  volumes:
    - "storage:/storage:ro"
And here is the cron entry for hourly backups:
0 * * * * cd /root/mailsnag/devops && /usr/local/bin/docker-compose -f ./docker-compose.yml exec -T wal-g wal-g backup-push /var/lib/postgresql/data > /var/log/wal-g.log 2>&1
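
In case of a real recovery, the rough procedure would be something like the sketch below (assuming the same env files are in place; note that the wal-g and mc services above mount their volumes read-only, so they would need to be remounted read-write for a restore):

# fetch the most recent hourly base backup into an empty data directory:
docker-compose exec -T wal-g wal-g backup-fetch /var/lib/postgresql/data LATEST
# then let postgres replay the streamed WAL on startup: on PG 12+ create an
# empty recovery.signal file in the data directory and set in postgresql.conf:
#   restore_command = 'wal-g wal-fetch %f %p'
# files come back by running the mirror in reverse:
docker-compose run mc mirror ms3/mailsnag/storage /storage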
If anyone is interested in setting up a similar backup strategy on their own servers, feel free to reach out and I will try to help.