How can I do periodic database backups?

Sorry to revive this thread… The subject also caught my attention. I'm running Linux nodes and was wondering how I can do periodic database backups. Is the database inside docker, or in the identity path? Since it's sqlite, should I copy the file or dump it to a new file for the sake of integrity?
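
To make the question concrete, here is a minimal sketch of what a "dump to a new file" could look like, using SQLite's online backup API rather than a raw file copy (the paths are placeholders, not the node's actual layout):

```python
import sqlite3

# Placeholder paths -- point these at the real database file in your
# storage directory and at wherever the snapshot should land.
SOURCE_DB = "/storagenode/storage/orders.db"
BACKUP_DB = "/backups/orders-backup.db"

def consistent_backup(source_path: str, backup_path: str) -> None:
    """Copy the database with SQLite's online backup API, which respects
    the database's locking, instead of a plain file copy that can capture
    a half-written state while the node is running."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)  # Connection.backup() is available in Python 3.7+
    finally:
        dst.close()
        src.close()

if __name__ == "__main__":
    consistent_backup(SOURCE_DB, BACKUP_DB)
```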

A database backup won't help you, because the backup will always be missing entries, and those missing entries could result in the node getting disqualified.

Yeah, but wouldn't that be better than no db at all? I was thinking of replication using litereplica. Say, for instance, there's a db crash and a few entries are missed; how do you avoid disqualification? After seeing this thread I feel I need a disaster recovery plan…

In the event of a corrupted database, you can follow instructions to fix it here but it would be preferable to avoid actions that could cause database corruption in the first place, such as abrupt shutdowns.
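
The linked instructions aside, a quick first check on a suspect database is SQLite's built-in integrity_check. A minimal sketch, with a placeholder path:

```python
import sqlite3

# Placeholder path -- point this at the database you suspect is damaged.
DB_PATH = "/storagenode/storage/orders.db"

conn = sqlite3.connect(DB_PATH)
# integrity_check scans the whole file; it returns the single row ('ok',)
# when nothing is wrong, otherwise a list of the problems it found.
rows = conn.execute("PRAGMA integrity_check").fetchall()
conn.close()

if rows == [("ok",)]:
    print("database looks intact")
else:
    print("corruption reported:")
    for (message,) in rows:
        print(" ", message)
```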

Could Storj add some backup function for the database? Once or twice a day?
It would help against losing the whole database; losing one day or half a day of data is not as big a loss as losing everything because of a broken database.

If your node is missing any data then it will eventually be disqualified. If any data is lost, the satellite can no longer trust your node.

This is why database backups don’t really make sense. You’d need a backup after each data change to ensure that you always have the most recent data backed up, but without the possibility of the backup becoming corrupt. Streaming WAL logs to another system would work but sqlite does not support this feature (postgres does, mysql query-based replication could also work – but sqlite is the only database storagenode currently supports).
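
To illustrate the gap, here is a minimal sketch of a periodic snapshot loop using SQLite's online backup API (paths and interval are placeholders); whatever the node writes between two passes is exactly what the backup would be missing:

```python
import sqlite3
import time

# Placeholder paths and interval -- adjust to your own layout.
SOURCE_DB = "/storagenode/storage/orders.db"
BACKUP_DB = "/backups/orders-backup.db"
INTERVAL_SECONDS = 3600  # one snapshot per hour

def snapshot(source_path: str, backup_path: str) -> None:
    """One consistent snapshot via SQLite's online backup API."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()

while True:
    snapshot(SOURCE_DB, BACKUP_DB)
    # Everything the node writes between now and the next pass exists only
    # in the live database -- that window is exactly the data a restored
    # snapshot would be missing.
    time.sleep(INTERVAL_SECONDS)
```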

But I think it would at least allow a graceful exit, so you would not lose all the data, and all the money?

This however is a more interesting question!

I guess you did not read my previous comment about how to repair the corrupted database.

That case is on docker, but does it happen with Windows installations as well?

We haven't seen such cases yet.

So would this be possible? http://litereplica.io/sqlite-replication.html I’m not entirely sure where, or if it’s even accessible to change. I figure since the storage is already on RAID, why not have a backup solution for the database as well and eliminate the single point of failure. For example, some datacenter technician blips the blade, power goes out, SQLite gets partial writes on the DB and corrupts other entries. Then you should be able to just use the slave data instead, and have a feasible disaster recovery plan for the db.

There is one case here: Windows 10, with Docker Desktop.

Storagenode enables the sqlite journal, so a power cut should not be sufficient to corrupt the database. (When the database is opened after a power cut, the journal would be replayed and the database restored to a consistent state.)

Of course, this requires that sync() actually push buffered writes to persistent storage. If there is a writeback cache anywhere between the application and the final persistent storage layer that cannot survive a power cut, then pulling the plug can corrupt the data. This is true no matter which database engine is used.
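
If you want to confirm what your own databases are using, here is a minimal sketch of checking the relevant PRAGMAs (the path is a placeholder):

```python
import sqlite3

# Placeholder path -- any of the node's sqlite files will do.
DB_PATH = "/storagenode/storage/orders.db"

conn = sqlite3.connect(DB_PATH)
# journal_mode shows whether a rollback journal or WAL is in use; either
# lets SQLite roll the file back to a consistent state after a power cut,
# provided the fsync/sync() calls actually reached persistent storage.
journal_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
synchronous = conn.execute("PRAGMA synchronous").fetchone()[0]
conn.close()

print(f"journal_mode={journal_mode}, synchronous={synchronous}")
```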


Also, on Windows you should disable the disk write cache if you do not have a managed backup power supply.

A backup will not help, because it would already be outdated the moment it was taken.


A backup would give you the possibility of a graceful exit: you could upload most of the data back and not lose money to a sudden breakdown and disqualification.

It will not give you the possibility of a graceful exit. Your node would most likely already be disqualified by that time, and it's not possible to perform a graceful exit on a disqualified node.

Why not let people upload the data back and exit? It would be much cheaper than rebuilding all of their storage, even in the case of a disqualified node.