I want to take backups of my SQLite databases without shutting down the storage node, but I'm having trouble.
Below is the command I'm trying. The storagenode is running inside a Docker container on a Linux host. I also tried taking a dump. Both result in an output file of about 4 KB even though the source database is over 1 GB. What is the proper way to do this?
I also tried using sqlite3 from within the container, but it seems it's only available through Python 2.7 and not as a standalone binary, even as root.
This is in case something happens to the databases: I have the DBs on a faster disk (NVMe), while the actual data is on an array of protected disks. The only way I lose the data is if my house blows up, but the databases are on a cache drive that is unprotected against failure.
On another thread, someone confirmed that the databases do not affect payout. That said, I believe the databases keep track of file locations and the like in order to deliver the data upon request. The 16 TB of storage I serve might be rendered useless if the databases were corrupted or lost; I'm not 100% sure though, just my assumption.
No. They are primarily used as a cache (for the filewalker and the collector) and as a source of information for the dashboard. The only database whose loss could have some impact on how long your node stores expired pieces is piece_expiration.db: it contains the expiration dates of specific pieces and is used to speed up their timely removal. However, even if that DB is lost or empty, those pieces will still be collected by the garbage collector.
But in general, sure, you may use the sqlite3 binary to back up your data too. You can run it in a separate container, since the backup operation should not lock the database.
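Here is a minimal sketch of that approach, assuming the node's databases live at /mnt/cache/storagenode/storage on the host and that backups go to /mnt/array/db-backups on the protected array; the paths, the Alpine image, and bandwidth.db are just example names from a hypothetical setup, so adjust them to yours:

```sh
# Throwaway Alpine container with the database directory and a backup
# directory mounted. sqlite3's ".backup" command uses SQLite's online
# backup API, so the storagenode can keep running while it copies.
docker run --rm \
  -v /mnt/cache/storagenode/storage:/data \
  -v /mnt/array/db-backups:/backup \
  alpine:3 \
  sh -c 'apk add --no-cache sqlite >/dev/null \
         && sqlite3 /data/bandwidth.db ".backup /backup/bandwidth.db"'
```

The resulting file should be a consistent copy of the database even if it was being written to at the time, which is likely why a plain cp or dump of the live file was giving you a 4 KB result. The same command works for each of the other .db files, or you can loop over /data/*.db inside the container.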