More robust databases

I’ve been using SQLite with HashBackup for 12 years, and it’s been super reliable. There have been one or two problems reported that ended up being hardware related, and I’ve seen a couple myself: a USB drive wrote 512 bytes of zeroes in the middle of a database, which had to be a hardware problem because the drive’s sector size is 4K; and a bit got flipped in a db record on a MacBook Pro. That was bad RAM.

It’s unfortunate that ECC RAM still isn’t standard on all computers these days - another source of problems.

One other gotcha about SQLite: if it has an active journal or WAL, you cannot make a copy of the db file with OS commands. Those two files go together, and with OS commands you can’t copy both atomically. You have to use the .backup command in the sqlite3 CLI or the SQLite online backup API to get a good copy while there is an active journal or WAL.
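
For reference, a minimal C sketch of the online backup API (file names are placeholders and error handling is trimmed; the CLI equivalent is `sqlite3 source.db ".backup copy.db"`):

```c
/* Minimal sketch: take a consistent copy of a live SQLite db via the
   online backup API. "source.db" and "copy.db" are placeholder names. */
#include <sqlite3.h>

int main(void) {
    sqlite3 *src, *dst;
    sqlite3_open("source.db", &src);  /* live db, possibly with journal/WAL */
    sqlite3_open("copy.db", &dst);    /* destination for the snapshot */

    /* init/step/finish folds any pending journal/WAL state into the copy */
    sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
    if (b) {
        sqlite3_backup_step(b, -1);   /* -1 = copy everything in one pass */
        sqlite3_backup_finish(b);
    }
    int rc = sqlite3_errcode(dst);    /* SQLITE_OK means the copy is good */

    sqlite3_close(dst);
    sqlite3_close(src);
    return rc;
}
```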


Thanks for taking this seriously! I’ll open an issue when I have some time - this weekend is pretty full for me. I’ll set a reminder so I won’t forget.

That kind of feels like an infinite regress. Since it’s just stats anyway, I’d say if the backup has issues too, might as well fall back on creating a clean db from scratch at that point. You’d lose some stats, but at least the node would survive.
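
For what it’s worth, a rough C sketch of that fallback chain (file names, schema, and the overall flow are made up for illustration, not the actual storagenode code):

```c
/* Sketch of the fallback chain: live db -> backup copy -> fresh empty db. */
#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

/* 1 if PRAGMA integrity_check reports "ok" for the database at path. */
static int intact(const char *path) {
    sqlite3 *db = NULL;
    sqlite3_stmt *st = NULL;
    int ok = 0;
    if (sqlite3_open(path, &db) == SQLITE_OK &&
        sqlite3_prepare_v2(db, "PRAGMA integrity_check", -1, &st, NULL) == SQLITE_OK &&
        sqlite3_step(st) == SQLITE_ROW)
        ok = strcmp((const char *)sqlite3_column_text(st, 0), "ok") == 0;
    sqlite3_finalize(st);
    sqlite3_close(db);
    return ok;
}

int main(void) {
    const char *live = "stats.db";      /* placeholder name */
    const char *bak  = "stats.db.bak";  /* placeholder name */

    if (intact(live))
        return 0;                       /* live db is fine, nothing to do */

    remove(live);                       /* drop the corrupt file */
    if (intact(bak)) {
        /* backup is good: restore it with the online backup API */
        sqlite3 *src, *dst;
        sqlite3_open(bak, &src);
        sqlite3_open(live, &dst);
        sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
        if (b) { sqlite3_backup_step(b, -1); sqlite3_backup_finish(b); }
        sqlite3_close(dst);
        sqlite3_close(src);
    } else {
        /* backup is bad too: start over with an empty schema; the stats
           are lost but the node survives */
        sqlite3 *db;
        sqlite3_open(live, &db);
        sqlite3_exec(db, "CREATE TABLE stats (k TEXT, v INTEGER)", 0, 0, 0);
        sqlite3_close(db);
    }
    return 0;
}
```

PRAGMA integrity_check is cheap insurance here: it returns a single row reading “ok” when the database is sound, and error descriptions otherwise.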

Opened an issue: Database files get corrupted too often · Issue #4213 · storj/storj · GitHub
