After reboot PC node did not start. File is not a database

It’s not. Your example is just one corrupted block of data, whereas I was talking about losing 0.1, 0.5 or 1.0% of the data; those are apples and pears. To put it otherwise: if you lose 0.1, 0.5 or 1.0% of the data today, the chance it won’t show up in your audit score within a day is {0.999, 0.995, 0.99}^2482 ≈ {8, 4E-4, 1.5E-9}%, and it’s about zero within a week. But that one corrupted piece might go unnoticed forever… (if it isn’t deleted before it gets audited).
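A quick way to sanity-check those figures in Python (the 2482 audits per day is the figure from the post above, taken here as an assumption):

```python
# Chance that losing a fraction f of the pieces goes unnoticed after
# n audits: each single audit hits a still-healthy piece with
# probability (1 - f), and the audits are treated as independent.
audits_per_day = 2482  # assumed daily audit count from the post

for f in (0.001, 0.005, 0.01):
    p_unnoticed = (1 - f) ** audits_per_day
    # roughly 8.3e-2, 4.0e-6 and 1.5e-11 respectively
    print(f"lose {f:.1%}: {p_unnoticed * 100:.1e} % chance of no detection in a day")
```

So even half a percent of lost data is near-certain to show up in the audit score within a single day.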

Since STORJ is designed to be fault-tolerant, taking extra measures on the node purely against data loss is real nonsense. That includes using RAID levels above 0, which is nowhere stated as an obligation or even a recommendation.

How to add an additional drive? - Storj Node Operator Docs

Whoever wrote the SNO handbook, I don’t know. But the advice on Hardware Requirements - STORJ SNO Book even contradicts the official advice not to use RAID5. So if you want to RAID all the data, that’s fine by me; I choose to start some additional storage nodes over time instead.

My appliances, apart from STORJ, are running in their own VMs, and they haven’t suffered any data loss whatsoever. That’s the whole point. Besides, the Home Assistant database is over 3 GiB and writes about 10 GiB a day of TBW to the SSD (which differs a bit from the amount of real data written). That’s an awful lot more than the STORJ databases, which in my case contribute less than 1 GiB a day to the TBW per node (remember: databases and data are on different disks in my setup). And don’t worry about data loss: it’s RAID1 and backed up every day to the openmediavault server (also RAID1, on other drives).

So indeed, different write pressure. Which makes it all the more unbelievable that STORJ manages to fail on me while Home Assistant, for example, doesn’t.

It will never become zero. Take, for example, the chance of both drives failing on the same day, assuming a lifetime of 10 years: that would be 7.5E-6%, assuming the failures are independent. In practice it’s much higher, because these drives are more often than not about the same age, and external influences (a fire, a lightning strike, flooding, war, a nuclear bomb, total destruction of the Earth, whatever) will probably make them fail together. After all, RAID isn’t a backup. For that I’m using Syncthing, syncing my really important files to two other locations (family members living elsewhere). But even then, my chance of losing the data isn’t 0%. It’s small, but never zero…
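The 7.5E-6% follows from treating each drive’s failure day as uniform over a 10-year life. A minimal sketch, under the same (admittedly unrealistic) independence assumption:

```python
# Probability a given drive dies on one particular day, assuming its
# failure day is uniform over a 10-year (3650-day) lifetime.
p_day = 1 / (10 * 365)

# Both drives dying on that same day, treated as independent events:
p_both = p_day ** 2
print(f"{p_both * 100:.2e} %")  # about 7.5e-06 %
```

As the post says, correlated causes (same batch, same age, same power surge) make the real number noticeably worse than this.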

I already cited SQLite’s “How To Corrupt An SQLite Database File” manual some posts before.
But not using Docker is a really good point, especially since I’m running those storage nodes in separate VMs anyway. Is there any manual around on how to do this on Debian/Ubuntu(-derived) Linux? I actually can’t find a recent one online, only this oldie.
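For what it’s worth, the usual Docker-less approach on Debian/Ubuntu is to run the `storagenode` binary under a systemd unit. The sketch below is my own assumption of what such a unit looks like; the paths, user name and config directory are placeholders, not official guidance:

```ini
# /etc/systemd/system/storagenode.service  (hypothetical example)
[Unit]
Description=Storj storage node
After=network-online.target

[Service]
# Assumed install location and config dir; adjust to your setup.
User=storj
ExecStart=/usr/local/bin/storagenode run --config-dir /home/storj/.local/share/storagenode
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now storagenode` would start it at boot, with systemd handling restarts instead of Docker.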

Yup, as long as it only concerns a hobby project, I’m really fine with it. And considering the whole topic, I’m increasingly convinced it’s a STORJ issue. I also find the Plex Media Server database to be over 5 GiB and the Syncthing database over 3.5 GiB (which turns out to be a LevelDB, BTW), both rebooted / power-cycled the same way as the storage nodes. That’s aside from the already mentioned Home Assistant, with its bigger database and higher write pressure.