SQLite is a very good engine, and it already has mechanisms in place to deal with lost writes - I don’t think the SNO code needs to be bloated with “fix my node” routines for things that can be scripted. It also shifts too much blame onto Storj, because then if anything went wrong we would all point the finger at them, which isn’t helpful.
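As an example of how scriptable those built-in mechanisms are, here is a minimal sketch using Python’s stdlib to run SQLite’s own corruption check (the database path is hypothetical - point it at one of your node’s databases):

```python
import sqlite3

def check_integrity(db_path: str) -> bool:
    """Run SQLite's built-in PRAGMA integrity_check.

    Returns True when SQLite reports "ok", i.e. no corruption was found.
    """
    conn = sqlite3.connect(db_path)
    try:
        (result,) = conn.execute("PRAGMA integrity_check").fetchone()
        return result == "ok"
    finally:
        conn.close()

# hypothetical path - substitute one of the node's actual databases
# print(check_integrity("bandwidth.db"))
```

Drop something like this into cron and you have a “fix my node” early-warning routine without touching the node software at all.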
There are things you can do to help - make sure you’re on the correct kernel: the 20.04.3 LTS HWE update will put you on 5.11 (there are lots of disk-related updates in there, though more aimed at USB3), although 5.4 is very stable and has critical fixes backported. If you’re on a Pi 3/Pi 4 then there are some quirks in the 64-bit kernel at the moment that have to be worked around - specifically around kernel memory allocation (it’s good enough for most things, but really suffers on heavy disk I/O or complex data access patterns).
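A quick sanity check for which kernel you’re actually running - this is just a sketch that compares against the 5.11 HWE boundary mentioned above:

```python
import platform
import re

# Report the running kernel so you can confirm whether the HWE kernel
# (5.11 on Ubuntu 20.04.3) or the stock 5.4 LTS kernel is active.
release = platform.release()
print("running kernel:", release)

match = re.match(r"(\d+)\.(\d+)", release)
if match:
    major, minor = int(match.group(1)), int(match.group(2))
    if (major, minor) >= (5, 11):
        print("HWE-era kernel or newer")
    else:
        print("older kernel (5.4 LTS is fine - critical fixes are backported)")
```

The same information comes from `uname -r`; this just makes the comparison explicit.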
Write cache on Linux is what the internet tells you to look at - there are many kernel parameters for controlling the disk subsystem. Also remember that disks now lie, and can’t be relied upon to have 100% written your data to disk - just because you told a disk to write a block doesn’t mean the controller hasn’t re-prioritised it, or that the write was executed but failed to update the platter. Disks do have failures, and without making the disk write, sync, flush and read back every operation we can’t be certain.
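The writeback tunables people usually mean are the `vm.dirty_*` sysctls. As a config sketch only - the values below are illustrative, not a recommendation, and defaults vary by distro:

```
# /etc/sysctl.d/90-writeback.conf - illustrative values only
vm.dirty_background_ratio = 5       # start background writeback at 5% of RAM dirty
vm.dirty_ratio = 10                 # throttle writing processes at 10% dirty
vm.dirty_expire_centisecs = 1500    # dirty pages older than 15s get written out
vm.dirty_writeback_centisecs = 500  # wake the flusher threads every 5s
```

Lower values mean data sits in RAM for less time before hitting the disk, at the cost of more frequent (smaller) writes.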
If you have a single disk, then maybe, as others suggest, write a script to copy the databases xx times a day - if you’re feeling fancy, there are very good projects that enable master/slave replication of SQLite to another host.
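A minimal sketch of such a copy script, using SQLite’s online backup API so the copy is consistent even while the node has the databases open - both directory paths here are hypothetical:

```python
import sqlite3
from pathlib import Path

# Hypothetical locations - point these at the node's database directory
# and wherever the copies should land.
SRC_DIR = Path("storage")
DST_DIR = Path("db-backup")

def backup_databases(src_dir: Path, dst_dir: Path) -> None:
    """Copy every *.db file with SQLite's online backup API,
    which takes a consistent snapshot even while the node is running."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for db in sorted(src_dir.glob("*.db")):
        source = sqlite3.connect(db)
        target = sqlite3.connect(dst_dir / db.name)
        try:
            source.backup(target)
        finally:
            source.close()
            target.close()

if __name__ == "__main__":
    backup_databases(SRC_DIR, DST_DIR)
```

A cron entry can then run it however many times a day you like. The online backup API is the important part - a plain `cp` of a live database can catch it mid-write.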
If you have redundant disks, see if your filesystem supports metadata checksums (this slows down IOPS but protects against bit rot).
Also look into filesystems like ZFS (I know very little about it), or storage clusters like Gluster / Ceph / S2D.
You should be very happy - 30 months with no errors is very good!