In the hopes that this is true, allow me to give a bit more detail on what I have been experiencing for months now (this locking issue was affecting me even when my node was only around 3 TB, rather than the 12+ TB it is at now).
Right now, piece_expiration.db seems to be roughly 12x the size of bandwidth.db; I think the difference was considerably smaller when my node was smaller. That size ratio roughly corresponds to how often it picks one database or the other to lock exclusively. Right now, in about 11 out of 12 restarts, it'll lock piece_expiration.db and ONLY piece_expiration.db, while access to bandwidth.db is completely error-free. But about 1 in 12, it will ONLY lock bandwidth.db, while access to the much larger piece_expiration.db is completely error-free. (I can't really be sure whether it's doing this more or less frequently over time, but if there's a difference it's not obvious.)
I suspect that if I restarted the node a few hundred or thousand times, I'd eventually see it affect the 3rd-largest db, storage_usage.db (which seems to be around 192 KB), while bandwidth.db and piece_expiration.db would be fine.
How that could be the fault of a disk drive just being slow in general, I can't fathom: there's no mechanism I can imagine by which a generally slow disk would be slow only on the 800 MB database on this run, and only on a different 60 MB database on the next run.
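As a minimal sketch (not the node's actual code, just an illustration using Python's stdlib sqlite3 on a throwaway file), "database is locked" errors come from lock contention between connections, which has nothing to do with the raw speed of the disk underneath:

```python
import os
import sqlite3
import tempfile

# Throwaway database file for the demo.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# First connection takes and holds the write lock.
w1 = sqlite3.connect(path)
w1.execute("CREATE TABLE t (x INTEGER)")
w1.execute("BEGIN IMMEDIATE")  # acquire the write lock now
w1.execute("INSERT INTO t VALUES (1)")

# Second connection gives up after 100 ms instead of waiting.
w2 = sqlite3.connect(path, timeout=0.1)
try:
    w2.execute("BEGIN IMMEDIATE")  # blocked by w1's held lock
    locked = False
except sqlite3.OperationalError as e:
    # SQLite reports contention, not slow I/O.
    locked = "locked" in str(e)

print(locked)  # True even on the fastest disk imaginable

w1.rollback()
w1.close()
w2.close()
```

The point of the sketch: the error fires because another writer holds the lock longer than the waiting connection's timeout, so which database produces the error depends on who is holding what lock at that moment, not on disk throughput.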
It could in theory be an OS setting, sure, though I have made very few modifications from the defaults. But I would consider that a software issue, not hardware. There's no way that simply "ur disk is too slow" covers it. A faster disk for the databases might compensate for whatever is causing this bizarre behavior, but that would clearly be a band-aid, not a treatment for the underlying cause.