Sorry, I hit create before I added some more information.
This is a node running under Docker on macOS, with ZFS as the underlying filesystem. This is the result of `ls -l`:

```
-rw------- 1 cap staff 2319872 Jun 28 17:35 storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/of/emv73t66njw4gjopjqmyaq4mibuvqed5gyw4rg5i4twvkhssfa.sj1
```
I have decided that this was the result of some interaction between an old macOS Docker, ZFS, and Storj, possibly influenced by the 800,000 files in the trash directory, or possibly not.
The proximate cause was my attempt to put the database files in a separate directory for better performance. It worked for a while and then failed in different ways. When I put them back, the problems ceased.
So nevermind.
I created a new filesystem. That's trivial to do with ZFS and allows different caching and record sizes. I got bandwidth.db malformed errors and, at one point, a crash of the node with a SIGBUS error: [signal SIGBUS: bus error code=0x2 addr=0x7f3ccbe2a000 pc=0xe33e69] and about 2600 lines of stack trace.
That's when I gave up. No errors since then.
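For anyone curious, a separate ZFS dataset with its own properties is a one-liner. A minimal sketch, assuming a hypothetical pool named `tank` and illustrative mount points; the property values are common suggestions for SQLite-on-ZFS, not anything Storj-specific:

```shell
# Dedicated dataset for the node's SQLite databases: smaller record size
# for small random writes, cache metadata only (values are illustrative).
zfs create -o recordsize=16K -o primarycache=metadata tank/storj-db

# The blobs dataset keeps a large record size for mostly-sequential pieces.
zfs create -o recordsize=1M tank/storj-blobs
```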
I still have the log file if you want some bedtime reading.
No. I edited the config file (storage2.database-dir) and moved the database to the new filesystem.
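For reference, that change amounts to one setting in the node's config file; a sketch, assuming an illustrative mount point `/mnt/storj-db`:

```yaml
# config.yaml: keep the SQLite databases apart from the blobs
# (the path below is illustrative, not my actual mount point)
storage2.database-dir: /mnt/storj-db
```

The node has to be restarted for the setting to take effect, and the existing `.db` files have to be moved to the new directory first.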
The 3+ terabytes of client data was untouched.
As I said, it seemed to work but was unreliable for reasons unknown, so I abandoned the effort and went back to the default.
This is a very interesting case, because the databases are unrelated to blobs, so you must have done something with the data location too; otherwise the problem with the blobs (customers' data) would not have disappeared.
Keep an eye on your logs: a problem with blobs never fixes itself, unless that exact corrupted piece happens to be removed by the garbage collector.