So just to be clear: is it Storj's position that an operator with a larger amount of data (and therefore more frequent audits) who hits a temporary issue preventing blobs from being read (one that can be easily rectified once identified, with no data loss, as in the thread "Disqualified. Could you please help to figure out why?") should be disqualified and forced to start from the beginning if they want to continue with the project? If not, please help me understand what you're trying to convey, because that's how it's coming across to me.
Couldn't the node detect that every read from inside the blobs directory is returning I/O or empty-file errors when it tries to serve data or respond to an audit? In other words, if 100% of reads are failing, why doesn't the node terminate? Why is one arbitrary file in the root of a large directory structure (18.5M files across 6,150 directories) the only safety net? A rough sketch of what I mean is below.
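To be concrete, this is just a sketch of the idea, not actual storagenode code: the `failureMonitor` type, its method names, and the threshold of 10 are all hypothetical. The point is that tracking consecutive blob read failures and halting the node would be cheap compared to failing audits until disqualification:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"sync"
)

// failureMonitor is a hypothetical tracker for consecutive blob read
// failures. The name and threshold are illustrative, not from the
// actual storagenode codebase.
type failureMonitor struct {
	mu          sync.Mutex
	consecutive int
	threshold   int
	shutdown    func()
}

// recordRead resets the counter on a successful read; on an error it
// increments the counter and triggers shutdown once the threshold of
// consecutive failures is reached.
func (m *failureMonitor) recordRead(err error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if err == nil {
		m.consecutive = 0
		return
	}
	m.consecutive++
	if m.consecutive >= m.threshold {
		m.shutdown()
	}
}

func main() {
	mon := &failureMonitor{
		threshold: 10, // e.g. 10 failed reads in a row
		shutdown: func() {
			fmt.Println("blob storage unreadable; stopping node instead of failing audits")
			os.Exit(1)
		},
	}

	// Simulated read loop: every piece read (for downloads or audits)
	// would report its result here.
	for i := 0; i < 20; i++ {
		mon.recordRead(errors.New("input/output error"))
	}
}
```

Something along these lines would catch the "every read fails" scenario regardless of which of the 18.5M files is touched, instead of hinging on one verification file in the root.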