I understand a 10% overhead is expected, but what I'm seeing equates to roughly 50%. To me that suggests either garbage collection is only partially working, or the node is ignoring the allocated storage quota and under-reporting the space used.
I can confirm I see piecestore deletion messages in the node logs.
No previous node; I've had the same ID since I got my alpha invite way back at the start. Thirteen months ago I moved all the data from one physical host to a new one, keeping the same ID. Back then I had the minimum storage allowance set, but I have since increased it to the 2 TB it is set to now.
This situation continues to worsen. I have restarted the storage node, and it still consumes more storage than what has been allocated. It is now at 1.75x the allocation!
           Available   Used     Egress     Ingress
Bandwidth  23.9 TB     1.1 TB   339.5 GB   0.8 TB (since Mar 1)
Disk       5.9 GB      2.0 TB
Internal   127.0.0.1:7778
External   <redacted>:28967
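For context, here is a quick sanity check of the overuse figure. This is my own arithmetic, not dashboard output, and it assumes "1.75x" means on-disk usage relative to the 2 TB allocation:

```python
# Assumption: the 1.75x figure is on-disk usage vs. the configured allocation.
allocated_tb = 2.0      # configured storage limit (from the post above)
overuse_factor = 1.75   # observed overuse

on_disk_tb = allocated_tb * overuse_factor
overhead_pct = (on_disk_tb - allocated_tb) / allocated_tb * 100
print(f"on-disk: {on_disk_tb} TB, overhead: {overhead_pct:.0f}%")
# → on-disk: 3.5 TB, overhead: 75%
```

So the effective overhead has grown from the ~50% I reported earlier to 75% now.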
It's really disappointing that outlier issues like this are simply brushed off without further investigation. At the current rate my node will be full within a month through no fault of my own, since my hard limits are being ignored.
I'd advise lowering your limit to compensate for the data the node is unaware of. It's not ideal, but the alternative is starting over, which would be worse.
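As a sketch of what that lowering could look like: on a typical node the allocation lives in config.yaml (or the STORAGE environment variable for docker setups). The key name and the 1.5 TB figure below are my assumptions, not official guidance; pick a value that leaves headroom for the ~75% of overuse the DB can't see:

```yaml
# config.yaml (hypothetical excerpt) -- advertise less than the physical limit
# so the unaccounted-for data doesn't overflow the disk.
storage.allocated-disk-space: 1.5 TB
```

Restart the node after changing it so the new limit takes effect.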
I guess it's also worth taking @Alexey's point that DB corruption has likely occurred. Is there a procedure, or any instructions, for exporting the DB records for valid blobs/pieces and recovering them into a new DB?
It sounds like I will always have this issue until the DB has been repaired, and I can either:
a) start over,
b) lower the storage limit and get paid less despite consuming more storage, or
c) rescue the database somehow
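For option (c), here is a generic SQLite salvage sketch, since the node databases are standard SQLite files. This is NOT an official storagenode procedure, and the function name and paths are hypothetical; a badly corrupted file can still fail partway through the dump:

```python
import sqlite3

def salvage(src_path: str, dst_path: str) -> None:
    """Replay every recoverable statement from src into a fresh DB,
    skipping any row or statement that errors out."""
    src = sqlite3.connect(src_path)
    # isolation_level=None (autocommit) so iterdump's own
    # BEGIN TRANSACTION / COMMIT statements execute cleanly.
    dst = sqlite3.connect(dst_path, isolation_level=None)
    for stmt in src.iterdump():  # yields CREATE/INSERT SQL statements
        try:
            dst.execute(stmt)
        except sqlite3.Error:
            pass  # drop unrecoverable statements rather than abort
    src.close()
    dst.close()
```

Usage would be something like `salvage("piece_expiration.db", "piece_expiration.new.db")`, then swapping the new file in while the node is stopped. Back up the original first.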
Unfortunately we do not have such a procedure at the moment.
The plan is to remove the dependency on the databases as part of improving the storagenode, but we are not there yet.