I saw the warning; that’s why I wrote here to ask others about the problem. It may or may not be accurate, but if the test data was uploaded with a 30-day TTL, then I think it is at least somewhat accurate.
My node got almost zero ingress in the last two weeks because it was full. So, if the test data was uploaded with a 30-day TTL, then about half of it should have expired by now.
However, the used disk space does not reflect that (the graph shows the actual used space, as reported by df):
From the peak of 48 TB it’s now down to 44 TB (and the node still thinks it’s full because of the other bug, which can be temporarily worked around by restarting the node and letting it run the filewalker).
So, I think the “uncollected garbage” value is more likely correct than not, especially since it keeps growing.
How do you get the value of “amount of data the satellite thinks my node has”? From the database or the API? I could create a graph for it.
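In case it helps anyone else polling their node: here is a minimal sketch of how I would sample the node’s local dashboard API and log one data point per run. I’m assuming the dashboard listens on the default port 14002 and serves JSON at /api/sno/; the "diskSpace"/"used" field names are my guess at the payload shape, so check your own node’s response before relying on them.

```python
# Hedged sketch: poll the storagenode dashboard API and log used space.
# ASSUMPTIONS: endpoint path /api/sno/ and the diskSpace.used JSON keys
# are guesses -- verify against your node's actual response.
import json
import time
import urllib.request


def fetch_sno(base_url: str = "http://localhost:14002") -> dict:
    """Fetch the node's dashboard JSON (assumed endpoint: /api/sno/)."""
    with urllib.request.urlopen(f"{base_url}/api/sno/") as resp:
        return json.load(resp)


def used_space_bytes(sno: dict) -> int:
    """Extract the used-space figure from the dashboard payload.
    The 'diskSpace' -> 'used' keys are an assumption, not confirmed."""
    return int(sno["diskSpace"]["used"])


if __name__ == "__main__":
    # One timestamped CSV line per run; point cron at this to build a graph.
    sample = used_space_bytes(fetch_sno())
    print(f"{int(time.time())},{sample}")
```

Run it from cron every few minutes, append the output to a file, and any plotting tool can graph used space over time alongside the satellite-reported figure.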