Is there any information about how this can happen?
I think it’s just a side effect of smoothing. My guess is that March 2nd is actually 0 and not below 0. That can happen if that day didn’t have any storage usage reports. It doesn’t matter, you’ll still be paid for everything. It’s a quirk of how the graph works.
Thanks, that makes sense to some degree at least, but how can my average (of all things) suddenly be zero when I host terabytes of data?
Data usage is calculated in batches on the satellites. They report back the timeframe and the byte hours used within that timeframe. How often this runs depends on how busy the satellite is, which means a timeframe can sometimes span more than 24 hours. We’ve seen that happen before. The daily average is calculated by dividing the byte hours reported for each timeframe submitted on that day by the hours reported in that timeframe. This gives an accurate average of how much data was stored that day, but if there are no timeframes reported on a particular day, there is no data to display. Showing 0 is perhaps not the best way to handle that; it would be better to have a gap in the graph, but that might be harder to implement. Regardless, the timeframe reported on the next day will include all hours missed on the days before, so even though the graph shows 0, all data storage IS actually accounted for. I hope that makes a little more sense… the workings are a little complex.
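To illustrate, here is a minimal sketch of the averaging described above. This is not the actual storagenode code; the function name and report shape are hypothetical, assuming each satellite report is simply a pair of (byte hours, hours in timeframe):

```python
def daily_average(reports):
    """Average bytes stored for one day.

    `reports` is a list of (byte_hours, hours) tuples for the timeframes
    submitted on that day. If nothing was reported, the dashboard
    currently shows 0 rather than a gap in the graph.
    """
    total_byte_hours = sum(bh for bh, _ in reports)
    total_hours = sum(h for _, h in reports)
    if total_hours == 0:
        return 0  # no timeframes reported that day -> graph dips to 0
    return total_byte_hours / total_hours

# Suppose a node stores a constant 5 TB (5e12 bytes):
day1 = [(5e12 * 24, 24)]  # normal day: one 24-hour timeframe
day2 = []                 # busy satellite: no report submitted that day
day3 = [(5e12 * 48, 48)]  # next report spans 48 hours, covering day 2 too

print(daily_average(day1))  # 5e12 -> 5 TB
print(daily_average(day2))  # 0    -> the dip you see in the graph
print(daily_average(day3))  # 5e12 -> back to 5 TB
```

Note that the total byte hours across all three days (72 hours at 5 TB) are fully reported, so payout is unaffected even though the graph shows a one-day dip to 0.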
It certainly seems like a UI bug.
Error does not appear on v1.73.4…
Seems like a bug on v1.72.5.
That actually was kind of understandable. Got the gist of it. Thank you.
I’m indeed running 1.72.5. But it might also be an individual experience, right?
On that version this graph was bugged to begin with; it showed numbers way too low. I can confirm that I still see it on my nodes on this version as well. Let’s wait for 1.73.4, where this graph is fixed from prior issues. This visual issue might be related to that as well.
I can confirm, on 1.73.4 the disk usage graph looks good. It still doesn’t exactly match the local used space though, but that could be related to the garbage collector, which has not finished its job.
But on this node it is pretty close: