Avg disk space used dropped by 60-70%

My US1 reports have been down every day so far this month. All I’m getting are some incomplete reports that show about 1 megabyte or less.

Yes, and it’s likely to stay like this for a while.
So we need a fix for the node’s dashboard (and perhaps the databases):

I understand the gaps between the reports. I don’t understand why every report I have received this month has been only about 1 megabyte or less for US1. Or did the customers delete everything I was storing for them? I doubt it. Am I the only one seeing this or are other people also getting strange reports for US1?

I’m also experiencing these incorrect figures. Sometimes satellites send incomplete reports, which is what seems to be happening with US1 at the moment. If I’m not mistaken, this happens when a tally stops unexpectedly, or maybe something else, but I doubt it’s indicative of mass deletion of data.

Because the report is incomplete. This satellite cannot finish the tally within 24 hours.

Near-zero average used disk space for the US satellite for the current month. Does that mean we need to wait for a statistics update from the US satellite?


US1 reporting is definitely down for the last 10 days at least.
I have 132 GB on US1 on this node.

I finally received a US1 report that looks ok. Maybe somebody rebooted the report generator? If so, thank you.

After extensive examination of logs and performance reports, and much deliberation… they unplugged it and plugged it back in again.


marvinso1978, is your US1 graph looking better now? My daily tally report is now up to 88% of my actual used space. Not bad.

But the reports from the Saltlake satellite are now missing.

I have three different nodes running on three different types of hardware; all were originally full.

What I am seeing is that the “disk space used this month” has been increasing over the course of the month. Two of the nodes have a massive discrepancy between used and average disk space used, but these two nodes have been full all the time. Yes, some data gets deleted and goes through the trash process, but not enough for the differences seen. We are talking a greater than 50% discrepancy.

The third node was showing the same symptoms. Since this node is local, it was easy for me to add a bunch of extra disk space, and it’s the one that is closest to matching actual disk space used.

All nodes are running V1.109.2.
The ages of the nodes are all different, the oldest being from 2020 and the newest being 6 months old.
The physical disk space used matches the total disk space.
I restarted the nodes and can see filewalkers running through the disk space; I can also see bloom filters being sent down and successfully completed.




I am a little worried about the big boy 39TB node.
I’m assuming your “total disk space” on the dashboard roughly matches the disk space reported by the operating system? If NOT, make sure you can finish the used-space filewalkers for every satellite. That could take days or even weeks if disk performance is maxed out.

And then you’ll want to make sure you’re getting through garbage collection for each satellite; especially from Saltlake these can be lengthy. Look at your logs for “gc”.

Also, if the GCs are still running right now, you may have garbage collection runs stacked up that still need to run. Only one runs at a time now by default.
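
If it helps, here is a rough sketch of how you could tally gc and filewalker activity per satellite from the node log. The log path and the matched substrings are assumptions on my side, adjust them to whatever your setup actually logs.

```python
# Rough sketch: count garbage-collection and filewalker log lines per
# satellite. The log path and the matched substrings are assumptions;
# adjust them to your own storagenode log format.
import re
from collections import Counter

LOG_PATH = "/path/to/storagenode.log"  # hypothetical path

gc_lines = Counter()
fw_lines = Counter()
sat_re = re.compile(r'[Ss]atellite[ "=:]+([A-Za-z0-9]+)')

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        m = sat_re.search(line)
        sat = m.group(1)[:12] if m else "unknown"
        if "gc" in line:
            gc_lines[sat] += 1
        if "filewalker" in line:
            fw_lines[sat] += 1

print("gc lines per satellite:", dict(gc_lines))
print("filewalker lines per satellite:", dict(fw_lines))
```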

That’s, like… the basic list of things to check to ensure things are operating smoothly. As smoothly as possible.

But that big node might be larger than Storj can really support; the website says a max of 24 TB per node. When nodes get large, the bloom filters are not large enough for garbage collection to actually delete all the trash that needs to be trashed. So you end up with the infamous “unpaid garbage” filling up your disk: the satellite isn’t paying you for it, but it’s still sitting in live data, not in the trash folder.
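
To illustrate why large nodes suffer here, a back-of-the-envelope sketch. A false positive means a garbage piece looks like it belongs to the “keep” set in the bloom filter, so GC never moves it to trash; with a fixed filter size, the false-positive rate climbs as the piece count grows. All numbers below are invented for illustration, not Storj’s real parameters.

```python
# Back-of-the-envelope: false-positive rate of a fixed-size bloom filter
# as the number of pieces on a node grows. All numbers are invented for
# illustration only, not Storj's actual filter size or piece counts.

def false_positive_rate(m_bits: float, n_items: float) -> float:
    """Optimal-k bloom filter false-positive rate, roughly 0.6185 ** (m/n)."""
    return 0.6185 ** (m_bits / n_items)

filter_bytes = 2_000_000            # assumed filter size (~2 MB)
m_bits = filter_bytes * 8
avg_piece_mb = 1.0                  # assumed average piece size

for node_tb in (5, 10, 20, 40):
    n_pieces = node_tb * 1_000_000 / avg_piece_mb   # rough piece count
    fp = false_positive_rate(m_bits, n_pieces)
    print(f"{node_tb:>2} TB node: ~{n_pieces / 1e6:.0f}M pieces, "
          f"~{fp:.0%} of garbage pieces survive each GC pass")
```

The bigger the node, the larger the share of garbage that slips through each pass, which matches the “unpaid garbage” effect described above.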


2 posts were merged into an existing topic: When will “Uncollected Garbage” be deleted?


This is not a problem by itself, because we have been told that the reporting is wrong and shouldn’t matter. But my node has been full for 2 months now. My payout is half of what I am expecting. This second screenshot shows that something is probably wrong.

3.41 TBm by the 14th day for a 20 TB node; it should be closer to 10 TBm.
v1.109.2
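
For reference, a rough sanity check of that expectation, assuming the node really stays full at about 20 TB over a roughly 30-day month:

```python
# Rough sanity check: expected TB-months accumulated by day 14 of a ~30-day
# month for a node that stays full at ~20 TB the whole time.
stored_tb = 20          # assumed constant stored data
days_elapsed = 14
days_in_month = 30.4    # average month length

expected_tbm = stored_tb * days_elapsed / days_in_month
print(f"expected ≈ {expected_tbm:.1f} TBm vs. reported 3.41 TBm")
# prints roughly 9.2 TBm, i.e. the "closer to 10 TBm" figure above
```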


Check my post. Lol. It seems that this won’t be fixed.
The payout calculation is incorrect? I recorded some data by myself - Node Operators / troubleshooting - Storj Community Forum (official)


How do you come to this conclusion after reading your linked thread?

The discrepancy has two causes: the missing reports from the satellites in the left graph, and the uncollected garbage in the right graph.


This didn’t just happen yesterday; try searching the forum and you will understand. If they wanted to fix it, they would have fixed it already.


You need to check the past periods, not the estimated payout for the current period. The estimation is based on unreliable reports from the satellites and the unreliable “used space” from the databases (unless they match the OS reports).
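
For what it’s worth, a minimal sketch of how such an estimation roughly works, using placeholder per-unit rates and sample numbers that are my assumptions, not authoritative Storj figures:

```python
# Minimal sketch of a storage-payout estimate from TB-months and egress.
# The rates and sample inputs are placeholder assumptions, not official
# Storj numbers; check the real payout information for your node instead.
storage_rate_per_tbm = 1.50   # assumed $/TB-month stored
egress_rate_per_tb = 2.00     # assumed $/TB of egress

reported_tbm = 3.41           # what the incomplete satellite reports add up to
actual_tbm = 9.2              # what the node really stored (per OS-level usage)
egress_tb = 0.5               # assumed egress for the month

from_reports = reported_tbm * storage_rate_per_tbm + egress_tb * egress_rate_per_tb
from_actual = actual_tbm * storage_rate_per_tbm + egress_tb * egress_rate_per_tb

print(f"estimate from incomplete reports: ${from_reports:.2f}")
print(f"estimate from actual usage:       ${from_actual:.2f}")
```

The gap between the two numbers is why the current-month estimation is misleading while satellite reports are missing; the finalized past periods are the reliable figure.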

We want to fix it, but while the payout is correct, the nice graphs may wait.
However, we are improving the code related to the garbage, trash, and TTL collectors, and we have also implemented a badger cache to solve the problem of slow access to metadata on the disk; these are more important than a nice Average graph or an incorrect estimation at the beginning of the month.
