Disk usage discrepancy?

Just add it, by typing it in.

2 Likes

Here are all the available flags, as of the version given below.

1 Like

Hi Alexey,

I'm not reading the forum as much as I used to. I've seen some new versions on my node since my post here, but the problem still seems to exist for me. The month started with 2.7 TB; now it's at 4.5 TB, while my node is at 8 TB the whole time. Sometimes some data is trashed or TTL expires, but then it doesn't get lower than 6.5 TB.

Are there already improvements for this issue? Or something I could do? Running v1.110.3 now.

Ah, never mind, it seems not to be fixed, reading the previous posts. The payout is also not what is actually used as storage on the node. Last month it was 4.55 TBm for me, while I can remember the whole month didn't go lower than 6 TB.
Even if it did go as low as 4 TB, it's still an average, so it would need to stay below 4.55 TB to get there, and I would have noticed that.

1 Like

Hi guys, I have a node which displays far less used disk space than the actual disk usage. The dashboard shows just 4.05 TB used (out of 5.9 TB allocated), but when I check the real usage with the ncdu command, it shows the blobs directory at 10.6 TiB. The disk is almost full. Why the heck is that? What should I do? Thank you!


So the usual workaround is to restart the node and give it a long time to finish the used-space filewalker for each satellite. It could take days. (You can grep the logs for "used".)
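
For example, with a Docker setup (a sketch; storagenode is an assumed container name, so adjust it to yours):

docker logs storagenode 2>&1 | grep -i "used-space" | tail -n 20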

A much lower probability: has your node been around for a long time, over a year? You could check whether you have data from expired satellites, or maybe some junk in a 'temp' folder which shouldn't exist any more.

1 Like

Sorry to chime in here again, but I've also been facing some issues that I didn't realize.
I've had this node running for years.
Allocated 25 TB, running v1.110.3 on a Synology DS1019+ with 5x 12 TB drives and a 1 TB SSD cache, using SHR-2 (36.6 TB allocatable).

I had a power outage as the power adapter broke, so it was off for a day; I just replaced it and fired it up. It ran again automatically, but what I saw is this:

So only 19.9 TB is used, but the STORJ node thinks it is almost full.

Same through the console:
/dev/mapper/cachedev_0 37T 20T 17T 55% /volume1

And from earnings.py:

Password: 
August 2024 (Version: 14.1.0)						[snapshot: 2024-08-21 19:48:08Z]
REPORTED BY	TYPE	  METRIC		PRICE			  DISK	BANDWIDTH	 PAYOUT
Node		Ingress	  Upload		-not paid-			  2.42 TB
Node		Ingress	  Upload Repair		-not paid-			 76.87 GB
Node		Egress	  Download		$  2.00 / TB (avg)		189.15 GB	$  0.38
Node		Egress	  Download Repair	$  2.00 / TB (avg)		485.23 GB	$  0.97
Node		Egress	  Download Audit	$  2.00 / TB (avg)		 82.96 MB	$  0.00
Node		Storage	  Disk Current Total	-not paid-	      24.55 TB
Node		Storage	             ├ Blobs	-not paid-	      24.52 TB
Node		Storage	             └ Trash  ┐	-not paid-	      21.78 GB
Node+Sat. Calc.	Storage	  Uncollected Garbage ─	-not paid-	      14.96 TB
Node+Sat. Calc.	Storage	  Total Unpaid Data <─┘	-not paid-	      14.98 TB
Satellite	Storage	  Disk Last Report	-not paid-	       9.57 TB
Satellite	Storage	  Disk Average So Far	-not paid-	       9.43 TB
Satellite	Storage	  Disk Usage Month	$  1.49 / TBm (avg)    6.79 TBm			$ 10.12
________________________________________________________________________________________________________+
Total								       6.79 TBm	  3.17 TB	$ 11.47
Estimated total by end of month					       9.43 TBm	  4.72 TB	$ 16.52

Payout and held amount by satellite:
┌────────────────────────────────┬─────────────┬──────────────────────────┬─────────────────────────────────────────────────────────────┐
│ SATELLITE                      │ HELD AMOUNT │        REPUTATION        │                       PAYOUT THIS MONTH                     │
│              Joined     Month  │      Total  │    Disq    Susp    Down  │    Storage      Egress  Repair/Aud        Held      Payout  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ ap1.storj.io:7777 (WARN: Downtime high)      │                          │  $  1.49/TBm $  2.00/TB  $  2.00/TB        0%        100%   │
│              2019-12-22    57  │   $   2.20  │   0.00%   0.00%   5.90%  │  $  0.2759   $  0.0123   $  0.0047  -$  0.0000   $  0.2929  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ eu1.storj.io:7777 (WARN: Downtime high)      │                          │  $  1.49/TBm $  2.00/TB  $  2.00/TB        0%        100%   │
│              2019-12-22    57  │   $   1.49  │   0.00%   0.00%   5.02%  │  $  1.7622   $  0.0717   $  0.1399  -$  0.0000   $  1.9738  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ saltlake.tardigrade.io:7777 (WARN: Downtime high)                       │  $  1.49/TBm $  2.00/TB  $  2.00/TB        0%        100%   │
│              2020-02-11    55  │   $  12.43  │   0.00%   0.00%   5.31%  │  $  1.7274   $  0.0002   $  0.0066  -$  0.0000   $  1.7343  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ us1.storj.io:7777 (WARN: Downtime high)      │                          │  $  1.49/TBm $  2.00/TB  $  2.00/TB        0%        100%   │
│              2019-12-22    57  │   $   1.82  │   0.00%   0.00%   6.19%  │  $  6.3531   $  0.2942   $  0.8194  -$  0.0000   $  7.4666  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤ +
│ TOTAL                          │   $  17.94  │                          │  $ 10.1186   $  0.3783   $  0.9706  -$  0.0000   $ 11.4675  │
│ ESTIMATED END OF MONTH TOTAL   │   $  17.94  │                          │  $ 14.5145   $  0.5631   $  1.4449  -$  0.0000   $ 16.5225  │
└────────────────────────────────┴─────────────┴──────────────────────────┴─────────────────────────────────────────────────────────────┘

So if you look at the trend from Active Insight, it was full (I have about 5 TB of my own stuff, so 30 TB means full given the 25 TB I allocated), but now it is getting smaller and smaller, yet somehow the node thinks it is still full.

The current behaviour is that it downloads from time to time until 'full', until the next GC runs.

It was offline for almost 2 days because of the power outage, and I was on holiday.

What to do?!

I think the general vibe I'm getting from this whole thread is: just let it cook. It should sort itself out, eventually. I suspect a larger node like that may take a little longer to catch up, given it was offline for 2 days?

3 Likes

The average disk space used this month is still incorrect, and it will be until the feature to ignore gaps in reporting is implemented.

Until then, you may use only the latest fully reported used space from the satellites for comparison. You need to make sure that your node has received reports from all satellites and that they are complete (they should not differ too much from the previous and the next report, and they shouldn't be zero).
You also need to have correct numbers on the pie chart on the dashboard; they should match the used space reported by the OS (use SI units, base 10, to compare). If they do not match, you need to enable the scan on startup if you have disabled it (it's enabled by default), save the config and restart the node. After the used-space-filewalker has successfully calculated the used space for each trusted satellite and updated the databases, the numbers on the pie chart should be correct. This scan may take weeks (depending on the response time of the disk and its load).
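As a quick sanity check on units (ncdu and similar tools report TiB, base 2, while the dashboard uses TB, base 10), a conversion sketch:

echo "10.6 * 1024^4 / 10^12" | bc -l    # 10.6 TiB is about 11.65 TB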
You also need to remove data of untrusted satellites: How To Forget Untrusted Satellites.
Then you may compare the difference between the last used space reported by the satellites and the used space reported by the node.

Hi Alexey,

Thanks for your reply. Just to check: I have already enabled the following:
pieces.enable-lazy-filewalker: true
storage2.piece-scan-on-startup: true

Is there maybe other stuff I could enable to improve things? The node itself seems to be operating fine, other than the out-of-sync stuff.

I will still compare the pie chart with the actual used space.

Both of these values are true by default, so these parameters are redundant; you may comment them out in that case.

If you do have filewalker errors or database errors in your logs, then you need to disable the lazy mode (the first parameter should be false) and probably enable the badger cache (see Badger cache: are we ready?), save the config and restart the node. The first run will still be long, but every next one will be faster.
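
A minimal config.yaml sketch for that case (the badger cache option name is taken from the linked thread, so verify it against your node version):

pieces.enable-lazy-filewalker: false
pieces.file-stat-cache: badger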

A post was merged into an existing topic: When will "Uncollected Garbage" be deleted?

This seems to be a different problem. I would suggest discussing it in a different topic (probably @Alexey can fork it from here).

In general, there can be multiple reasons for not getting new data:

  1. old SN version (use the official docker image + the included updater)
  2. not enough free space on the disk
  3. not enough free allocated space
  4. wrong free space is reported to the satellite (make sure that the used-space walker has executed; with the default settings it runs during startup, so restart the node; see the commands after this list)
  5. disqualified / suspended
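
A quick way to check points 1, 2 and 4 from a shell (a sketch; it assumes a Docker setup with a container named storagenode and the data volume on /volume1, so adjust names and paths):

docker ps --filter name=storagenode    # confirm the container is running the official image
df -H /volume1                         # free space on the storage volume, in SI units
docker restart storagenode             # with default settings this re-runs the startup used-space scan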
1 Like

This means that you need to check that the used-space-filewalker doesn't show errors after the restart (please search the logs for error and filewalker) and that it finished for all trusted satellites; you may check the progress:
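
For example (a sketch for a Docker setup; the exact log wording differs between versions, so adapt the pattern):

# expect a start/finish pair per trusted satellite
docker logs storagenode 2>&1 | grep "used-space-filewalker" | grep -iE "started|completed|finished"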

I have 10.6 TiB in the blobs folder. I tried to restart the node and let it run for a week; it didn't help. :frowning:

Then please search for error and filewalker, and for error and database.
Either of those will lead to the databases not being updated, so the usage in the pie chart on the dashboard will be incorrect.
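
For example (a sketch for a Docker setup; adjust to wherever your logs go):

docker logs storagenode 2>&1 | grep -i error | grep -iE "filewalker|database" | tail -n 20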

The problem is that I think the pie chart is correct: 10.6 TiB is impossible for that node, so there's something in the blobs directory that should have been deleted but still lies there. But I don't know how to check it :frowning:

For how long have you been running that node? Could it also be data from old decommissioned satellites?

More than a year; 13 or 14 months.

Then you may have data from decommissioned satellites:
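
If so, the procedure from the How To Forget Untrusted Satellites guide linked earlier removes that data; roughly (a sketch for a Docker setup; the flags follow the guide, so verify them against your version):

docker exec -it storagenode ./storagenode forget-satellite --all-untrusted --config-dir config --identity-dir identity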