Avg disk space used dropped by 60-70%

Morning

Gentle nudge to Storj or their representatives - what is the plan, please? No ingress for me, as I have had a full node for days now where 50% of the space is uncollected garbage or trash.

Many similar threads never arrive at a resolution.

So pretty please, asking for help or updates here - what is the plan to resolve this?

CC


The issue with the missing reports from US1 (June 6-8) and SLC (June 10) is still present, so likely there is no uncollected garbage, unless you also have errors in the logs related to the garbage collection filewalker.
Do you also have an issue with a discrepancy between the piechart on the dashboard and what’s actually used on the disk? If the answer is “yes”, then please check:

@Alexey Volume usage appears to match the dashboard, there are no errors in the logs, and the filewalkers all complete successfully.

However, 50% of my disk is garbage, and I have had no ingress for ages now.

What do you want me to do?

CC

ok then.

Based on the broken reporting from the US1 and SLC satellites? I think you may want to reconsider this.

Nothing. Seriously. The reporting will be fixed eventually… but yes, the estimation is wrong. However, your node will be paid according to what is submitted to the satellites as signed orders, cryptographically confirmed by three parties: the uplink, the satellite, and the node.
I’m sorry that the estimation is wrong right now.

@Alexey I understand the wrong estimation - but it’s preventing the remaining 4 TB of the disk from being used, and while you claim the payout will be correct, that’s not really true, as I am being paid on only 50% of the allocated space. That’s the issue - I cannot take any new ingress; bandwidth has been just bytes for days.

I think this needs a fix as a priority - what would that fix be, issuing the missing bloom filters?

Thanks
CC


This particular issue with the satellite reporting does not prevent your node from being used.

However, if your node also has a discrepancy between the piechart and the OS reporting, then I can help you fix that issue in your setup. Only the issue with not-updated databases could prevent your node from being selected, if the node believes that it doesn’t have any free space.
So, does your node have free space within the allocation according to the dashboard and/or the OS?
Do you have errors related to any filewalker in your logs (see the log-scan sketch below)?
Do you have errors related to databases in your logs?
Has the garbage collector finished its work for all trusted satellites?
Did you remove the data of the untrusted satellites?
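For the two log questions, a minimal Python sketch like the one below can tally filewalker- and database-related ERROR lines. The log path and keywords are assumptions, not part of any official tooling; adjust them to your setup, since the exact log wording varies between node versions.

```python
# A minimal log-scan sketch (assumptions: plain-text storagenode log; adjust
# LOG_PATH and the keywords to your setup, since wording varies by version).
from collections import Counter
from pathlib import Path

LOG_PATH = Path("/mnt/storj/node.log")  # hypothetical path, change to yours

counts = Counter()
with LOG_PATH.open(errors="replace") as log:
    for line in log:
        if "ERROR" not in line:
            continue
        lower = line.lower()
        if "walk" in lower:          # catches filewalker / lazyfilewalker lines
            counts["filewalker errors"] += 1
        if "database" in lower or ".db" in lower:
            counts["database errors"] += 1

for category, n in counts.items():
    print(f"{category}: {n}")
if not counts:
    print("no filewalker/database ERROR lines found")
```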

It took a long time, but things finally started to stabilize for my node. What happened:

  1. The used space filewalker ran for all satellites and the locally used space graph got fixed. “The node reports 596GB (yes, in SI) but the OS reports 614GB” – this is still the case, but it looks like it’s due to filesystem overhead caused by cluster-based space allocation: more space gets allocated on the disk than the files actually use. This is normal (see the rough estimate after this list).
  2. The node finally got a bloom filter from us1. I don’t know why it took so long. A week at least. Consequently all trash collection completed for all satellites.
  3. Trash collected a week ago started to get removed.
  4. I’ll have to wait for another 5-6 days until all the trash finally gets removed.
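As a rough illustration of that overhead, the sketch below assumes a 4 KiB allocation unit and a hypothetical count of about nine million piece files; with each file wasting roughly half a cluster on average, it lands in the same ballpark as the ~18 GB gap mentioned in point 1. The numbers are assumptions, not measurements from this node.

```python
# Back-of-the-envelope estimate of cluster-allocation overhead.
# The piece count and cluster size below are assumptions, not measurements.
def cluster_overhead_bytes(file_count: int, cluster_size: int = 4096) -> float:
    # On average, each file wastes about half a cluster in its last block.
    return file_count * cluster_size / 2

pieces = 9_000_000  # hypothetical number of stored piece files
overhead = cluster_overhead_bytes(pieces)
print(f"expected overhead: ~{overhead / 1e9:.1f} GB")  # ~18.4 GB for these inputs
```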

It means half of my disk has been blocked by trash and couldn’t be used for about 2-3 weeks.

If you monitor your logs and make sure all filewalkers run, in a few weeks it’ll probably get back to normal.


My problematic node is now on day 5 of its piece.scan. Unsurprisingly, Saltlake is what’s taking the longest (day 4).
I hope there won’t be a bloody upgrade to 1.106.xx in the next couple days that makes it restart before the scan is complete!!!

this is expected

We have a small problem with a backup of the US1 DB. The BF is generated from a backup so as not to affect current customers’ traffic, so it will always be a bit behind. But it’s safer in exchange.

Hm, it will likely be upgraded… See https://version.storj.io, the rollout cursor is coming…

Uuuuugh!
I just need it to be 3 or so more days! Can’t imagine the filewalker will take much longer…

I have had a BF running for 8 days now. I figured I can eyeball the progress by looking at the trash folder for that particular satellite. The trash folder for a satellite contains a folder named with the date on which the BF was received. Inside it there are currently 920 folders, out of a maximum of 1024, so I can guess the progress to be roughly 90%. Not an exact science, but an educated guess.
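For anyone who wants to automate that eyeball check, here is a minimal sketch; the path argument is only an example and should point at trash/<satellite-id>/<date> on your own node.

```python
# Estimate bloom-filter progress by counting the two-character prefix folders
# inside a dated trash folder (maximum 1024 = 32 x 32 prefixes).
# Usage (the path is an example; point it at trash/<satellite-id>/<date>):
#   python trash_progress.py /storage/trash/<satellite-id>/2024-06-20
import sys
from pathlib import Path

MAX_PREFIX_FOLDERS = 1024

trash_date_dir = Path(sys.argv[1])
done = sum(1 for p in trash_date_dir.iterdir() if p.is_dir())
print(f"{done}/{MAX_PREFIX_FOLDERS} prefix folders -> roughly {done / MAX_PREFIX_FOLDERS:.0%}")
```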

Regarding the update: on Windows I temporarily stopped the Storj updater service and paused Windows Update for the next 7 days. That should ensure the BF finishes in about 24 hours, for a total run of about 9.5 days.

Maybe this information might help you.

EDIT: I disabled the lazy filewalker before the start. The slow performance is - I think - due to all these 4 KB files.


It can, if it is the lazy one. The name exists for a reason…

Still not fixed? Is this going to become a permanent issue?

My graph still has gaps, so it seems they have not been filled in.
I do not think it will be a permanent issue:


My Average Disk Space graph reached over 4 TB on 24.06.24,

but after a 3-4 hour network disconnection the Used Disk Space reset to 1 TB.

I have seen this happen 3-4 times.

My disk already has over 6 TB used, but the dashboard now shows only 1.15 TB.

Is this a problem?

(I already did a hardware reset, which is the reason this happened 3-4 times.)

Hello @COSIA,
Welcome back!

The Avg… graph is way off, and it’s a known issue; see the discussion above. It’s not a problem on your node.
However, if you also have a discrepancy between the OS and the piechart on your dashboard, you may check out this thread:

Check the log file size. The dashboard doesn’t account for that. It can become quite large if you use log level info.

The log file was over 16 GB, so I deleted it.

It might be something else there, like something in the Recycle Bin, a swap file, hidden files, etc.
Also run a pragma integrity check and vacuum the databases after stopping the node (a sketch follows below).
Also try moving the databases to another drive and letting the startup piece scan filewalker finish its run.
If all of these check out, I can’t imagine what could be eating up the space other than malware.
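A minimal sketch of that database check, assuming the node is stopped, the *.db files sit in one folder, and you have a backup copy first; DB_DIR is an assumption and should be adjusted to your setup.

```python
# A sketch of the database check: run PRAGMA integrity_check and VACUUM on
# each *.db file while the node is stopped. DB_DIR is an assumption; point it
# at your storage location and keep a backup copy of the databases first.
import sqlite3
from pathlib import Path

DB_DIR = Path("/mnt/storj/storage")  # hypothetical location of the *.db files

for db_path in sorted(DB_DIR.glob("*.db")):
    conn = sqlite3.connect(db_path)
    try:
        result = conn.execute("PRAGMA integrity_check;").fetchone()[0]
        print(f"{db_path.name}: {result}")
        if result == "ok":
            conn.execute("VACUUM;")  # reclaims free pages, can take a while
    finally:
        conn.close()
```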