Gentle nudge to StorJ or their representatives - what is the plan, please? No ingress for me, as my node has been full for days now with 50% of it being uncollected garbage or trash.
Many similar threads never arrive at a resolution.
So pretty please, asking for help or updates here - what is the plan to resolve this?
The issue with the missing reports from US1 (June 6-8) and SLC (June 10) is still here, so there is likely no uncollected garbage, unless you also have errors in your logs related to the garbage collection filewalker.
Do you also have an issue with a discrepancy between the piechart on the dashboard and what's actually used on the disk? If the answer is "yes", then please check:
based on broken reporting from the US1 and SLC satellites? I think you may want to reconsider this.
Nothing. Seriously. The reporting will be fixed eventually… but yes, the estimation is wrong. However, your node will be paid according to what is submitted to the satellites in signed orders (cryptographically confirmed by three parties: the uplink, the satellite, and the node).
I'm sorry that the estimation is wrong right now.
@Alexey I understand the wrong estimation - but it's preventing the remaining 4 TB of disk from being used, and while you claim the payout will be correct, that's not really true, as I am being paid on only 50% of the allocated space. That's the issue - I cannot take any new ingress; bandwidth has been just bytes for days.
I think this needs to be fixed as a priority - what would that fix be? Issuing the missing bloom filters?
This particular issue with satellite reporting does not prevent your node from being used.
However, if your node also has a discrepancy between the piechart and what the OS reports, then I can help fix that issue in your setup. Only the issue with outdated databases could prevent your node from being selected, if the node believes it has no free space.
So, does your node have free space within its allocation according to the dashboard and/or the OS?
Do you have errors related to any filewalker in your logs?
Do you have errors related to databases in your logs? (A quick log-scanning sketch follows these questions.)
Has the garbage collector finished its work for all trusted satellites?
Did you remove the data of the untrusted satellites?
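If it helps, here is a minimal sketch (not from the original reply) of how you could scan a node log for filewalker- and database-related errors. The log path and keywords are assumptions, so adjust them to your own setup:

```python
# Minimal sketch: scan a storagenode log for filewalker/database errors.
# The log path and the keywords are assumptions; adjust them to your setup.
from pathlib import Path

LOG_FILE = Path("/mnt/storagenode/node.log")  # assumed log location
KEYWORDS = ("filewalker", "database", "garbage")

hits = 0
with LOG_FILE.open(errors="replace") as log:
    for line in log:
        lowered = line.lower()
        if "error" in lowered and any(key in lowered for key in KEYWORDS):
            hits += 1
            print(line.rstrip())

print(f"{hits} suspicious line(s) found")
```

Any matches would give a hint about which filewalker or database the node is struggling with before it can report its free space correctly.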
It took a long time, but things finally started to stabilize for my node. What happened:
The used space filewalker ran for all satellites and the locally used space graph got fixed. "The node reports 596 GB (yes, in SI) but the OS reports 614 GB" - this is still the case, but it looks like it's due to filesystem overhead caused by cluster-based space allocation: more space gets allocated on the disk than the actual files use. This is normal (a rough estimate of that overhead is sketched after this list).
The node finally got a bloom filter from us1. I don't know why it took so long - a week at least. Consequently, trash collection completed for all satellites.
Trash collected a week ago started to get removed.
I'll have to wait another 5-6 days until all the trash finally gets removed.
It means half of my disk has been blocked by trash and couldn't be used for about 2-3 weeks.
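To make the overhead point concrete, here is a rough back-of-the-envelope sketch. The cluster size and average piece size are my own illustrative assumptions, not measurements from the node; the idea is just that each file wastes about half a cluster on average, and millions of small pieces add up:

```python
# Back-of-the-envelope sketch (assumed numbers) of cluster-allocation overhead.
CLUSTER = 4096           # assumed 4 KiB filesystem cluster size
LOGICAL_TOTAL = 596e9    # logical bytes the node reports (from the post)
AVG_PIECE = 68_000       # assumed average piece size in bytes

pieces = LOGICAL_TOTAL / AVG_PIECE
slack = pieces * CLUSTER / 2   # each file wastes roughly half a cluster on average
on_disk = LOGICAL_TOTAL + slack

print(f"approx. {pieces:,.0f} pieces")
print(f"estimated on-disk usage: {on_disk / 1e9:.0f} GB "
      f"(+{slack / LOGICAL_TOTAL:.1%} overhead)")
```

With these assumed numbers the estimate lands around 614 GB, i.e. roughly 3% overhead, which is in the same ballpark as the dashboard-vs-OS difference above.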
If you monitor your logs and make sure all filewalkers run, in a few weeks it'll probably get back to normal.
My problematic node is now on day 5 of its piece.scan. Unsurprisingly, Saltlake is what's taking the longest (day 4).
I hope there won't be a bloody upgrade to 1.106.xx in the next couple of days that makes it restart before the scan is complete!!!
We have a small problem with a backup of the US1 DB. The BF generation runs on a backup so as not to affect current customers' operations, so it will always lag a little behind. But it's safer in exchange.
I have had a BF running for 8 days now. I figured I could eyeball the progress by looking at the trash folder for that particular satellite: it contains a folder named with the date the BF was downloaded. Inside that folder there are currently 920 subfolders out of a maximum of 1024, so I can estimate the progress at roughly 90%. Not an exact science, but an educated guess.
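For anyone who wants to script that estimate, here is a small sketch of the same counting trick. The satellite folder and date below are placeholders, not real paths; point them at your own node's trash directory:

```python
# Sketch of the "count trash prefix folders" progress estimate described above.
# The satellite folder and date are placeholders; point them at your own node.
from pathlib import Path

trash_date_dir = (
    Path("/mnt/storagenode/storage/trash") / "<satellite-id>" / "<yyyy-mm-dd>"
)
MAX_PREFIXES = 1024  # pieces are grouped into up to 1024 two-character prefix folders

done = sum(1 for entry in trash_date_dir.iterdir() if entry.is_dir())
print(f"{done} of {MAX_PREFIXES} prefix folders -> roughly {done / MAX_PREFIXES:.0%}")
```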
Regarding the update: on Windows I temporarily stopped the Storj updater service and paused Windows Update for the next 7 days. That should ensure the BF finishes in about 24 hours, for roughly 9.5 days in total.
Maybe this information will help you.
EDIT: I disabled the lazy filewalker before the start. The slow performance is, I think, due to all these 4 KB files.
The Avg… is way off; it's a known issue, see the discussion above. It's not a problem on your node.
However, if you also have a discrepancy between the OS and the piechart on your dashboard, you may check out this thread:
It might be something else there, like something in the Recycle Bin, a swap file, hidden files, etc.
Also run a PRAGMA integrity check and vacuum the databases after stopping the node (a sketch follows these suggestions).
Also try moving the databases to another drive and let the startup piece scan filewalker finish its run.
If all of these check out, I can't imagine what else could eat up the space other than malware.
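For reference, here is a minimal sketch of the integrity-check-and-vacuum step using Python's built-in sqlite3 module. Stop the node first; the database directory below is an assumption for this example:

```python
# Sketch: run PRAGMA integrity_check and VACUUM on the storagenode databases.
# Stop the node first. The database directory is an assumed example path.
import sqlite3
from pathlib import Path

DB_DIR = Path("/mnt/storagenode/storage")  # assumed location of the *.db files

for db_path in sorted(DB_DIR.glob("*.db")):
    conn = sqlite3.connect(str(db_path))
    try:
        result = conn.execute("PRAGMA integrity_check;").fetchone()[0]
        print(f"{db_path.name}: {result}")
        if result == "ok":
            conn.execute("VACUUM;")  # reclaim free pages left by deleted rows
    finally:
        conn.close()
```

Any database that does not report "ok" would need to be recreated or restored before the node's dashboard can be trusted again.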