Maybe? I very much doubt it, honestly…
Hm, I believe there are ways to improve…
And Storj has no incentive to change that, as it costs them nothing to use our storage.
This is not the reason. The reason is that there are not enough resources to cover everything… sorry!
It will be fixed eventually. But no ETA.
P.S. And the Community is not willing to help, beyond posting known issues…
You still can’t predict future customer behaviour from the past.
I highly doubt that. You didn’t show the output of du or df. And if I see it correctly, you started that node just a few days ago. You won’t get rich quick anyway.
True. But it gives me at least something to take into account compared to nothing.
I think that’s still better.
39 posts were split to a new topic: The trash is unpaid?
My du and df will pretty much match these figures:
I started the node 10 days ago and am assessing the system and its potential. Based on my assessment, I will never get rich from this. So my feedback is about improving the system overall, not about getting rich from it quickly.
So would last month’s payout.
In addition to that.
I’m with Alexey. I get a payout the first week of every month and spend 10 seconds deciding “Do I want to keep running Storj or not?”. And if the answer is “yes”… I ignore my setup for another month, unless I get a “Your Node had Gone Offline” email.
I can’t imagine the node UI showing me anything more meaningful than those monthly coins hitting my wallet - as far as continuing with Storj is concerned.
That’s great because you are not forced to make use of that information.
And I don’t want to convince you. I use it a lot and it is very useful for me.
So, is there any way to troubleshoot this? The other satellites went through this phase fine, but not us1. In the spirit of “improve it yourself”, can I trigger that process? Or is it under the satellite’s control, and when will it trigger it?
GC needs a bloom filter to work with. It’s a compact representation of which pieces to keep; pieces that don’t match it are considered garbage and moved to trash.
Since the satellite isn’t giving you a bloom filter to work with, there is nothing to trigger. The node can’t guess on its own which pieces to keep.
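To illustrate the idea, here is a toy bloom-filter sketch in Python (hypothetical and simplified, not Storj’s actual Go implementation; the piece IDs are made up). The satellite builds the filter from the piece IDs it still tracks, and the node trashes anything the filter definitely does not contain. Bloom filters can yield false positives (some garbage survives until a later run) but never false negatives (live data is never trashed):

```python
import hashlib

class BloomFilter:
    """Toy bloom filter backed by one big integer used as a bit array."""
    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, item: str):
        # Derive num_hashes deterministic bit positions from the item.
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def maybe_contains(self, item: str) -> bool:
        # True may be a false positive; False is always definitive.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

# "Satellite" side: build the filter from pieces it still tracks.
keep = BloomFilter()
for piece_id in ["piece-a", "piece-b"]:
    keep.add(piece_id)

# "Node" side: anything the filter definitely lacks goes to trash.
stored = ["piece-a", "piece-b", "piece-deleted"]
trash = [p for p in stored if not keep.maybe_contains(p)]
# trash will almost certainly be ["piece-deleted"]; it can never
# contain piece-a or piece-b, because added items always match.
```

This is why the node cannot run GC on its own: without the satellite’s filter, it has no way to tell a deleted piece from a live one.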
Thank you for the clue. I somehow thought bloom filters were used in a later retain phase.
So looks like this assessment was even too optimistic:
Once a file is deleted on the satellite and is no longer paid for, you wait for the satellite to send you the bloom filter. That can take a few days, a week, or who knows how long; there is no firm SLA about it, so just wait, someday you will get it. Then, if you’re affected by Bloom filter (strange behavior) – and as far as I can see (Bloom filter (strange behavior) - #65 by pdeline06) the latest stable v1.104.5 is affected – it might not work for an arbitrarily long time. So you can easily end up storing tons of trash for free for a month or even longer.
I hope you know that Windows calculates in binary units (base 2) but shows a wrong unit label for them. So 571 “Windows GB” is actually 571 GiB (base 2), or 613.4 GB (SI units, base 10). Our software uses SI units (base 10) on the dashboard.
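For reference, the conversion is just a change of base (a minimal sketch; the exact figure depends on the unrounded byte count Windows is displaying, so the rounded 571 GiB converts to about 613.1 GB rather than exactly 613.4):

```python
def windows_gb_to_si_gb(windows_gb: float) -> float:
    """Convert a size shown by Windows ("GB", actually GiB, base 2)
    into SI gigabytes (base 10) as used on the node dashboard."""
    bytes_total = windows_gb * 1024**3  # Windows "GB" is really GiB
    return bytes_total / 1000**3        # SI GB

print(round(windows_gb_to_si_gb(571), 1))  # prints 613.1
```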
However, this discrepancy is related to outdated databases on your node.
To update them, you need to enable the used-space-filewalker
if you disabled it (it’s enabled by default), save the config, and restart the node. When all filewalkers have finished their scans for each trusted satellite, they will update the databases. However, you need to make sure that your databases are not corrupted and that you do not have database-related errors in your logs.
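If I recall the option name correctly, on recent storagenode versions this scan is controlled by the following setting in the node’s config.yaml (verify the exact name against the comments in your own config.yaml before relying on it):

```yaml
# config.yaml — run the used-space filewalker when the node starts.
# true is the default; set it back to true if you disabled it.
storage2.piece-scan-on-startup: true
```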
These bugs will be fixed too, just not as fast as we might want (I’m an SNO too).
I indeed know that. So, to summarize, here’s the issue. On my node, all satellites except us1 have collected their trash. For us1, the used-space-filewalker worked fine, but the gc-filewalker hasn’t run for at least a week, and my understanding is that it’s due to the satellite not sending the bloom filter. The logs are clean, with no filewalker-related errors.
And according to this, 162.10 GB of Uncollected Garbage is due to that.
The calculator uses the same databases as the dashboard. What you see there is missing reports for some days from the US1 and SLC satellites; that’s a separate issue from the discrepancy between the pie chart and the actual used space reported by the OS.
So, there are three points of view: the satellite’s, the storagenode’s (the dashboard pie chart), and the OS’s.
The reasons for the discrepancies are different. In this topic we are discussing the discrepancy between the satellite’s point of view and the storagenode’s point of view. And the cause is known: the missing reports on some days. Our devs will backfill them.
The second discrepancy (between the OS and the pie chart) can be fixed on your side: enable the scan on startup and restart the node. Make sure that the databases are not corrupted and that you do not have database-related errors in your logs.
I understand all of that. Yes, there are multiple issues, all interconnected. The topic I originally posted got merged into this thread, so some of the things we discuss become off-topic.
The dashboard shows 596GB (yes, in SI), but the OS reports 614GB (again in SI, via df -H: D:\ 641G 614G 27G 96% /mnt/d, the same as in the Windows screenshot above). That’s on top of the us1 issue I reported above.

Hi,
Hosting a node for two months now. I’m receiving a lot of test data, as expected according to the post about upping test data.
Now, since the 8th of May, I already have over 1.5TB stored, according to the Total Disk Space circle diagram. The storage never went lower. But according to the “Average Disk Space Usage” graph it is staying around 400GB.
I now have around 2.9TB; the stored amount has risen some more. But the disk report still deviates 130% from the disk average.
Are there any known problems/bugs?
I’m running v1.105.4.