2 posts were merged into an existing topic: Why there is no ETA for fixing issues?
The satellite has recognized the used storage. What is happening is that the estimation doesn't factor in the TTL. So if you start the month with 4 TB used space, the estimation will assume your node keeps that size over the entire month. If it drops to, let's say, 1 TB early in the month, the estimation will only slowly adjust to that. The estimation isn't a clever function that adjusts to these fluctuations. It is a very simple one that takes the current TBh and multiplies it by the time remaining in the month, so it trends towards the actual payout. Especially in times with high fluctuations it will be less accurate than in a month with a constant 4 TB used space and no TTLs.
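A rough sketch of that kind of naive projection (my own illustration, not the actual dashboard code; the function name and the flat "current usage × remaining hours" assumption are mine):

```go
package main

import "fmt"

// estimateMonthEndTBh projects the month-end storage total the simple way the
// post describes: whatever TBh has accrued so far, plus the *current* used
// space held flat for every remaining hour of the month. It does not try to
// model TTL expirations or other fluctuations.
func estimateMonthEndTBh(accruedTBh, currentUsedTB, hoursRemaining float64) float64 {
	return accruedTBh + currentUsedTB*hoursRemaining
}

func main() {
	// Example: 10 days into a 30-day month at 4 TB, then usage drops to 1 TB.
	accrued := 4.0 * 240 // 960 TBh accrued in the first 240 hours
	fmt.Println("projection while still at 4 TB:", estimateMonthEndTBh(accrued, 4.0, 480))
	fmt.Println("projection right after the drop to 1 TB:", estimateMonthEndTBh(accrued, 1.0, 480))
	// The second projection (1440 TBh) is far below the first (2880 TBh), which
	// is why the dashboard estimate only slowly "trends towards" the real payout.
}
```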
Just for safety, and because it was specifically requested not to compromise a node. Storj-up is amazing for running and debugging a small test network with just 10 storage nodes and all the services we have. I am happy to help with the setup. Of course it would also be an option to set up a new node in production that can get corrupted and start debugging with that one. There are a few options. Anyway, storj-up is just awesome and I can recommend using it to understand how the system works.
Yes, I know that, but I think it is not related to the linked post, where the problem is that used storage is neither being deleted nor paid: used and not paid.
No ETA. This is an unknown bug and I don't think anyone is working on a fix yet.
But this is the problem with coding: you can never give an ETA if the source of the bug is unknown. You can only wait and hope, or help find the source of the problem.
And TTL data is a new feature, so bugs are expected and have to be solved one by one. The only way is to report bugs, keep calm and wait.
But this would interest me too: would Storj compensate or give rewards for bug fixes (big or small ones)? I only know about the bug bounty program.
I can't remember when test data switched to TTL (beginning of May?). I wonder if deleting all SLC .sj1 files with creation dates more than 30 days in the past… but after May 1st… would approximate a workaround? Or, today, at least anything created in June?
There shouldn't be any audit or repair requests coming in for data that the satellite believes is already gone.
No. SLC also holds some files that don't have a TTL. Maybe not a lot, but if you just delete everything that is older, you will delete those files for sure. At best we end up with lost segments but your node might not get disqualified for it; at worst your node will get disqualified for it. Neither outcome is great, so let's not try this out.
Instead, a better workaround might be to have at least 2 nodes and cycle through them: in the first month one node takes all the uploads while the other one remains idle, and in the second month you switch the roles. What this does is that at the end of the month the node should get a smaller bloom filter that should delete all the garbage. This approach with 2 nodes will bypass the maximum bloom filter size.
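To get a feel for why the maximum filter size matters, here is a rough sketch using the textbook bloom-filter false-positive estimate; the 2 MB cap, the hash-function count, and the piece counts are hypothetical values I picked for illustration, not numbers taken from the satellite code:

```go
package main

import (
	"fmt"
	"math"
)

// falsePositiveRate is the textbook bloom-filter estimate (1 - e^(-kn/m))^k
// for n items in a filter of m bits with k hash functions.
func falsePositiveRate(n, mBits, k float64) float64 {
	return math.Pow(1-math.Exp(-k*n/mBits), k)
}

func main() {
	const capBits = 2 * 1024 * 1024 * 8 // hypothetical 2 MB cap on the filter size
	const k = 7                         // hypothetical number of hash functions

	for _, pieces := range []float64{1e6, 5e6, 20e6} {
		fp := falsePositiveRate(pieces, capBits, k)
		fmt.Printf("%5.0f million pieces -> ~%.2f%% of garbage pieces wrongly kept\n",
			pieces/1e6, fp*100)
	}
	// With the filter size capped, a node holding many more pieces gets a much
	// higher false-positive rate, so more garbage survives each GC pass. A node
	// that sat idle for a month holds fewer pieces and gets a more effective filter.
}
```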
In the long term it would still be better to find the root cause and fix it, of course. This workaround would need at least 30 days to take effect. We could just as well spend those 30 days finding the root cause…
A post was merged into an existing topic: Why there is no ETA for fixing issues?
25 posts were split to a new topic: Why there is no ETA for fixing issues?
2 posts were merged into an existing topic: When will "Uncollected Garbage" be deleted?
Hello @andwhatabout,
Welcome to the forum!
In July your node was paid for the actually used space and bandwidth, not for the Average Disk Space Used This Month value, which is used only for estimations.
However, your node may still have data that has not been deleted yet (either in blobs, waiting for the Bloom Filter and the Garbage Collector, or in the trash). Data older than 7 days is removed from the trash automatically; you do not need to do anything.
If the node calculated the average usage more correctly (ignoring missing or incomplete reports from the satellites), and the databases had correctly calculated the local usage, the estimation would be more accurate.
There are two ongoing issues:
- The average disk space reported by the satellites is incorrect: Avg disk space used dropped with 60-70%
- The disk usage could be incorrect for your setup (local issues): Disk usage discrepancy?
Both contribute to the incorrect estimations. However, your node is paid according to the orders sent by your node; they account for used space in GBh, not as an average over the month, so they are pretty precise. Unfortunately, due to the issues above, SNOs currently do not have a reliable method to verify their correctness.
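As a rough illustration of what accounting in GBh means (made-up numbers and my own helper names, not the actual satellite accounting code):

```go
package main

import "fmt"

// A made-up hourly usage sample: how many GB the node held during that hour.
type hourlyUsage struct {
	hour   int
	usedGB float64
}

// gbHours sums gigabyte-hours over the sampled hours. Paying per GBh means a
// piece stored for half the month earns half of what it would earn for the
// full month, independent of any month-end average.
func gbHours(samples []hourlyUsage) float64 {
	total := 0.0
	for _, s := range samples {
		total += s.usedGB
	}
	return total
}

func main() {
	// 4000 GB for the first 10 days (240 h), then 1000 GB for the remaining 480 h.
	var samples []hourlyUsage
	for h := 0; h < 240; h++ {
		samples = append(samples, hourlyUsage{hour: h, usedGB: 4000})
	}
	for h := 240; h < 720; h++ {
		samples = append(samples, hourlyUsage{hour: h, usedGB: 1000})
	}
	fmt.Println("GBh for the month:", gbHours(samples)) // 1,440,000 GBh
	// An "average disk space" view of the same month (2000 GB) carries the same
	// information only if every satellite report arrives; missing reports skew
	// the displayed average, while the GBh in the submitted orders stay precise.
}
```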
The first issue can be fixed with this feature: Fix storage usage gaps displaying · Issue #7009 · storj/storj · GitHub
The second issue can be fixed by the SNO, either by optimizing the storage or by enabling the scan on startup if you disabled it (it's enabled by default) and disabling the lazy mode, then waiting until the used-space-filewalker has calculated the used space and updated the databases. It should finish successfully for all trusted satellites, and your node should not have errors related to the databases and/or filewalkers. Then you will have at least correct numbers on the pie chart.
The average could still be wrong; there is no workaround for that. So, when the satellites are able to report the average used space again, the estimation will slowly adapt. However, it will still be an estimation (it assumes the usage stays the same when calculating the projection).