This is incorrect. Let's say the satellite runs one tally every 3 days. For a 1 TB node the result would be 3 × 24 = 72 TBh. The payout will be correct; it is only the storage node dashboard that has problems displaying it. The issue here is the difference between how the satellite calculates payouts and how the operator can read that data. The storage node has to apply some additional math on top to make it human-readable, and that additional math is the problem. All the storage node would have to do is fill the gaps in the graph. There is no gap in the data; it is just the additional math that shows 1 TB on the dashboard for a single day instead of for the 3 days the tally covered.
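To make that concrete, here is a minimal sketch of what "filling the gaps" could look like, assuming the dashboard receives each tally as an interval plus its byte-hours. All names here are illustrative, not the actual node code:

    from datetime import date, timedelta

    TB = 1e12  # bytes

    def spread_tally(interval_start: date, interval_end: date, byte_hours: float):
        """Distribute one tally's byte-hours evenly across the days it covers,
        so a 3-day tally shows ~1 TB on each day instead of a single spike
        followed by gaps."""
        days = (interval_end - interval_start).days or 1
        per_day_byte_hours = byte_hours / days
        return {
            interval_start + timedelta(days=n): per_day_byte_hours / 24  # avg bytes stored
            for n in range(days)
        }

    # One tally covering 3 days on a 1 TB node: 1 TB * 3 days * 24 h = 72 TB-hours.
    for day, avg_bytes in spread_tally(date(2024, 8, 1), date(2024, 8, 4), 72 * TB).items():
        print(day, avg_bytes / TB, "TB")  # ~1.0 TB for each of Aug 1, 2 and 3

Whether an even split is the "correct" math is exactly the open question; a tally does not necessarily cover its interval uniformly.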
What are you complaining about with your meager 6 TB unpaid?
August 2024 (Version: 14.0.0)                                [snapshot: 2024-08-05 08:47:13Z]
REPORTED BY      TYPE     METRIC                   PRICE                  DISK   BANDWIDTH  PAYOUT
Node             Ingress  Upload                   -not paid-                     89.98 GB
Node             Ingress  Upload Repair            -not paid-                      2.38 GB
Node             Egress   Download                 $ 2.00 / TB (avg)              27.80 GB  $ 0.06
Node             Egress   Download Repair          $ 2.00 / TB (avg)               9.28 GB  $ 0.02
Node             Egress   Download Audit           $ 2.00 / TB (avg)               1.42 MB  $ 0.00
Node             Storage  Disk Current Total       -not paid-         21.47 TB
Node             Storage    ├ Blobs                -not paid-         21.17 TB
Node             Storage    └ Trash  ┐             -not paid-        301.75 GB
Node+Sat. Calc.  Storage  Uncollected Garbage  ┤   -not paid-         15.97 TB
Node+Sat. Calc.  Storage  Total Unpaid Data  <─┘   -not paid-         16.27 TB
The script is comparing 2 numbers, and none of them are from your file system. So if you want to have more unpaid garbage, that is very easy to do. Just update the numbers in the SQLite DB like the filewalker would do. Give it an additional exabyte of unpaid garbage. There you have it. Let's start a race and see who can get the highest unpaid garbage result.
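For illustration only (please do not do this to a node you care about): the numbers the script compares live in the node's SQLite databases, not in the file system. The file, table, and column names below are assumptions based on what the databases typically look like and may differ between node versions:

    import sqlite3

    # The dashboard and the earnings script read used-space figures from this
    # database, not from the file system (names here are assumptions).
    con = sqlite3.connect("piece_spaced_used.db")
    for satellite_id, total in con.execute(
        "SELECT hex(satellite_id), total FROM piece_space_used"
    ):
        print(satellite_id, total)  # whatever sits here is what gets reported
    con.close()

Anything written into that table by an UPDATE statement would show up in the "unpaid garbage" figure just as if the filewalker had put it there.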
What do you mean, "none of them are from your file system"?
I definitely have 22 TB used on this file system.
It is time GC got rid of all the data residing on my systems that I do not get paid for.
It is taking too long, and the systems do not take on new data because of it.
Delete the data that is no longer of any use, the sooner the better.
Maybe it is faster to do a GE and make a new node than to wait for GC to FINALLY work!
Should be easy to fix then. Why not do it?
Are you asking me to fix it? I am not a good developer. I can provide links to the current math if you would like to take that challenge.
Edit: Besides that, I wouldn't call this one easy to fix. Or do you have the correct math already in your mind? I don't. It is one thing to say "fill the gaps" but another to explain how exactly. The more I think about the actual math, the less it sounds like something I would call easy. Sure, the moment we have the correct math figured out and captured in a ticket, the implementation will be easy.
Sorry, I thought you were a software developer with Storj… What is your position in the company?
Oh shit, pants down? The short answer, without calling out my exact role, would be QA engineer. I am good at finding bugs and pointing out their root cause, like in this example with a tally result every 3 days: that will make the dashboard print gaps in the graph. Now if we could find better math for this, it would be worth a ticket and should get fixed in a short time. If we don't find better math, it might still be worth a ticket, hoping the developers come up with something better. The difference between these 2 options is a bit of time for everyone to think about this math problem. I don't feel like I have spent enough time on it. Can we brainstorm something together?
Did you mean "open kimono"?
Ah nice, I learned a new phrase today. Thank you for that.
Oh, except we don't use that phrase anymore; too racist.
…ask the embarrassed engineer who said it around the Japanese-American female manager.
Yeah, it looks like my node really has a lot of files that are not needed but for some reason were not deleted.
The received payout matches the "estimated payout", which is based on 18 TBm from Saltlake. My node was (almost) full for the entire month, so there should have been something like 35 TBm. Which makes some sense: if my node was full, expired test data was not deleted, but the satellite thought it was, so over the month the stored data (as far as the satellite knows) went from 35 TB or whatever down to almost 0 TB, giving an 18 TBm average.
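A quick sanity check of that estimate: if the satellite believed the stored amount declined roughly linearly from ~35 TB to ~0 TB over the month, the average comes out very close to the paid figure:

    # Assumed linear decline in what the satellite believed was stored:
    start_tb, end_tb = 35, 0
    print((start_tb + end_tb) / 2)  # 17.5 TBm average, close to the ~18 TBm paid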
So, for some reason the node did not delete the expired pieces when it should have. Why did garbage collection not delete them either?
I have logs from my node, but they take up a lot of space (something like 50 GB for July); if someone wants to study them, I can upload them somewhere.
Have the payments already been completed? I have 4 nodes with a total of 28 TB of data and have not received any money yet this month.
Hi @Pentium100, you may want to follow this topic:
Hi @Frieseba, have you by any chance opted in for zkSync Lite?
No, I did not opt in for zkSync Lite.
There was a big movement in the crypto market yesterday and gas prices went up quite a lot. Maybe that has something to do with it.
There are multiple threads about this, so it's easy to get confused about which is the main one.
The Average Disk Space Used This Month is incorrect due to missing/incomplete reports from US1, and the Total Disk Space is also taken from the databases, so it can be incorrect as well.
The first one can be solved by implementing a new feature on the node:
The second one can be solved on your node: Disk usage discrepancy?
However, while there are no complete reports from the US1 satellite, the estimation will be wrong.
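As a rough illustration of why missing reports skew the estimate, and how skipping them would help, here is a sketch assuming the daily reports arrive as day → TB-hours pairs; the function and names are hypothetical, not the node's actual code:

    def average_disk_space_tb(daily_tb_hours: dict[int, float]) -> float:
        """daily_tb_hours maps day-of-month -> reported TB-hours for that day.
        Days with a missing (zero) report are skipped instead of being counted
        as zero, so an incomplete US1 report does not drag the average down."""
        complete = {d: tbh for d, tbh in daily_tb_hours.items() if tbh > 0}
        if not complete:
            return 0.0
        return sum(complete.values()) / (len(complete) * 24)

    # Example: a steady 21 TB node where two daily reports never arrived.
    reports = {day: 21 * 24.0 for day in range(1, 32)}
    reports[10] = reports[11] = 0.0  # missing reports
    print(average_disk_space_tb(reports))  # ~21 TB instead of ~19.6 TB naively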