Earnings calculator (Update 2024-04-14: v13.4.0 - Additional info on uncollected garbage and unpaid data - Detailed earnings info and health status of your node, including vetting progress)

Well, your shorthand of using last month's end size doesn't correct for the other differences. I've been thinking about adding more numbers to better show the differences, but I haven't found the time to implement those changes recently. Regardless, there will be gaps I can't show. Your node may have a discrepancy in calculating used space, leading to a mismatch between the calculated total disk use and actual disk use. The filesystem may also cause larger usage on disk than the actual file size. And files that have been removed by customers but not yet cleaned up by garbage collection can't be detected by the node. So there will always be gaps, but I'm hoping some additional stats may show where those gaps are unreasonably high.

I also saw that some of the code I'm using has been deprecated and throws a warning in later versions of Python. I'm hoping to tackle that soon. Despite the warning, it should still work, for now at least.


Thanks for the explanation. That all makes sense. If you're able to tweak things to make that easier to see and analyze, I won't complain. But just having a general understanding of what's going on, and the different factors that can affect it behind the scenes, helps alleviate the concern that I have a bigger issue building up where something in the system thinks I have way more data than I actually have. And of course, it's always good to know I'm getting compensated for the storage I'm providing!

I have a young node, and when running the script, it shows 0 audits on all sats. Is that normal?

The first audit can take a while, though you also seem to have little data for a 2 week old node. Is it sharing ingress with other nodes? This will slow down vetting too.


Nope, it's just one node, no neighbours. I remember they said they reduced the percentage of data sent to unvetted nodes. Maybe this is the normal traffic now.

I see similar behavior for a node added two days later… extremely low traffic, and no audits for vetting at all.
It would indicate that the adjustment made a few weeks ago tightened things too much. At this rate it'll never complete :wink:

April 2024 (Version: 13.1.0)                                            [snapshot: 2024-04-04 22:12:41Z]
                        TYPE            PRICE                        DISK       BANDWIDTH        PAYOUT
Upload                  Ingress         -not paid-                               29.64 GB
Upload Repair           Ingress         -not paid-                                0.00  B
Download                Egress          $  2.00 / TB (avg)                        4.50 GB       $  0.01
Download Repair         Egress          $  2.00 / TB (avg)                        2.32 MB       $  0.00
Download Audit          Egress          $  2.00 / TB (avg)                        0.00  B       $  0.00
Disk Current            Storage         -not paid-               89.20 GB
Disk Average So Far     Storage         -not paid-               46.01 GB
Disk Usage Month        Storage         $  1.49 / TBm (avg)       5.53 GBm                      $  0.01
________________________________________________________________________________________________________+
Total                                                             5.53 GBm       34.14 GB       $  0.02
Estimated total by end of month                                  46.01 GBm      260.94 GB       $  0.14

Payout and held amount by satellite:
┌────────────────────────────────┬─────────────┬──────────────────────────┬─────────────────────────────────────────────────────────────┐
│ SATELLITE                      │ HELD AMOUNT │        REPUTATION        │                       PAYOUT THIS MONTH                     │
│              Joined     Month  │      Total  │    Disq    Susp    Down  │    Storage      Egress  Repair/Aud        Held      Payout  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ ap1.storj.io:7777 (0% Vetted > 0/100 Audits) │                          │  $  1.49/TBm $  2.00/TB  $  2.00/TB       75%         25%   │
│              2024-03-22     2  │   $   0.00  │   0.00%   0.00%   0.00%  │  $  0.0001   $  0.0002   $  0.0000  -$  0.0002   $  0.0001  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ eu1.storj.io:7777 (0% Vetted > 0/100 Audits) │                          │  $  1.49/TBm $  2.00/TB  $  2.00/TB       75%         25%   │
│              2024-03-22     2  │   $   0.00  │   0.00%   0.00%   0.00%  │  $  0.0025   $  0.0004   $  0.0000  -$  0.0022   $  0.0007  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ saltlake.tardigrade.io:7777 (0% Vetted > 0/100 Audits)                  │  $  1.49/TBm $  2.00/TB  $  2.00/TB       75%         25%   │
│              2024-03-22     2  │   $   0.00  │   0.00%   0.00%   0.00%  │  $  0.0000   $  0.0000   $  0.0000  -$  0.0000   $  0.0000  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤
│ us1.storj.io:7777 (0% Vetted > 0/100 Audits) │                          │  $  1.49/TBm $  2.00/TB  $  2.00/TB       75%         25%   │
│              2024-03-22     2  │   $   0.00  │   0.00%   0.00%   0.00%  │  $  0.0056   $  0.0084   $  0.0000  -$  0.0105   $  0.0035  │
├────────────────────────────────┼─────────────┼──────────────────────────┼─────────────────────────────────────────────────────────────┤ +
│ TOTAL                          │   $   0.00  │                          │  $  0.0082   $  0.0090   $  0.0000  -$  0.0129   $  0.0043  │
│ ESTIMATED END OF MONTH TOTAL   │   $   0.10  │                          │  $  0.0686   $  0.0689   $  0.0000  -$  0.1031   $  0.0344  │
└────────────────────────────────┴─────────────┴──────────────────────────┴─────────────────────────────────────────────────────────────┘

Edit: I just noticed the calculation in this version is wrong. Fix incoming now. Do not use v13.2.0.

As promised, a new release today that adds more detail regarding trash. This new data can be used to more easily compare the satellite-reported Disk Average So Far, which is used for payouts, against the Disk Current, calculated by the file walker locally. By separating Trash from Blobs, you can compare Blobs to the satellite reports to get an idea of how much data still hasn't been cleaned up by garbage collection. Please keep in mind that Disk Average So Far is a monthly average, so on growing nodes it is expected that this number is lower than Disk Current Blobs. You can refer to the ingress amounts to see whether the difference is reasonable.
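The comparison described above amounts to simple arithmetic. As a sketch (all figures below are made-up example values, not real node data):

```python
# Illustrative sketch of the Blobs vs. Disk Average So Far comparison.
# All numbers here are invented for the example.
blobs_bytes = 15.0e12        # Disk Current (Blobs): counted locally by the file walker
avg_so_far_bytes = 11.0e12   # Disk Average So Far: satellite-reported monthly average
ingress_bytes = 1.0e12       # ingress so far this month

# On a growing node, Blobs can exceed the monthly average by roughly the
# month's ingress; a gap well beyond that hints at uncollected garbage.
gap_tb = (blobs_bytes - avg_so_far_bytes) / 1e12
unexplained_tb = gap_tb - ingress_bytes / 1e12
print(f"gap: {gap_tb:.2f} TB, of which unexplained: {unexplained_tb:.2f} TB")
```

With these example numbers, 4 TB of gap minus 1 TB of ingress leaves about 3 TB that garbage collection hasn't accounted for yet.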

Changelog

v13.2.0 - Add trash details & replace deprecated utcnow() function

  • Adds additional details for Disk Current, splitting out Trash and Blobs, for easier comparison with the satellite-reported Disk Average So Far
  • Replaced the now-deprecated utcnow() function with functions compatible with the latest Python versions
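For anyone curious about the utcnow() change, here is a minimal sketch of the kind of replacement involved (the utc_now helper name is hypothetical, not the script's actual code):

```python
# datetime.utcnow() is deprecated as of Python 3.12 because it returns a
# naive datetime. The recommended replacement is datetime.now(timezone.utc),
# which returns an aware datetime; timezone.utc exists since Python 3.2.
from datetime import datetime, timezone

def utc_now():
    """Return the current time as a timezone-aware UTC datetime."""
    # Old: datetime.utcnow()           -> naive, deprecated
    # New: datetime.now(timezone.utc)  -> aware, carries tzinfo
    return datetime.now(timezone.utc)

print(utc_now().isoformat())
```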

Apologies for the previous bugged version. I should have tested more, but the numbers happened to line up on the specific node I was testing with. I also made a small layout change to make things clearer.

Changelog

v13.2.1 - Fix wrong calculation for Trash

  • Fixes a wrong calculation for trash, introduced in v13.2.0


New script works perfectly for me.

So when I look at my current standings, I see Disk Total is 16.72 TB, with trash being 825 GB, blobs being 15.89 TB, and Disk Average being 10.89 TB. If nothing were to change from now until the end of the month, should the disk average be fairly close to the blobs total? Or is being off by a few TB possible and not concerning, depending on some of the factors you mentioned before? And should I move this to a new post at this point to keep this thread clean?

I would say that difference is definitely abnormal, even in the current situation where we know garbage collection isn't working ideally on the satellite end. I'm not seeing differences that large on my nodes, though I do see up to about a TB or a little more. 5 TB is extreme. It's worth checking your logs to see whether garbage collection runs into any issues, but I recommend moving those questions to a more appropriate topic. There are already several active topics covering this issue.


I have this error on QNAP (Linux)

It seems the deprecation of the utcnow() function has led to a scenario where there is no way to support all Python versions anymore. The script now requires version 3.2 or higher. Can you check your Python version on QNAP? It's possible that your system has both Python 2 and Python 3 installed. You can also try calling the script using python3 instead of python at the start.


I have to find an alternative until Qnap updates the app

That version should definitely work; I'm running it on 3.8 myself. It's possible it's running Python 2 over SSH. Can you run python --version to see which version it is using?

You’re right, I installed 3 but the system still sees 2



Looks like python3 isn’t in your PATH. You might be able to fix that by running /etc/profile.d/python3.bash. But you’ll likely have to do that after every reboot.
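As a sketch of the workaround (assuming the QNAP profile path above, and that the file must be sourced rather than executed for the PATH change to apply to the current shell — details may differ per firmware version):

```shell
# Hypothetical QNAP shell session; the profile path comes from the post
# above and may vary between firmware versions.
if ! command -v python3 >/dev/null 2>&1; then
    # Source (not execute) the profile script so the PATH change
    # persists in the current shell; repeat after every reboot.
    . /etc/profile.d/python3.bash
fi
python3 --version   # should now report Python 3.x
```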


Hello. Thanks for this script.

I just updated and noticed the following error…

Traceback (most recent call last):
  File "/Users/xxx/Desktop/storj/earnings.py", line 379, in <module>
    for data in con.execute(query):
                ^^^^^^^^^^^^^^^^^^
sqlite3.DatabaseError: database disk image is malformed

Try running the script after you stop the node, and run the PRAGMA integrity check on the databases. See the official docs about malformed databases.


Unfortunately, it seems one of your databases is corrupt. This may also impact your node's performance. Make sure you never access the databases over a network connection (SMB, NFS, etc.).

See this page for info on how to fix a corrupt database.


Thanks, and ugh… this looks time-consuming.

Likely a result of recent power failures due to storms and UPS failure.