Earnings calculator (Update 2024-04-14: v13.4.0 - Additional info on uncollected garbage and unpaid data - Detailed earnings info and health status of your node, including vetting progress)

Hello

Perhaps you have a bug in this version.

Traceback (most recent call last):
  File "/<some_path_here>/storj/earnings/earnings.py", line 410, in <module>
    disk_average_so_far.append((bh[-1]*3600.0)/seconds_bh_included[-1])
ZeroDivisionError: float division by zero

This happens on a new node (first month) with v12.3.0. With v12.0.0 there were no errors with this node.

There’s another new node, first month, which works fine with v12.3.0.

Should I send you some/all of the dbs from the failing node? (Via Storj DCS it’s possible for me.)

1 Like

Thanks for the report @Krawi . I can already see the issue. It may also happen during the start of the month on older nodes, but it’s related to the node being new. I’m gonna have to look into fixing this soon. I’ll get back to you.

It should also fix itself on the other node. This happens when one of the satellites hasn’t sent any storage reports to your node yet. This causes the division by 0.
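For illustration, here is a minimal sketch of the kind of guard that avoids this crash. The names `bh` and `seconds_bh_included` come from the traceback above, but the surrounding function is hypothetical, not the script’s actual code; the real fix shipped in v12.3.1.

```python
# Hypothetical guard against the ZeroDivisionError from the traceback.
# bh and seconds_bh_included are assumed to be per-satellite cumulative
# lists, based on the traceback; this is a sketch, not the actual fix.
def disk_average(bh, seconds_bh_included):
    """Average disk usage so far, or None if this satellite has not
    reported any storage usage yet this month."""
    if not seconds_bh_included or seconds_bh_included[-1] == 0:
        return None  # no storage report yet: skip instead of dividing by 0
    return (bh[-1] * 3600.0) / seconds_bh_included[-1]
```

A satellite with no reports simply yields `None` and can be skipped when averaging.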

1 Like

New version is up to resolve the division by 0 issue. (though it may have resolved itself on the old version for you personally already)
@Krawi: Could you try both versions now and report back? I want to make sure it actually fixes the underlying issue, so please test v12.3.0 before updating as well to see if the issue still occurs.

Changelog

v12.3.1 - Fix division by 0 for storage usage not available

  • Fixes an issue where the script caused a division by 0 error if any satellite hasn’t yet reported any storage usage within the given month

Since the missing satellite data is present today, I used my snapshot of yesterday’s db files for testing.

Good news, while v12.3.0 crashes, v12.3.1 works fine with the snapshot files.

1 Like

Awesome! Thanks for testing and having the great foresight to make snapshots!

@BrightSilence
Please mention that you must also copy “satellites.db”.
I copied all the db files to where earnings.py is, and it also needed “satellites.db”.
I ran the calculator on a PC, not related to the nodes.

Done on the top post and github! Thanks for the heads up.

I wonder why the vetting progress is not displayed on the standard dashboard that runs on port 14002? It is the only info missing that your py calculator displays. Would be nice to see it there too, with no need to copy files and run extra scripts.

The script used to have a lot more added utility before there was a web dashboard or before the payout info page was added. When they did add payout info, it looked very familiar.

I’m not sure why vetting isn’t mentioned anywhere still. What I can say is that I currently had to hard code the 100 audit requirement because it isn’t mentioned anywhere in the databases. The code to predict linear progress is also mine. But I’ve always encouraged Storj Labs to do what they can to make my script redundant, including copying ideas or logic I used. Who knows, maybe some day they will challenge me to think of even more stuff I could add to prevent my script from becoming redundant by adding vetting progress to the dashboard. It’s nice to keep each other on our toes like that, haha.
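For the curious, a linear vetting prediction can be sketched like this. The 100-audit threshold is the hard-coded value mentioned above; the function itself is an illustrative assumption, not the script’s actual implementation.

```python
# Illustrative linear vetting estimate, assuming the hard-coded
# 100-audit requirement per satellite mentioned in the post.
VETTING_AUDITS = 100

def vetting_progress(audit_count, node_age_days):
    """Return (percent vetted, estimated days remaining), assuming
    the audit rate observed so far stays constant."""
    pct = min(audit_count / VETTING_AUDITS, 1.0) * 100.0
    if audit_count == 0 or audit_count >= VETTING_AUDITS:
        return pct, 0.0
    rate = audit_count / node_age_days              # audits per day so far
    return pct, (VETTING_AUDITS - audit_count) / rate
```

So a node with 50 audits after 10 days is 50% vetted with roughly 10 days to go at the same rate.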

Also, unless you run a docker node on Windows or macOS, copying the db files isn’t necessary. You can run on the live DBs on Linux, or on GUI nodes on Windows.

I understand now. Sorry. I didn’t check the GitHub release dates. I thought this was more recent work, just some coding fun :smile:.
It seems that they did copy all the ideas.
I want to make a suggestion: you could add a yearly estimator over 5 years that adapts based on the last 3 months or so.
I’m running only Docker nodes on Synology DSs. But I’m scared to run anything else on them, even a py script. Last night I copied the bandwidth.db, which is the biggest one, 40MB(?), and it was corrupted. The old DS216+ barely handles 2 nodes on 2 x 8TB IronWolfs. I had to stop the storagenode and recopy the db. I thought the original was also busted, but it’s fine. The py script showed the dashboard.
So your py script has a double utility: it can also spot corrupted/malformed db-es :smile:.

Btw, it deserves a mention that the main purpose has always been to check payouts, and the calculator still serves an important purpose there that might not be immediately obvious: it only shows up after all payouts are done and the payout info has been reported back to your node, and only then if you look at previous months. Allow me to demonstrate by showing you my results from October.

The web dashboard replaces the node’s own internal bookkeeping with the actual payout info, while my calculator shows both and displays the difference. As you can see, Storj paid 47 cents less than the internal bookkeeping expected. Small differences are to be expected, but if this difference is larger, it can be used to notify Storj of an issue. It also displays the zkSync bonus that isn’t shown anywhere on the dashboard, which makes it easier to compare actual payout against expected payout for node operators who use zkSync.
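As a toy illustration of that comparison (not the calculator’s actual code; the 10% zkSync bonus rate is an assumption for illustration, not a confirmed figure):

```python
# Toy comparison of internal bookkeeping vs. actual payout.
ZKSYNC_BONUS = 0.10  # assumed bonus rate; check Storj's current terms

def compare_payout(expected_usd, paid_usd, via_zksync=False):
    """Return (expected payout incl. any bonus, paid minus expected)."""
    if via_zksync:
        expected_usd *= 1.0 + ZKSYNC_BONUS
    return round(expected_usd, 2), round(paid_usd - expected_usd, 2)
```

For example, `compare_payout(10.00, 9.53)` reports a difference of -0.47, the kind of 47-cent gap described above.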

So yeah, I still have some other extras to make this tool juicy. :slight_smile: But I like your suggestions and I’ll keep them in mind.

I disabled zkSync, so no bonus for me.
I tried to reinstall the MetaMask app on iPhone to re-enable zkSync, but I couldn’t; it kept sending me to the App Store.
The differences are awesome, I didn’t spot them. I just ran it for the current month. I calculate them manually in my Excel table, where I keep track of all the earnings, but this is way better and more precise.
More suggestions: could we display the data for more than 1 month? Like a few months, or for the total life of the node? I tried entering an interval but I don’t know how… python … path… 2022-09 2022-11 gives an error.

This is the output in PowerShell for november:

A minor suggestion: enforce a minimum of 1 space between the transaction value and the URL to separate the two:

Transaction($  3.00):https://zkscan.io/explorer/transactions/0x2f77....
                     ^ HERE

This will allow a smart terminal (iTerm in particular) to detect the URL and make it clickable.

It seems the easiest fix would be to move Transaction to the left to reduce the indent by 1, given that the URL has a fixed length and barely fits into the table:

@@ -466,7 +466,7 @@ lastl="└───────────────────────

 def tableLine(leftstr, rightstr, indent=True, empty=empty):
     if indent:
-        leftstr = "│    " + leftstr
+        leftstr = "│   " + leftstr
     else:
         leftstr = "│ " + leftstr
     if len(rightstr) > 0:

After that the URL becomes usable:

SQLite databases can’t be copied with cp commands while they are in use: sometimes it will work, sometimes the copy will be corrupted. This is because the journal may be “hot” and is not also instantaneously copied. This is why the node has to be shut down before making the copy: it flushes the journal(s).

You can use the sqlite3 command to copy an active SQLite db and avoid a corrupted copy:

sqlite3 xyz.db '.backup xyzcopy.db'

You might be able to use this to copy all of the databases for the earnings calculator without shutting down the node. It won’t create corrupted copies, but there could be logical inconsistencies between the databases, depending on how the SN app manages the multiple db files. It shouldn’t happen, because if that were an issue, it could also potentially occur if a SN crashed. The key is whether database transactions across db files are committed as a group, using attach, or whether transactions are individually committed to each database file.
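If you script the copies, Python’s standard sqlite3 module exposes the same online backup mechanism via `Connection.backup`. A minimal sketch:

```python
import sqlite3

def safe_copy(src_path, dst_path):
    """Snapshot a possibly-live SQLite database using the online
    backup API (the programmatic equivalent of the CLI '.backup')."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)  # copies pages consistently under SQLite locking
    finally:
        src.close()
        dst.close()
```

The same caveats about cross-database consistency apply: each file is copied consistently on its own, but transactions spanning multiple db files are not snapshotted as a group.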

I appreciate this suggestion, but please don’t use this unless you run natively or in Docker on Linux. And don’t use this over a network protocol like SMB or NFS. This command runs the same risk of database corruption as running the calculator directly on the databases on those systems. File locking over these protocols is not reliable enough for SQLite’s locking mechanism.

So since it causes the same issues on systems that require copies, unfortunately this is not a good alternative for anyone.

1 Like

Great work with the script, thanks!

Quick question: I’m running Docker, so I’ve copied satellites.db, bandwidth.db, storage_usage.db, piece_spaced_used.db, reputation.db and heldamount.db to another location, ran the script and got the report.

Do I have to copy these files from the node each time I want to run the script, or will the first set of files I copied always show current data?

You’ll need to copy each time. Are you using Windows or macOS?

2 Likes

Hi, I’m running Unraid with Docker and pointing it to a drive in Unassigned Devices.

If I have to copy the files each time, I’ll probably not do this too often then.

In that case you don’t have to copy at all. As long as you’re just running Docker on unRAID without additional virtualization in between, you can just run on the live db files without problems.

2 Likes

oh awesome! many thanks.