Earnings calculator (Update 2024-04-14: v13.4.0 - Additional info on uncollected garbage and unpaid data - Detailed earnings info and health status of your node, including vetting progress)

Is there any date in the database for when the SN started operation? How long has the SN been running?

That’s not a bad idea to include. I don’t think there is a very reliable way to determine it and it differs per satellite. Maybe it can be derived from bandwidth rollups, but I’m not certain that data goes all the way back. I can look into it, but no promises. It has to be reliable otherwise it’s just showing misinformation.
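For anyone who wants to experiment in the meantime, something along these lines could approximate it from the bandwidth history. This is only a sketch: the database file name and the table/column names here are assumptions and may differ between node versions.

# Hypothetical query: earliest rollup interval per satellite as a rough "first seen" date
sqlite3 /path/to/storage/bandwidth.db \
  "SELECT hex(satellite_id), MIN(interval_start) FROM bandwidth_usage_rollups GROUP BY satellite_id;"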

Thanks for the tool!
Wanted to ask: would it make sense to show only the Egress volume in the total line?
Because for a node operator that is the number that matters most.

If I understand correctly, you’d want the total line to only count egress traffic. I think that would be confusing since all traffic is listed there. Furthermore repair and audit downloads are usually negligible, so you can just look at the download line to know how much egress there was.

That’s right… But!
As an option, in my example it could look like:
40.49 GB (103.17)
or
103.17 (40.49)
For me the first option is preferable, since the dollar amount on that line is based on the useful (egress) traffic rather than the total traffic.
The total bandwidth should of course still be shown, but it works well in brackets.

There’s already a line displaying the Egress and the estimated payment… Duplicating that line at the bottom is highly unnecessary.

BTW, if you want a quick guesstimate of Egress payout, you can just multiply the web dashboard egress value by $20.00 USD per TB.
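For example, a quick back-of-the-envelope check with a made-up egress figure of 425.5 GB:

echo "scale=2; 425500000000 * 20 / 10^12" | bc
# prints 8.51 (USD)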


I had to compromise a bit on this one, since nodes didn’t keep old bandwidth data prior to some time in April. I added a note to mention dates before May 2019 may not be correct.

v8.1.0 - First contact

  • Added first contact date to display when the node first dealt with each satellite
  • Made use of additional space to display slightly more descriptive uptime and audit scores
  • Optional note when first contact date is prior to May 2019. As nodes didn’t keep bandwidth history prior to some time in April, dates before May are not reliable. This is only displayed when dates before May 2019 are found.

Just tried the updated script and got this error message

Traceback (most recent call last):
  File "/usr/local/bin/storj_earnings.py", line 125, in <module>
    for data in con.execute(query):
sqlite3.OperationalError: no such table: piece_space_used
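For reference, a quick way to see which tables each database file actually contains (the path is just an example; use whatever directory you point the script at):

for db in /path/to/storage/*.db; do echo "== $db"; sqlite3 "$db" ".tables"; done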

Feel free to try my utilitarian BASH script posted below that pulls current usage stats from the API.

You’ll need bc, jq, cal, and curl… but you won’t need to stop your node or make copies of the sqlite database files.

#!/bin/bash

# Check that a host was given
if [ -z "$1" ]; then
   echo "Usage: $0 [host]"
   exit 1
fi

# Begin
host=$1
# Number of days in the current month (last day printed by cal)
days=$(cal $(date +"%m %Y") | awk 'NF {DAYS = $NF}; END {print DAYS}')
# Day of the month so far
day=$(date +%d)
# Month-to-date egress (bytes) and storage (byte-hours) summaries from the node API
egress=$(curl -s "$host":14002/api/satellites | jq .data.egressSummary)
disk=$(curl -s "$host":14002/api/satellites | jq .data.storageSummary)
# Egress is paid at $20 per TB
egress_pay=$(echo "scale=15; $egress/10^12*20" | bc)
# Storage is paid at $1.50 per TB-month; divide byte-hours by hours in the month to get the average
disk_pay=$(echo "scale=15; $disk/10^12/(24*$days)*1.5" | bc)
total=$(echo "scale=15; $egress_pay+$disk_pay" | bc)
per_hour=$(echo "scale=15; $total/($day*24)" | bc)
# Only 25% / 50% / 75% of earnings are paid out during months 1-3 / 4-6 / 7-9; the rest is held back
div4=$(echo "scale=15; $total/4" | bc)
div2=$(echo "scale=15; $total/2" | bc)
div3_4=$(echo "scale=15; $total*3/4" | bc)
times3_1=$(echo "scale=15; $div4*3" | bc)
times3_2=$(echo "scale=15; $div2*3" | bc)
times3_3=$(echo "scale=15; $div3_4*3" | bc)
times3_4=$(echo "scale=15; $total*3" | bc)
seconds=$(date +%s)

printf " Simple Storj Payout Estimate\n"
printf " -----------------------------\n"
printf " %s\n" "$(date -u)"
printf " UNIX Epoch: \t\t%d\n\n" "$seconds"
printf " Egress Estimate: \t\t$%.2f\n" "$egress_pay"
printf " Disk Usage Estimate: \t\t$%.2f\n" "$disk_pay"
printf " Earnings Per Hour: \t\t$%.3f\n\n" "$per_hour"
printf " Full Payout : $%.2f \t\tSurge x3 : $%.2f\n" "$total" "$times3_4"
printf " Months 1-3  : $%.2f \t\tSurge x3 : $%.2f\n" "$div4" "$times3_1"
printf " Months 4-6  : $%.2f \t\tSurge x3 : $%.2f\n" "$div2" "$times3_2"
printf " Months 7-9  : $%.2f \t\tSurge x3 : $%.2f\n" "$div3_4" "$times3_3"

Here’s the output for my node:

$ ./pay.sh localhost

 Simple Storj Payout Estimate
 -----------------------------
 Sat 21 Dec 2019 02:22:29 AM UTC
 UNIX Epoch: 		1576894949

 Egress Estimate:   		$8.51
 Disk Usage Estimate: 		$1.52
 Earnings Per Hour: 		$0.021

 Full Payout : $10.04   	Surge x3 : $30.11
 Months 1-3  : $2.51    	Surge x3 : $7.53
 Months 4-6  : $5.02    	Surge x3 : $15.06
 Months 7-9  : $7.53    	Surge x3 : $22.58

Watch out, n00b question: how do I get bc / cal to work on a Synology NAS?

@Alexey now that we’re back with a mostly up-to-date forum, it seems the state is from just before you converted this topic to a wiki. You mentioned I could do it myself, but I can’t see the option anywhere. I’m pretty sure the option is missing for me. Could you perhaps do it again?

Edit: Looks like I can do it for my current post but not for the top post. I think I can only convert my posts to wikis when they are less than a month old, same as how long I can edit. I assume that once a post is converted, though, that time limit no longer applies.

You don’t need to install anything extra on your NAS.

The BASH script connects to the SN API… So, all you need is a LAN connection to the host running the SN. My script isn’t pretty, but it works OK for my purposes, and doesn’t require that I stop the node.
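For example, from any Linux box on the same LAN that has those tools installed (hypothetical address; substitute your NAS’s IP):

./pay.sh 192.168.1.50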

I don’t want to clutter up this thread with my own stuff. I should have created a new thread.

For what it’s worth, I’ve been using the earnings calculator on Synology without stopping the node, running it against the live databases. It should be safe on Linux. Definitely don’t do this on Windows or macOS docker installations though.
In theory it should be safe on Windows GUI installations too, but I have not tested this. I’ve been overly careful with the warnings, but I think the issues are isolated to docker setups on OSes that use virtualization for docker.
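If you’d rather not touch the live databases at all, the cautious route is still to copy them first and point the script at the copy. A sketch with example paths only, ideally run while the node is stopped so the copy is consistent:

mkdir -p /tmp/storj-db-snapshot
cp /path/to/storagenode/storage/*.db /tmp/storj-db-snapshot/
python earnings.py /tmp/storj-db-snapshot/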


Done. It’s in the wrench icon below the post.


Thanks, I found it on a recent post, but the wrench icon isn’t there on older posts.

Anyway, I updated the top post with the latest info. Thanks for your help!


Why do I have different uptime scores between the dashboard (95.5 / 97.2 / 97.8 / 96.4) and the earnings calculator (0.999 for all satellites)?


Edit: let me actually add a little bit more to that. The web dashboard shows rather meaningless lifetime percentages. This is not what’s actually used to determine current reputation. The scores in this earnings calculator represent the current reputation scores for uptime and audits that are actually used by satellites in node selection and disqualification. They represent recent performance of the node rather than lifetime performance.
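As a rough illustration with made-up numbers (not the satellites’ exact formula): a node that missed a burst of checks long ago keeps a mediocre lifetime percentage, while its recent-performance score can still be near perfect.

# lifetime: every check ever counts
echo "scale=3; 9600 / 10000 * 100" | bc   # 96.000 %
# recent score: only recent checks count
echo "scale=3; 999 / 1000" | bc           # .999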


Thanks for the clarification, now it’s clear!

For Windows install users, I’ve created a PowerShell version of this script. I’m new to being an SNO and to PowerShell, so I hope it’s right…

# Pull month-to-date usage from the node API (adjust the IP to your node)
$j = (Invoke-WebRequest 'http://192.168.1.202:14002/api/satellites').Content | ConvertFrom-Json
$egress = $j.data.egressSummary   # bytes downloaded by customers so far this month
$disk   = $j.data.storageSummary  # byte-hours stored so far this month

$days = [datetime]::DaysInMonth((Get-Date).Year, (Get-Date).Month)
$day  = (Get-Date).Day

$TB = [Math]::Pow(10, 12)

# Egress is paid at $20 per TB; storage at $1.50 per TB-month (byte-hours / hours in month = average TB stored)
$egress_pay = $egress / $TB * 20
$disk_pay   = $disk / $TB / (24 * $days) * 1.5

$total    = $egress_pay + $disk_pay
$per_hour = $total / ($day * 24)
# Only 25% / 50% / 75% of earnings are paid out during months 1-3 / 4-6 / 7-9; the rest is held back
$div4     = $total / 4
$div2     = $total / 2
$div3_4   = $total * 3 / 4
$times3_1 = $div4 * 3
$times3_2 = $div2 * 3
$times3_3 = $div3_4 * 3
$times3_4 = $total * 3

" Simple Storj Payout Estimate"
" -----------------------------"
" Egress Estimate: {0:C}" -f $egress_pay
" Disk Usage Estimate: {0:C}" -f $disk_pay
" Earnings Per Hour: {0:C}" -f $per_hour
" Full Payout : {0:C}     Surge x3 : {1:C}" -f $total, $times3_4
" Months 1-3  : {0:C}     Surge x3 : {1:C}" -f $div4, $times3_1
" Months 4-6  : {0:C}     Surge x3 : {1:C}" -f $div2, $times3_2
" Months 7-9  : {0:C}     Surge x3 : {1:C}" -f $div3_4, $times3_3

@BrightSilence it looks like the latest update (v0.31.9) broke the Current Data part of the script.

aelita@StorjShare-VM:~/stearn$ python earnings.py /opt/storj/data

January 2020 (Version: 8.1.0)                   [snapshot: 2020-01-30 04:59:48Z]
                        TYPE            DISK       BANDWIDTH            PAYOUT
Upload                  Ingress                    413.54 GB        -not paid-
Upload Repair           Ingress                     14.10 GB        -not paid-
Download                Egress                       2.90 TB         57.92 USD
Download Repair         Egress                      64.36 GB          0.64 USD
Download Audit          Egress                       4.97 MB          0.00 USD
Disk Current            Storage      0.00  B                        -not paid-
Disk Average Month      Storage      2.55 TBm                         3.83 USD
Disk Usage              Storage      1.90 PBh                       -not paid-
_______________________________________________________________________________+
Total                                2.55 TBm        3.39 TB         62.39 USD

Payout and escrow by satellite:
SATELLITE       FIRST CONTACT   TYPE      MONTH 1-3       MONTH 4-6       MONTH 7-9       MONTH 10+
us-central-1    2019-04-23*     Payout   0.1463 USD      0.2927 USD      0.4390 USD      0.5853 USD
Status:OK (Up.0/Aud.1000)       Escrow   0.4390 USD      0.2927 USD      0.1463 USD      0.0000 USD

europe-west-1   2019-06-01      Payout   0.4653 USD      0.9306 USD      1.3958 USD      1.8611 USD
Status:OK (Up.0/Aud.1000)       Escrow   1.3958 USD      0.9306 USD      0.4653 USD      0.0000 USD

asia-east-1     2019-06-06      Payout   0.2181 USD      0.4362 USD      0.6543 USD      0.8723 USD
Status:OK (Up.0/Aud.1000)       Escrow   0.6543 USD      0.4362 USD      0.2181 USD      0.0000 USD

stefan-benten   2019-04-22*     Payout  14.7686 USD     29.5371 USD     44.3057 USD     59.0743 USD
Status:OK (Up.0/Aud.1000)       Escrow  44.3057 USD     29.5371 USD     14.7686 USD      0.0000 USD

* First contact may be earlier, nodes didn't keep contact data before April 2019

It also appears that maybe the uptime score broke with the update as well. Just a heads up for ya.
