It wasn’t obvious to me, since I actually changed the code of earnings.py before I figured it out… and sure, in hindsight it makes perfect sense, like I suggested above…
but that does assume prior knowledge.
I don’t quite understand what you mean by this… or rather, I understand, but what are you referring to with the status OK?
Because the problem was that the warnings removed the OK, as shown in the screenshots above.
I don’t agree that it’s clearly obvious without prior knowledge when it looks like this.
I wasn’t expecting my node to be done vetting on any satellites yet; maybe that’s what got me, and I’ll agree it’s very much an edge case… in many cases it would be fine, because one would be able to see unvetted nodes with errors.
So I guess this particular case is very rare… but still.
I will say, though, that I can see how it’s difficult to make a better solution… no matter how I want to fix it, I find issues with how it’s written.
You can’t add the % in front of the warning, because that’s confusing, and you can’t (or don’t want to) keep the % vetted, because it takes up space.
I think I would just make a minimal change and add the 100% vetted at the end of the warnings, like it does when the node isn’t vetted… there seems to be room enough for it, and I don’t think it will look too confusing.
It shouldn’t be possible to confuse it with anything else, and it would be a minimal code change: basically just copy-pasting already existing code.
Hello everyone, I am unable to run this script on my Debian box. I have a RAID 5 under /mnt/md0/storage where all data from Storj resides, but when I copy earnings.py into that folder (or into the storage folder under /mnt/md0/storage) and run it, I get this message: “storage]# sudo python earnings.py
Cannot establish connection to the host.”
What am I doing wrong? Can anyone help me?
You need to specify the location of the storage folder on the command line, as it says at the top of the post.
I’m using it like this: sudo python earnings.py /mnt/storj/hdd1/storage
If you run it with sudo, I don’t see a reason why it wouldn’t work. With sudo it has the privileges needed to read those files.
Small update to switch to the new satellite names shown in the dashboard. I’m not sure if europe-north-1 and saltlake will change later as well. So another small update may follow if that happens.
When I run this (substituting my data path, which is /mnt/storj), I get:
`ERROR: bandwidth.db not found at: "/mnt/storj/storage" or "/mnt/storj".`
@BrightSilence, you pointed out this is a permissions problem; I did try running it with “sudo” and that was successful (and I appreciate the security caution you raised).
I’d like to figure out how to do this properly - that is, how to grant the appropriate permissions to my user account, so I don’t need to use sudo. But, I’m confused on a couple points.
The error message seems a little off; it seems like the script should be able to find the data, it just can’t access it. OK, so the error message is just not perfect; no big deal there.
But the file in question is already globally readable! It has permissions `-rw-r--r--`.
So, do I have to make every file in that directory readable by my linux user? Or are there a finite number of key files I need to chmod to make it work?
In my case, I have the original db files owned by my linux user while keeping the root group (root permissions bypass the write restrictions that would have been enforced anyway, while still keeping the assigned user intact). I don’t have all the current files owned by my linux user and the script still works (same permissions as yours). I’m on Kubuntu 18.04 (it’s still primarily headless for me though), so maybe it’s something like an SELinux issue? Personally, I’m tempted to just do `chown myuser:root -R /opt/storj/data/storage` even though it works as-is, just so I never have to worry about it.
EDIT:
Upon further looking, that chown would be a bad idea… Maybe if I were only doing it for `./*.db` (Discourse keeps eating asterisks as italics), but otherwise, that’s a bad idea.
Ah, thank you. My two issues were related, turns out.
The script couldn’t find the file because it couldn’t access the directory it was in.
By running sudo chmod 711 /mnt/storj/storage I made the directory globally executable, which solved the problem!
Satisfying, thanks for the tips.
I’m sure there’s a more elegant way to do it, e.g. using permissions 710 and adding my linux user to the appropriate group. But for now, I’m happy with this (and I think my machine is secure enough that I’m not worried about anyone accessing it to cause harm in this directory).
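For anyone else hitting this: opening a file requires search (execute) permission on every directory along its path, not just read permission on the file itself, which is why a world-readable bandwidth.db was still unreachable. A small illustration of the relevant mode bits using Python’s stat module (my own sketch, unrelated to earnings.py’s code):

```python
import stat

def can_traverse(mode, *, owner=False, group=False):
    """Check whether a directory mode allows traversal (search).

    Reaching a file like storage/bandwidth.db requires the execute
    (search) bit on every directory in the path; read permission on the
    file alone is not enough.
    """
    if owner:
        return bool(mode & stat.S_IXUSR)
    if group:
        return bool(mode & stat.S_IXGRP)
    return bool(mode & stat.S_IXOTH)  # "other" users, e.g. your login user

print(can_traverse(0o700))  # False: only the owner can enter the directory
print(can_traverse(0o711))  # True: chmod 711 adds the execute bit for everyone
```

This is also why `chmod 711` on the directory was enough without touching any of the db files.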
I have a small update today to implement a more realistic vetting progress measurement. Since the number of audits you get depends on how much data you have, this process is by definition non-linear. The first audit takes much longer than the next one and so on. This update accounts for that and instead of showing just how many audits you had, it now shows a calculated linear progress percentage (in addition to absolute audit numbers). Let me know if this is more or less accurate in your experience. It should be a lot more accurate, but I’d love to get some feedback as I don’t regularly go through the vetting process myself.
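For the curious, the idea behind the non-linear correction can be sketched like this: the audit rate is roughly proportional to stored data, and data grows roughly linearly during vetting, so the cumulative audit count grows roughly quadratically in time; taking a square root maps it back to an approximately linear time scale. The 100-audit threshold and the exact square-root mapping below are my own illustrative assumptions, not necessarily the script’s actual formula:

```python
import math

VETTING_AUDITS = 100  # assumed audits needed per satellite to be vetted

def vetting_progress(audit_count):
    """Map a raw audit count to an approximately time-linear progress %.

    Audits arrive at a rate proportional to stored data, which itself
    grows roughly linearly while unvetted, so audit totals grow roughly
    quadratically in time; sqrt() undoes that to estimate elapsed time.
    """
    fraction = min(audit_count, VETTING_AUDITS) / VETTING_AUDITS
    return math.sqrt(fraction) * 100.0

print(vetting_progress(25))   # 25 of 100 audits -> 50.0, about halfway in time
print(vetting_progress(100))  # 100.0
```

Under this model, being a quarter of the way through the audits really does mean being about halfway through the wait.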
Tiny update today; no real need to update if you don’t feel like it. I just tweaked some conditional statements to ensure lines that aren’t relevant don’t show up with a value of 0. In most cases this didn’t happen to begin with, but due to the use of floats, which are an inexact data type, the values were sometimes slightly above 0. I’ve corrected for this by adding a margin to the conditional statements.
Changelog
v10.2.1 - Fix: 0 lines showing for rounding errors
Implements a small fix for lines with 0 showing up in case of float rounding errors
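The float issue and the margin fix can be illustrated like this (the margin value here is my own pick for illustration, not necessarily the one the script uses):

```python
MARGIN = 0.0005  # illustrative threshold; anything below is treated as zero

def should_show_line(value, margin=MARGIN):
    """Show a payout line only when its value is meaningfully above zero."""
    return value > margin

# Float arithmetic can leave a tiny positive residue where the exact
# result would be 0, which a plain `value > 0` check would still display:
residue = 0.1 + 0.2 - 0.3
print(residue > 0)                # True: the residue is slightly above 0
print(should_show_line(residue))  # False: hidden by the margin
print(should_show_line(0.01))     # True: a real value still shows
```

The margin just needs to be smaller than the smallest value worth displaying, so real lines are never hidden.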
At what % in the audit disqualification (Disq) column printed by this script is my node permanently disqualified?
This is the output from a node I rescued (ddrescue) from a failing 1 TB disk onto a replacement 6 TB disk. In the process I lost around 15 MB to bad blocks, hence the failing audits. The downtime is the roughly 3 days the rescue operation took.
I know some of you may look down on running this node because I lost data. But bad blocks happen, and the network is fault tolerant. If I fail enough audits, the node is toast.
June 2021 (Version: 10.2.1) [snapshot: 2021-06-20 00:16:23Z]
TYPE PRICE DISK BANDWIDTH PAYOUT
Upload Ingress -not paid- 41.14 GB
Upload Repair Ingress -not paid- 65.97 GB
Download Egress $ 20.00 / TB 65.33 GB $ 1.31
Download Repair Egress $ 10.00 / TB 26.24 GB $ 0.26
Download Audit Egress $ 10.00 / TB 318.46 KB $ 0.00
Disk Current Storage -not paid- 990.13 GB
Disk Average Month Storage $ 1.50 / TBm 555.53 GBm $ 0.83
Disk Usage Storage -not paid- 399.98 TBh
________________________________________________________________________________________________________+
Total 555.53 GBm 198.68 GB $ 2.40
Estimated total by end of month 876.63 GBm 313.52 GB $ 3.79
Payout and held amount by satellite:
NODE AGE HELD AMOUNT REPUTATION PAYOUT THIS MONTH
SATELLITE Joined Month Perc Total Disq Susp Down Earned Held Payout
us1.storj.io | 2020-07-17 12 | 0% $ 0.37 | 0.00% 0.00% 7.10% | $ 0.1174 $ 0.0000 $ 0.1174
Status: WARNING: Downtime high
us2.storj.io | 2021-01-07 6 | 50% $ 0.00 | 0.00% 0.00% 8.97% | $ 0.0022 $ 0.0011 $ 0.0011
Status: WARNING: Downtime high
eu1.storj.io | 2020-07-17 12 | 0% $ 0.40 | 0.00% 0.00% 11.44% | $ 0.1821 $ 0.0000 $ 0.1821
Status: WARNING: Downtime high
ap1.storj.io | 2020-07-17 12 | 0% $ 0.31 | 10.18% 0.00% 8.84% | $ 0.0772 $ 0.0000 $ 0.0772
Status: WARNING: Audits failing
europe-north-1 | 2020-07-17 12 | 0% $ 1.49 | 11.28% 0.00% 5.36% | $ 0.3733 $ 0.0000 $ 0.3733
Status: WARNING: Audits failing
saltlake | 2020-07-17 12 | 0% $ 7.22 | 0.00% 0.00% 10.06% | $ 1.6503 $ 0.0000 $ 1.6503
Status: WARNING: Downtime high
_____________________________________________________________________________________________________________________+
TOTAL $ 9.79 $ 2.4024 $ 0.0011 $ 2.4013
100% is where you get disqualified.
And yeah, I agree with @kevink, your node should be fine if only 15 MB was lost. Just keep an eye on the scores, but I think you’ll see them recover a little while your node gets more data. For now though, they may be quite erratic. That has to do with how the scoring is currently implemented. It’ll jump up and down, and that doesn’t really mean your node is doing better or worse. But if data loss is below 10%, you should be fine. Long term you want it below 2% at least though, as changes to the scoring mechanism are planned that will become stricter. Those changes should also stabilize the score though.
My Storj node runs on a Synology in a Docker container. When I run your script, can I do this while my node is online? Or is it better to shut it down, copy the data, and then run it?
Your first post says to only do that on Windows or Mac.
@BrightSilence I seem to remember telling you I’d give you updated numbers in six weeks, about six weeks ago. But I can’t find the thread where that was. I’ve certainly started to fall substantially behind your earnings calculator (I’m about 11 weeks in and have accumulated only 240 GB of data, and only 3 satellites have vetted me - one only just yesterday) but I would guess that just has to do with reduced demand and/or a surge in SNOs. Anyway, here’s the data from your script, as of now:
The script still assumes constant ingress patterns, but unfortunately ingress has dropped quite a bit since June. So I think it’s doing exactly what it’s supposed to do. Thanks for reporting back though, this confirms my calculations are actually matching real world behavior (when corrected for changes in ingress). If only I had a crystal ball I would correct for future ingress patterns too. But for now this will have to do.
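For reference, the constant-ingress assumption boils down to a simple linear projection of the month-to-date totals. This is my own sketch of the idea, not the script’s actual code, though with the snapshot date from the June output above and elapsed time counted in whole days it happens to reproduce the $3.79 estimate from the $2.40 earned:

```python
import calendar
import datetime

def end_of_month_estimate(value_so_far, snapshot):
    """Linearly extrapolate a month-to-date total to a full-month value,
    assuming the daily rate so far stays constant (constant ingress)."""
    days_in_month = calendar.monthrange(snapshot.year, snapshot.month)[1]
    # Whole days elapsed since the start of the month; a simplification.
    days_elapsed = max((snapshot - snapshot.replace(day=1)).days, 1)
    return value_so_far / days_elapsed * days_in_month

# Snapshot from the June 2021 output above: $2.40 earned by 2021-06-20.
snapshot = datetime.date(2021, 6, 20)
print(round(end_of_month_estimate(2.40, snapshot), 2))  # 3.79
```

Which is exactly why the estimate overshoots when ingress drops mid-month: the projection can only assume the past rate continues.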