It requires multiple IP subnets. Multiple nodes alone won’t do it. Currently the largest nodes are around 19TB and earn about $55-$70 per month. It fluctuates a bit.
Earnings calculator (Update 2022-12-08: v12.3.0 - Now compatible with node version 1.62+ - Detailed earnings info and health status of your node, including vetting progress)
The node that survived deletion seems to be earning $7 a TB currently so there sure is variation.
@hoarder how do I see this on the blockchain?
3 posts were split to a new topic: Whats up with storj coin price action today. Whats the news?
Small update today to add the QA satellite. For more info on this satellite, check: Please join our public test network
v10.3.0 - QA satellite added
- QA satellite added
I wanted to publish this one last month, but there was some confusion about whether the bonus would be included in any of the node tables. That doesn’t seem to be the case, so unfortunately the calculator currently relies on a hard-coded 10% bonus for payouts through zkSync starting October 2021. As always, the latest version is available at the GitHub linked in the top post!
Here’s a sample of what that looks like (per satellite and totals):
Note: This system needs the receipt link to be available in the node databases in order to determine that zkSync was used for the payout. At the moment this requires a node restart for some reason (the transaction link on the node dashboard is not available until restart). If it isn’t visible, try restarting your node.
v10.4.0 - Add zkSync bonus
- zkSync bonus has been added for nodes using zkSync for payouts for October 2021 and later
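For anyone curious how a bonus like this can be applied, here is a minimal sketch of the idea: read paystub rows from the node database, check whether the receipt link indicates zkSync, and add 10% for periods from October 2021 onward. The table and column names (`paystubs`: `period`, `paid`, `receipt`) and the `"zksync:"` receipt prefix are assumptions for illustration; the real node database schema may differ between versions.

```python
import sqlite3

ZKSYNC_BONUS = 0.10  # 10% bonus on zkSync payouts, October 2021 onward

def bonus_for(paid, receipt, period):
    """Return the bonus amount for one paystub row.

    zkSync receipts are assumed to look like "zksync:0x...". Periods are
    "YYYY-MM" strings, so plain string comparison works for the cutoff.
    """
    if receipt and receipt.startswith("zksync") and period >= "2021-10":
        return paid * ZKSYNC_BONUS
    return 0.0

def payouts_with_bonus(db_path):
    """Sum paid + bonus per period from the node's paystub table.

    Table/column names here are assumptions for illustration only.
    """
    con = sqlite3.connect(db_path)
    totals = {}
    for period, paid, receipt in con.execute(
            "SELECT period, paid, receipt FROM paystubs"):
        totals[period] = totals.get(period, 0.0) + paid + bonus_for(paid, receipt, period)
    con.close()
    return totals
```

Keeping the bonus hard coded like this is exactly why the receipt link needs to be present: without it, there is no way to tell a zkSync payout apart from an L1 payout.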
With the new stefan-benten test satellite, I wanted to make a small change to include its name in the script so it won’t show up as unknown. Turns out, satellite addresses can now be pulled from satellites.db. So, I went down the rabbit hole to pull the names from there, found out they were going to be too long for my layout and decided to redesign the whole payouts table. Functionality-wise it’s not the biggest update, but it required quite some rewriting of the code to make it all a lot more readable. I hope you like it. Let me know what you think!
v11.0.0 - Redesign + dynamic satellite names
- Pull satellite name from database
- Redesign payouts table
- Note: Satellites.db is now required to run the script
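The lookup itself can be sketched roughly like this: query satellites.db for each satellite’s address and trim it down to something that fits a table header. The column names (`node_id`, `address`) are assumptions for illustration; check the actual schema of your node’s satellites.db before relying on them.

```python
import sqlite3

def display_name(address):
    """Trim the port from a satellite address for a compact table header."""
    return address.split(":")[0] if address else "unknown"

def satellite_names(db_path="satellites.db"):
    """Return {node_id: display name} from satellites.db.

    Column names (node_id, address) are assumptions for illustration;
    the real schema may differ between node versions.
    """
    con = sqlite3.connect(db_path)
    names = {nid: display_name(addr)
             for nid, addr in con.execute("SELECT node_id, address FROM satellites")}
    con.close()
    return names
```

Falling back to "unknown" keeps the script working if a satellite ever appears before its entry lands in the database.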
I have a pretty new node here. It shows 30% vetting status at 3/100 audits.
Shouldn’t it be 3%?
Nope, the percentage estimates how far along you are time-wise. The number of audits increases exponentially over time, and the percentage corrects for that. Feedback is welcome if you think the percentage is inaccurate though.
More info here:
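To illustrate the correction: if the audit count grows roughly exponentially during vetting, then elapsed time is proportional to the logarithm of the audit count, so a time-based progress estimate can divide log(audits) by log(target). This is an illustrative model of the idea, not the calculator’s exact formula.

```python
import math

def vetting_percent(audits, target=100):
    """Estimate vetting progress as a fraction of elapsed *time*, not audits.

    Under exponential audit growth, time elapsed ~ log(audits), so
    progress = log(audits) / log(target). Illustrative model only.
    """
    if audits <= 0:
        return 0.0
    if audits >= target:
        return 1.0
    return math.log(audits) / math.log(target)
```

With this model, 3/100 audits already lands at roughly 24% progress, which matches the kind of number the calculator shows for a node that early in vetting.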
I feel like the percentage is inaccurate but I will keep an eye on it. My node has been running since February 4th, and I am estimated to be 61% vetted on the eu1 satellite with 16/100 audits. I have only watched it for a couple of days but the audits seem to be around 1/day for this satellite currently. If the estimation was correct I would be done with vetting before the end of march, even though eu1 only had 9GB of ingress traffic so far (can I find out how much data a certain satellite stores?). US1 is less vetted although it had 18GB of ingress.
I appreciate the feedback, but I’ll need a bit more info in order to be able to make adjustments. I would indeed agree that vetting should be done by the end of March for that satellite, but I’m not seeing anything that contradicts that yet. The entire point is that the number of audits will go up over time. Of course this process also depends on the ingress for that satellite, and customer behavior is never 100% predictable. If ingress slows down over time, so will the vetting speed. If ingress speeds up, the vetting speed will increase faster too.
So in order to see if there is anything wrong, I would need more snapshots over time with timestamps, how many audits and the percentage calculated. As well as the ingress graph for the corresponding satellite. (keep in mind that this one disappears at the end of the month, so it might be worth screenshotting at the end of this month.)
You can indeed see how much is stored by each satellite on the dashboard, by selecting the corresponding satellite from the dropdown. The graph shows TBh per satellite; you can divide the daily numbers by 24 to get an estimate, though due to how this is calculated the graph fluctuates a lot. You can also just look at the blobs folder for the satellite and see how much data is in there. Use this for reference on which blobs folder belongs to which satellite: Satellite info (Address, ID, Blobs folder, Hex)
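The divide-by-24 conversion is simple enough to show directly: a node holding S TB for a full 24-hour day reports S × 24 TBh, so dividing the daily figure by 24 recovers the average amount stored that day.

```python
def stored_tb_estimate(daily_tbh):
    """Convert a day's TBh figure from the dashboard into average TB stored.

    A node holding S TB for a full 24h day reports S * 24 TBh,
    so dividing by 24 recovers the average stored amount for that day.
    """
    return daily_tbh / 24.0
```

So a dashboard reading of 48 TBh for a day corresponds to an average of 2 TB stored.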
I will try and take screenshots of the ingress graph for us1 and eu1 satellites every day as well as the output of the earnings calculator vetting table and report back once I have a few weeks of data.
That would be awesome, thanks for your help on this!
Recently repair egress has started to count towards reputation as well. I think that means it would also count towards vetting, but I’m not entirely sure. New nodes shouldn’t see a lot of repair egress yet as all pieces are new and probably haven’t decayed a lot. But this will also make it a little less predictable. However, repair should roughly follow an exponential curve as well from the moment it kicks in. @littleskunk are you able to confirm whether repair egress counts towards vetting?
So it’s only been a week, but so far my vetting seems to continue linearly rather than exponentially. Ingress traffic has grown a bit from 1GB/day to around 2GB/day, about 1/5 of my ingress traffic is quickly deleted again and moved into trash, and vetting continues at around 1/day for the two satellites with the most traffic, eu1 and us1. As for repair egress, I don’t have any so far.
This is great info, thanks. It does seem that some adjustment may be warranted. It would be nice to see progress in about a week again. No need to post daily though.
What makes it slightly more difficult to determine this in a reasonably reliable way is that audits generally scale with data stored, but there is a system that prioritizes audits for new nodes. The smaller the share of data on a new node compared to the total stored, the more linear the effect will be. And it seems to be slowly moving in that direction. So the question is what part of this process is now linear. It used to be almost negligible, but seems to be more significant now.
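The mix described above can be sketched as a toy model: a fixed-rate audit stream for unvetted nodes contributes a linear term, while per-segment audits track stored data that grows roughly exponentially under steady ingress. All parameter values here are made up purely for illustration; the real rates depend on network behavior.

```python
import math

def expected_audits(days, linear_rate=1.0, seg_rate=0.1, growth=0.05):
    """Toy model of cumulative audits on a new node.

    linear_rate:  audits/day from the system prioritizing unvetted nodes.
    seg_rate, growth: per-segment audits track stored data, modeled as
    exponential growth; integrating seg_rate * e^(growth*t) from 0 to
    `days` gives seg_rate * (e^(growth*days) - 1) / growth.
    Parameter values are made up for illustration.
    """
    linear = linear_rate * days
    exponential = seg_rate * (math.exp(growth * days) - 1.0) / growth
    return linear + exponential
```

Early on, the linear term dominates, which is exactly why a correction tuned for purely exponential growth overestimates progress for young nodes.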
It seems vetting depends entirely on some different, external factor. I had a lot of vetting for a few days and now next to no change since the 7th of April.
It’s at the core still a random process, which is what makes it so hard to create good predictions. But I feel I now have enough data to show what I’ve been doing with it, though not enough yet to draw definitive conclusions. I’ll walk you through some of my early findings though.
I plotted your numbers on a time line based on weekly reports. Your previous post didn’t include timestamps for each image, so I just plotted the first and last one. More detail would likely just add more noise anyway.
Here’s what I found
Dashed lines are the predicted progress.
As you can see the actual number of audits does start to show the exponential growth I’m trying to compensate for, but it is less pronounced than I was expecting and less pronounced than it used to be as well. As a result you see the percentage display overcorrects for it.
Based on the numbers you provided, I’ve been tuning the formula to better reflect the actual ratio between linearly generated audits per unvetted node and per-segment audits, which are influenced by the amount of data stored. I came up with the following preliminary graph.
With these adjustments the lines look pretty linear, which is the intention, with the exception of ap1. But as a smaller satellite, ap1 is more susceptible to random fluctuations.
It’s not enough for a final update just yet; I fear I may have gone too far in the other direction for when the process moves further along. So if you could report again about once a week, that would be awesome. If anyone else wants to contribute roughly weekly updates on their vetting progress, that would be greatly appreciated as well.
I will report back in another week. On another note though, my ingress traffic has grown from 1GB/day to 2-3GB/day. Is vetting no longer a binary status but a gradual progression, or is this just random and attributable to growing network usage? I know repair traffic has been higher than usual due to Storj increasing the minimum number of pieces that need to be healthy.
It’s that mostly.
Thanks again for your help!