Realistic earnings estimator

Correct, but keep in mind this is very theoretical, and in practice you’ll likely see it fluctuate over time. You may see significant drops before you even get to that point during bad times, and go far beyond it during good times. It’s just a rough indication of a soft limit. There is also likely to be a certain subset of data that will “never” be removed, which could lead to endless, but very slow, growth.
I guess what I’m saying is: don’t buy hundreds of terabytes worth of HDDs expecting to fill them.


Can I close this voting to release some votes to the participants? The wiki should be editable still.
Or maybe just move from the voting category?

  • Close this voting
  • Move out of the voting category
  • Leave as is

0 voters

Closing the voting would also prevent people from responding, so please don’t do that.

Right now the votes represent people voting for an official earnings estimator. People can still retract their votes if they want to vote on something else as well.

If no official version is being considered, then I guess it can be moved if we can get a definitive response on that.


I do not expect the “official” earnings estimator any time soon. So, your version is the only available version at the moment.

Hey @BrightSilence – love the estimator you’ve created and the consistently excellent replies you post across the forum. Thank you.

Wouldn’t the 40TB soft limit you’ve mentioned above be a 40TB per /24 limit and not necessarily a 40TB per node limit?


Aww thanks! Yes that is correct. It’s a per /24 subnet soft limit. However, deletes have been really low recently, so the soft limit might go up. It’ll just take quite a while to get there.


I’ve made a few adjustments to the metrics used in the estimator.

Recently we’ve seen more downloads happening. My nodes together had more than 15% of the stored amount downloaded. Newer nodes may see even better results, as the downloads seem to hit new pieces at a higher rate. Deletes have also nearly stopped in the second half of the month; I saw less than 1% of stored data deleted this month. Ingress is unfortunately also still a bit lower than previous estimations. Because of this, I changed these metrics:

Max. Ingress: 1TB -> 0.8TB
Delete percentage: 2.5% -> 1%
Egress percentage: 10% -> 15%
Repair percentage: 4% -> 5%

All in all, this means we make quite a bit more money per TB than previously predicted, but unfortunately it will also take a little longer to collect data. The soft cap has moved up quite a bit, all the way to 80TB, but don’t go out buying HDDs just yet. At the current rate, you will only have about 55TB filled after 10 years. So start with sharing what you have and only buy more HDDs as needed.

As always, this represents the network as it is now. Things may change drastically in the future!
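The soft cap and 10-year figure above can be checked with a quick simulation. This is just a minimal sketch, assuming a flat 0.8TB/month of ingress and a constant 1% of stored data deleted every month (the updated metrics above; real months will vary):

```python
def simulate_fill(months, ingress_tb=0.8, delete_rate=0.01):
    """Each month adds a fixed amount of ingress and removes a fixed
    percentage of whatever is already stored."""
    stored = 0.0
    for _ in range(months):
        stored += ingress_tb - delete_rate * stored
    return stored

# Soft cap: the steady state where monthly deletes equal monthly ingress.
soft_cap = 0.8 / 0.01                 # 80 TB
after_ten_years = simulate_fill(120)  # a bit over 55 TB
```

The curve flattens out as it approaches the cap, which is why the last few TB take so long to accumulate.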

For new node operators, you can still fill up 8TB in about a year and you’d make about $40 per month when that is filled. That’s still a great deal.
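For the curious, that $40/month figure is consistent with the estimator’s percentages and the payout rates in effect at the time. A rough sketch, assuming $1.50/TB-month for storage (mentioned later in this thread) plus $20/TB egress and $10/TB repair, rates which are my assumption and not stated here:

```python
def monthly_payout_usd(stored_tb, egress_pct=0.15, repair_pct=0.05,
                       storage_rate=1.5, egress_rate=20.0, repair_rate=10.0):
    """Rates in USD/TB; egress and repair are expressed as the fraction
    of stored data transferred out per month."""
    storage = stored_tb * storage_rate             # 8 TB   -> $12
    egress = stored_tb * egress_pct * egress_rate  # 1.2 TB -> $24
    repair = stored_tb * repair_pct * repair_rate  # 0.4 TB -> $4
    return storage + egress + repair

monthly_payout_usd(8)  # → 40.0
```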

For everyone: as long as that is still a great deal, market effects mean it will attract new SNOs, and ingress is likely to drop further as it gets split among more SNOs. That’s the nature of the beast. Getting in early becomes more important as a result, since it gets you in while the ingress is still good and gives you a head start on the rest. So if you’re reading this, nice job on starting early!


I sincerely apologize for the ever more obnoxious signals at the top of this calculator to please not request edit access. Despite my many attempts to make clear that I won’t grant these requests, my inbox keeps being flooded with frequent access requests. To everyone who has followed instructions and simply copied the calculator, I thank you! These messages aren’t for you.

You’d think this latest version would make it hard to miss. But despite this, I still receive requests. I may eventually move this sheet to a throwaway email address so I can simply ignore the emails. I already contacted Google Sheets support, but there is apparently no way to block the feature that allows people to request access. Please be kind to my mailbox and follow the instructions. :slight_smile:


If the emails have a distinctive subject or sender, you can set up a rule to automatically trash the message. I’m not sure if this is something you’ve tried, but I use it at work for system notifications that don’t apply to my role directly.


So that would be a rough average of $5/TB/month of storage, assuming you gave it all of the egress bandwidth it could possibly want, correct? Or am I digesting this piece of information incorrectly?

I am able to provision bandwidth stupidly cheap, but not so much with storage… So I am trying to evaluate whether I am in the right market to leverage this from my provider for the long haul.

You missed a main point:

Before that point you will likely receive much less. I would suggest using the spreadsheet from this topic: copy it to your Google account, then plug in your numbers. You will get an estimation based on current average usage and all the other limits (like vetting, building trust with a gradual decrease in the amount withheld, etc.).


Yes, what Alexey said. But also, that number was based on one of the better months. Use the calculator to get the best idea.


Awesome thanks very much for the clarification on such matters. :+1:

Alrighty, so according to this it seems like you need roughly 8TB of storage per 1TB of egress (0.6TB for egress and 0.4TB for repair) once storage reaches actual full usage? Am I reading that correctly?

Honestly, I don’t understand what you mean by that. And your spreadsheet is broken.
You need 8TB of storage for what? What is your profitability target? Have you accounted for how much money you will pay to the datacenter before your node is filled up? Have you calculated how much time that may take?
Please try to formulate it in different words.

There is no obvious dependency between egress traffic and allocated space. Even the proportion between egress and repair traffic is questionable and can only be taken with some probability.

In general you should see more egress traffic with more space used, but this dependency is non-linear, and extending the estimation to the amount of allocated space has an even greater estimation error.

It’s much simpler to talk in terms of time. The 8TB could be filled roughly after a year based on current stats; then you may have a target income per month, if the usage doesn’t change.
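The “roughly a year” figure follows from the ingress and delete numbers earlier in the thread. A sketch assuming 0.8TB/month of ingress and a 1% monthly delete rate:

```python
import math

def months_to_fill(target_tb, ingress=0.8, delete_rate=0.01):
    """Closed form of stored(n) = cap * (1 - (1 - delete_rate)**n),
    solved for n, where cap = ingress / delete_rate."""
    cap = ingress / delete_rate
    return math.log(1 - target_tb / cap) / math.log(1 - delete_rate)

months_to_fill(8)  # about 10.5 months, i.e. roughly a year
```

Note this ignores vetting, during which a new node receives only a small fraction of normal ingress, so a real node would take somewhat longer.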


I’m starting to understand this now; thanks for explaining these latter points to me. :+1:


If I may ask, what are the good and the bad months? So I can take that into account with vetting nodes, etc.

Thank you.

Give me a moment while I fetch my crystal ball.

We’ve seen months that were half as profitable and up to twice as profitable as the average. But it’s impossible to predict, and with the network still in its early stages, that average may shift a lot as well. The estimator shows past behavior, but is no guarantee of future behavior.


A safe assumption is always $1.5/TB stored :smiley: (since you get paid $1.5/TBm, that would be storage with almost no egress).
The ingress, however, has been rather variable; some “bad” months were just around 500GB/month of ingress. So it takes a long time to fill 8TB.


@HisEvilness additionally, if your download bandwidth isn’t the best, you may actually lose the data you’ve stored if we go through a large delete spell. The faster nodes will win new data more often, and if a large customer purges data (say, an old archive), you may see your total used space decrease. There is a balancing act to it, where in a perfect (and simple) world it would look something like:

your_new_data = (your_download_bandwidth / global_network_download_bandwidth) * time
your_deleted_data = global_deletions * (your_previous_space / previous_global_storage_space)
used_space = your_previous_space + your_new_data - your_deleted_data

However, it isn’t purely distributed based on your percentage of global bandwidth; latency also matters. Additionally, statistics come into play, further skewing toward faster nodes (especially once the network grows large enough that small and slow nodes are no longer required to meet demand).
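The idealized formulas above can be made runnable. A sketch with hypothetical numbers, reading the bandwidth-share term as your share of the total network ingress over the period (an interpretation on my part):

```python
def used_space_next_period(prev_space, your_bw, global_bw,
                           global_ingress, global_deletions, global_space):
    """One idealized step: new data proportional to your share of network
    bandwidth, deletions proportional to your share of stored data."""
    your_new_data = (your_bw / global_bw) * global_ingress
    your_deleted_data = global_deletions * (prev_space / global_space)
    return prev_space + your_new_data - your_deleted_data

# Hypothetical numbers: 8 TB stored, 1% of network bandwidth, 100 TB of
# network ingress, 50 TB deleted network-wide out of 10,000 TB stored.
used_space_next_period(8.0, 1.0, 100.0, 100.0, 50.0, 10000.0)  # → 8.96
```

In this toy step the node gains 1 TB from its bandwidth share and loses only 0.04 TB to deletes, illustrating how a faster node grows even while the network is deleting data.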