Realistic earnings estimator

So yes, you were sitting on a bunch of unused hard drives, or at least hard drive space.
I had a few TB free on my file server as well, but creating a node would mean attaching the storage using iSCSI and splitting the node across two servers (compute vs. storage), which would not help reliability. And Storj said that the storage had to be fast (SSD cache) to get a good success rate.

But in general, I think you do have to buy at least something to run a node, be it some hard drives, a UPS, a Raspberry Pi or something else.


That's a reasonable goal for 1 node. How old is it, to generate such revenue?

A little more than a year. But since there have been network wipes, the last one prior to the beta launch, the time it took to fill up to this amount counts from the start of the beta. I'm not entirely sure when that was anymore. But I think it's fair to say you can expect something like this by about the time your node gets out of the held amount period, given you have enough HDD space and high enough bandwidth of course. And… historic performance is no guarantee of future performance, etc.


Thanks again,
how much space is enough for a performance like this?

It never went over 11TB for me.


Same here, it's been going slowly up since the beginning. Currently at 84%. I expect it to continue to rise, but… is it ever going to reach 100%? How bad would it be if it got stuck at 95% or something like that in the long term?

not late at all to start again.

This number doesn’t matter. What matters is recent performance. A new downtime tracking system is being discussed here: Blueprint: tracking downtime with audits

The lifetime percentages never really mattered and are just an indication. As long as the number isn’t going down, you’re doing fine.


Hello, good morning.
My node is stuck at 96% at the moment. I don’t know if it’s going to increase any more this month, because after being offline for 24 hours I don’t think it will go up further. The maximum is five hours.
I’ll wait until next month.

@BrightSilence: you have my vote! I saw that stupid half-storage rule and actually doubled the value I entered to get the right number in the “Storage (TB)” column (also because I want my node to get full!). :+1:

@BrightSilence Thanks for this - I made a copy on my own account in May - let us know if you modify it so we can download any newer version.

Hi! Please, could you tell me if there is any reason to make a node larger than 3-4 TB? In that topic I found that some people use 5+ TB of storage. And my question is: why?

Just take a look at the estimator: Storage Node Earnings Estimator.

Before starting my node I used that estimator. My parameters: unlimited bandwidth (set to 256 TB) with 800/800 Mbps speed. So I set the last three parameters in the estimator to their maximum values. Under those conditions I tried changing the “Available storage” parameter and found that there is only a 7% difference (2,864 vs. 2,670) between 20 TB and 3 TB of storage.
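
For what it's worth, that comparison is just the relative difference between the two estimator outputs quoted above (units as shown by the official estimator):

```python
# Outputs of the official estimator for two "Available storage" settings,
# with all other parameters at their maximum values (numbers quoted above).
estimate_20tb = 2864
estimate_3tb = 2670

relative_diff = (estimate_20tb - estimate_3tb) / estimate_3tb
print(f"{relative_diff:.1%}")  # ~7.3%, i.e. the roughly 7% mentioned above
```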

So, my question: is it true that there is no reason to have big storage (more than 3 TB) if you have good internet? Or is it not true?

Did you read the topic you’re responding to?

This entire topic is about how that earnings estimator is unrealistic and wrong, and the top post links to an alternative that is much more accurate.

Yes, there is absolutely a reason to have a node that is larger than 5TB. My node currently stores 13TB of data and, as a direct result of that, would make roughly 2.7x more money than a 5TB node would.


Did you read the topic you’re responding to?

Yes. But the difference between the official estimator and the estimator from the Google Docs file is so big! I’m totally confused. Why don’t they fix the official estimator? It is fraud, isn’t it?

Well yes, that’s why I started this topic and created an alternative. I wouldn’t call it fraud; their estimator was built before there ever was a network. They had to make some educated guesses. And I don’t think they properly tested the “extremes”, which are really not that extreme, as 100mbit+ connections are actually very common. But they really need to change it now. If you agree, you can vote for this idea at the top. That would give them a signal that this is a priority for a lot of people.

I wasn’t actually planning to modify it much, but today I ran into the fact that with my new total node size (28TB), it wouldn’t reach full potential within 2 years. So I made the following changes.

  1. Added full-potential information at the top, which displays estimated earnings per month and per year once the node is full.
  2. Removed all constants from formulas and made parameters for them so I could easily adjust them based on new learnings. These parameters are now named and listed on the right.
  3. Split the estimation by year and display the first 10 years.

Enjoy!
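
For anyone curious how a “full potential” number like that comes together, here is a minimal sketch of the idea in Python. The payout rates, egress ratio and node size below are illustrative assumptions, not the spreadsheet’s actual parameters or formulas:

```python
# Rough idea behind a "full potential" estimate: once the node is full, monthly
# earnings are storage payout plus egress payout on the stored amount.
# All values here are assumptions for illustration only.

NODE_SIZE_TB = 28                # example total node size mentioned above
STORAGE_USD_PER_TB_MONTH = 1.5   # assumed storage payout rate
EGRESS_USD_PER_TB = 20.0         # assumed egress payout rate
EGRESS_RATIO = 0.05              # assumed fraction of stored data egressed per month

def full_potential_per_month(size_tb: float) -> float:
    storage_pay = size_tb * STORAGE_USD_PER_TB_MONTH
    egress_pay = size_tb * EGRESS_RATIO * EGRESS_USD_PER_TB
    return storage_pay + egress_pay

monthly = full_potential_per_month(NODE_SIZE_TB)
print(f"per month: ${monthly:.2f}, per year: ${monthly * 12:.2f}")
```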


Please, could someone with HDD mining experience check this Google Doc? I wonder if it would be more profitable to do HDD mining with 28TB instead of using STORJ.

Today my node became 23 days old. And it seems that the earnings estimator by BrightSilence works very well.

You be the judge. https://www.minesomeburst.com/mining-burstcoin-roi-calculator/

I was interested in that too, but according to the calculators Burstcoin is not even close to Storj.
Of course it will take time until your node fills your 28TB, so you may as well plot 20TB for Burstcoin and only free up that space once your node gets close.

I hope the STORJ developers read this topic, because I want to ask them something.

There are a lot of documents that say “STORJ is decentralized storage …” and so on. In addition there are topics on this forum with questions like “what is the perfect size of a node?”. And if I remember correctly, I once found an answer from a developer along these lines: “We need a lot of distributed nodes for high reliability. We don’t need very huge nodes. That’s why earnings will not grow proportionally with node size without limits. That’s why we count nodes in the same /24 subnet as one node.” and so on.

But in practice we can see the opposite situation: you earn proportionally more with a bigger node. And it seems there are no limits at all. The only limiting factor is the filling speed (1TB per month).

And it seems we now have two problems:

  1. The official earnings estimator doesn’t work (this seems to be a consequence of the bigger problem).
  2. The STORJ developers don’t understand what they want (the main problem?).

There are several options to solve these problems. The obvious one: admit that everything is actually the other way around (the more TB you have, the more you earn, without limits) and fix the official earnings estimator. That would make me feel that you have a plan.

Both of these problems come from real statistics. As a result I have doubts about STORJ’s future. Something is totally wrong in STORJ right now. It seems there are problems with the CEO or managers. They don’t have a strategy. I don’t see their plan. That stops me from investing in this project.

Please explain the STORJ strategy to me. Not the fake one you present, but the real one. Why is everything the other way around? Why don’t you correct your strategy? And why don’t you fix the earnings estimator? (That is the last question in this story, because it is a consequence, but it has to be asked.)


They’re at least aware of it. @John responded to an earlier version of this alternative estimator here. March payment drastically low? - #67 by john

I should note that this is a rough estimate and it fluctuates a LOT. I have about 14TB since the last wipe in late August last year. So it may be a little more per month on average.

It seems like this is mostly what you’re responding to. This is still the case. The Storj network works best if there are a lot of independently managed nodes around the world. From the network perspective this is best. But an SNO won’t care about that, and when having more HDD space can make you more money, you make sure you have more space. This isn’t necessarily a problem for the Storj network though. Each segment is still distributed across many nodes, and there are plenty of nodes for the network to be strong and stable even if there are a few bigger ones. Additionally, 1TB per month of ingress, if they even keep that up later on, already limits the biggest types of setups. In datacenters it would be no problem to quickly spin up petabyte-scale nodes, but those will never fill up.

Furthermore, we have not really seen a big impact from deletes of files, and it may be a while until we get a good idea of what normal deletes look like. Imagine backup scenarios: maybe after a year or two, you don’t need your daily backups/snapshots anymore. Eventually there may be some equilibrium, since the more data you store, the more data will be deleted per month. Let’s say 2% of stored data is deleted every month on average. If you have 50TB stored, that means 1TB is deleted every month and the average ingress of 1TB disappears in the same month. Since I don’t have enough good data on this right now to build it into the estimator, that effect is currently not included. I think at some point you’ll see a natural effect of this evening things out a little more.
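
To illustrate the equilibrium idea, here is a minimal sketch using the example numbers from this post (1TB/month ingress and 2% of stored data deleted per month; both are illustrative, not measured values):

```python
# Sketch of the equilibrium described above: with a fixed monthly ingress and a
# fixed fraction of stored data deleted each month, stored data levels off at
# roughly ingress / delete_rate. Numbers are the example values from the post.

def stored_after(months: int, ingress_tb: float = 1.0, delete_rate: float = 0.02) -> float:
    stored = 0.0
    for _ in range(months):
        stored = stored * (1 - delete_rate) + ingress_tb
    return stored

for m in (12, 60, 240):
    print(m, "months:", round(stored_after(m), 1), "TB")
# Approaches ingress / delete_rate = 1.0 / 0.02 = 50 TB
```

With those example numbers the stored amount levels off around 50TB, which is why a big enough node could eventually stop growing even with constant ingress.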

So in short, I don’t think the Storj strategy is broken in any substantial way. For many things we are still in the early days and really don’t know what steady-state usage on the network will look like. The estimator Storjlabs originally built made a lot of assumptions about what that steady state may someday look like. And I think many of those assumptions are very flawed. But I don’t think they signify bad intent or a broken strategy. The network is flexible enough that it can adjust to many types of usage patterns, and we all have to learn and adapt to those as things go forward.
