March payment drastically low?

More information about the current payouts can be found here :slight_smile:

3 Likes

Well, if someone wants to kick me out of the forums they can, it's not really an issue for me. Of course, all my posts would have to be deleted.

Sure, but you have to bear in mind the X variable. Getting 0 * $20/TB is still zero. What I'm saying is that egress has always been OK.

I do not have the answer, and neither does anyone else for that matter. But my amateur view leads me to conclude:

  • During beta and “stress-testing” I received a higher count of successful download requests, which I falsely used as a baseline
  • The calculator is inaccurate and should be removed ASAP; in the EU it may even violate good marketing practice
  • I receive 99.999% upload requests
  • This could mean the network is being used mainly as cold storage, which is not beneficial to SNOs
  • It could also be a geo issue, where upload and download requests do not take geography into account equally, i.e. store stuff on a node far away to secure the network, but let all the download traffic go to nodes closer to the end users
1 Like

10/20 is? What? ???

Price… got it!
Thought you were referring to traffic.

Nobody wants that. I think you misunderstand me. I would want you to stick around. I would want you to criticize. That’s important and helps make things better. I was just asking for a change in tone. It was a personal request, I don’t speak for anyone else. But I appreciate this post and mostly agree with you.

This is due to several things. I believe you're in western Europe as well, where most test traffic originated. We just got lucky there. Success rates have dropped due to quickly closed connections as well, and because there is more overprovisioning in transfers now.

I clearly agree, as you can see from my long post up there. It can be fixed though. I hope they adopt some of the ideas I used in my example.

As opposed to download? I’m not seeing the same thing as you can see in my earlier screenshot. Though most of that download is from the stefanbenten satellite.

Early use cases seem to have landed on that. Database backups have been mentioned often. I estimate about 10% is downloaded, based on February traffic in which there was far less test traffic.

I’m almost sure nodes are selected at random for download right now. Unless something has changed recently.

1 Like

@BrightSilence I appreciate the spreadsheet and the thought that went into your post. I posted in another thread about this month’s payouts but wanted to chime in on the estimator.

We originally started with a spreadsheet and we created the online version to allow prospective storage node operators a way to determine if it was worth the effort to operate a storage node. There are some assumptions behind the calculator about how much usage there might be on the network and what percentage of the user’s bandwidth and storage might be used in a particular month. Overall, the purpose of the calculator was to give a prospective SNO some idea of the range of possible outcomes given their equipment and available bandwidth.

There are a number of variables that determine how much a SNO can earn, but probably the most impactful is upload speed from the node, which translates into download traffic on the Storj network. While it's possible to create some interesting results depending on how you slide the dials, in general, we created this calculator when we didn't have any data on how the network would behave in production. Well, here we are - in production.

I don’t know if we’ll update the SNO estimator or deprecate it in favor of content that clearly explains how SNO economics work. I’d be interested in feedback on whether it’s still useful. (You can check out the source spreadsheet if you’re interested.) The estimator is built on a set of assumptions that can only be evaluated as demand for storage on the network grows. As with any new technology, we’re seeing good traction with early adopters, but rapidly growing the network will take time and a lot of work.

Most SNOs are joining the network for the long term. We need early adopters who host a node and run it reliably for a long time. This gives us the capacity and time to grow the customer base and continue to develop new capabilities.

It’s impossible for us to know at this point the amount of storage and bandwidth customers will use in 1, 3 or 5 years from now, but having a reliable group of storage nodes will significantly increase the probability of success. We will be subsidizing the network in advance of customer growth so that SNOs aren’t bearing the entire cost of providing supply in anticipation of demand.

We’re seeing many types of backup use cases today from production partners and customers - DB backups, NAS backups, data center hybrid cloud backups - as well as a range of other use cases that tend to be lower bandwidth use cases. As the network grows, we’ll expect to see higher value use cases. In addition, we have a wide range of roadmap items under consideration that will be prioritized as we see demand for those use cases.

8 Likes

@John, thanks for that extensive response. I appreciate the transparency and sharing the original sheet as well. It allowed me to verify some assumptions I had made about the estimator.

But let me first respond to whether I think the estimator is still needed.
I would say it definitely is. You can explain the SNO economics in minute detail, but without a realistic explanation of what kind of usage a SNO can expect, it's impossible to determine whether it is worth it for you. And unfortunately, I think that's exactly where the current estimator is lacking.

Let me take an example from your post.

I'm sure this isn't intentional, but download traffic on the network is actually capped by the data stored on the node much more than by upload speed from the node, to the point where your remark is basically wrong for almost anyone. In your other post (which I appreciated a lot, btw) you mentioned that you plan on downloading test data at a similar rate to that at which customer data is downloaded, which is about 10%. Luckily I calculated about the same percentage and used that in my version of the calculator, so we're on the same page.

My node has been around since the beginning and I'm now storing 7.5TB. I think we can assume this is close to the max per node. That translates to 750GB of download per month, which can be served by a 2.3mbit line. Even with my assumption that only a third of bandwidth can be used, anything over 7mbit would be limited by the amount stored and not by upload speed. And since most nodes won't store/share that much, and the minimum upload speed requirement to start a node is listed as 5mbit, it's almost fairer to say earnings don't depend on upload speed at all. Now this doesn't take into account losing the race to other nodes, which at some point will kick in. But it wouldn't surprise me if 10mbit of upload is enough to manage pretty average performance on that as well, and anything beyond that probably doesn't help all that much.
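The arithmetic above can be checked with a quick sketch. The 10% monthly download ratio and the 1/3 usable-bandwidth figure are assumptions taken from this thread, not official Storj numbers:

```python
# Rough sketch of the bandwidth math above; the 10% monthly download
# ratio and the 1/3 usable-bandwidth fraction are thread assumptions.
stored_tb = 7.5
download_ratio = 0.10                          # fraction egressed per month
egress_gb = stored_tb * 1000 * download_ratio  # 750 GB/month

seconds_per_month = 30 * 24 * 3600
raw_mbps = egress_gb * 1e9 * 8 / seconds_per_month / 1e6
print(f"sustained line rate: {raw_mbps:.1f} Mbit/s")       # ~2.3 Mbit/s

usable_fraction = 1 / 3                        # assume 1/3 of line is usable
needed_mbps = raw_mbps / usable_fraction
print(f"required upload speed: {needed_mbps:.1f} Mbit/s")  # ~6.9 Mbit/s
```

So even under the pessimistic 1/3-utilization assumption, roughly 7 Mbit of upload already covers a near-maxed-out node.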

As a result, the biggest determining factors for node profitability are in fact the speed of data growth per node (with free space available) and the ratio of data downloaded to data stored. These aren't based on node specs, but rather on properties of the network. Neither is communicated on a regular basis or included in the calculator. This results in the forum being full of posts from SNOs saying: “I have 1gbps up and down and I'm not making anything!!!”. The wrong expectation is being set for these SNOs, and community members are forced to tell them this isn't mining; usage is based on customer demand. The first time they hear this shouldn't be after they've complained on a forum. By then you already have an angry, disappointed and annoyed SNO.

Now it’s not my intention to blindly bash the original estimator, because the following statement is very true.

Because of this, the differences were understandable. But we know a little more now. You have much more data internally to determine the two key numbers I mentioned earlier, and they can be updated from time to time based on new information. I even understand incorporating some future growth expectations into them; there is an argument for being a little more optimistic, both to attract new SNOs and to account for more business down the line.

There is one element left I want to specifically respond to.

I could not agree more. But as of now, nothing prepares SNOs for less than $5 in total earnings over the first 3 months, no matter how great their hardware and connection are. This means you're losing a lot of SNOs before they ever get to that long-term earning potential. Looking at just the held-back amount, they'd expect about 25% of maximum earning potential (if they are even aware of that part, because it's currently not mentioned on any page you would come across during sign-up).
I think it's important to clearly show SNOs that it takes about half a year to start seeing some more serious earnings, and that the impact is much greater than just the held-back amount. My version shows this clearly and encourages SNOs to stick around much more than the current estimator does. Chances are the SNO will just assume the current estimator was a lie during the first 6 months and stop running their node. I've already heard that response from a co-worker of mine who is now running a node as well. Just look at this example of an extremely high-spec node.


That’s still $2.07 in the first 3 months.
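To illustrate how the held-back amount compounds the slow start, here is a small sketch. The 75/50/25% tiers are the commonly published Storj held-amount schedule at the time, used here as an assumption; verify the current terms:

```python
# Sketch of the payout fraction by node age, assuming the commonly
# published Storj held-amount schedule (75/50/25% tiers).
def paid_fraction(month):
    """Fraction of earnings paid out at a given node age in months."""
    if month <= 3:
        return 0.25   # 75% held back
    if month <= 6:
        return 0.50   # 50% held back
    if month <= 9:
        return 0.75   # 25% held back
    return 1.0        # nothing held from month 10 onward

print([paid_fraction(m) for m in (1, 4, 7, 10)])  # [0.25, 0.5, 0.75, 1.0]
```

On top of this schedule, a young node also stores far less data, so actual early payouts are much lower still than 25% of the eventual monthly figure.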

I’ve seen several posts of people saying: “I have 300TB of space available to share”. This version of the calculator would show them that that would not make much of a difference. It also prevents people from buying large servers that they won’t earn back.

I just think you should own the fact that Storj isn’t meant to run on large server setups. Anyone can participate, but there is a limit to what you can earn. This helps the network stay decentralized. It’s a good thing for most SNOs. Embrace that and be upfront about what that means for larger setups.

And by all means, adjust the two key factors I mentioned before in the calculator if you have new information. With future expansion especially into CDN like use cases there may obviously be a big change in usage patterns. So make it easy to update those two numbers when it’s relevant.

Ok, I think that’s everything. Sorry about the rather long post, but I think setting the correct expectation is important to keep SNOs happy and make them stick around.

6 Likes

When I joined in January, I was surprised that the calculator stated so much could be earned from data transfer. Only after a lot of digging did I realize that full utilization of bandwidth is pretty much impossible. This was not clearly stated on the calculator page.

IIRC, at the last town hall someone stated that the egress-to-storage ratio for the initial clients is estimated to be around 15%. As I understand it (correct me if I'm wrong), this means that for each terabyte stored one can expect on average about 150 GB of egress monthly, or a total of 4.5 USD per terabyte stored (storage + bandwidth payments). This seems to be a good estimate, as it matches my observations from February and March. This information alone would be extremely helpful in the calculator, and it was available at the time of the town hall, i.e. end of January.
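The ~4.5 USD figure works out like this. The payout rates ($1.50/TB-month for storage, $20/TB for egress) are the rates discussed in this thread and may change:

```python
# Rough check of the ~4.50 USD per stored TB figure mentioned above.
# Payout rates are the ones discussed in this thread, not guaranteed.
storage_rate = 1.50   # USD per TB stored per month
egress_rate = 20.00   # USD per TB egressed
egress_ratio = 0.15   # ~15% of stored data downloaded per month

per_tb = storage_rate + egress_ratio * egress_rate
print(f"${per_tb:.2f} per stored TB per month")  # $4.50
```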

1 Like

the earnings estimator is a joke. it may have been projected dream numbers, and sure, if something like netflix were to run off storj then upload speed would matter… but for cold storage the numbers are a bit weird… ofc now is a terrible time to change the estimator, because storj has basically just gone live…

now the real numbers will start to come in… so people can know if we ended up as a “cold”/hot storage backup solution or something else… like a way to reliably transfer large datasets from one place to multiple places in the world…

i don't really see storj as a viable cold storage platform, because for true cold storage there are better and cheaper ways of doing that…

really the cool thing about storj is that access works over vast areas and can basically max out the local bandwidth when accessing data.
the options and value in this must be huge, like say any person that travels and needs reliable access to their data globally for cheap.

ofc on price it's often difficult to compete with mass production, so really it comes down to this being a network of a more distributed nature and what advantages that can bring to people who can utilize it… like say searching large live data sets from multiple locations on the planet…

like when train and cruise liner passengers access the internet across the world… they would require live access to vast entertainment datasets, which might be difficult for services like netflix to provide reliably in some areas of the world… tho i suppose they run over satellites… but essentially they could use more local solutions in some cases, with smaller ships closer to land…

i'm sure there will be a huge market for this… when somebody figures out who the perfect customers are, i'm sure it will make boatloads of cash for all those involved…
new tech can take a long time to find its perfect niche.

i kinda agree with both @BrightSilence and @john, tho i don't think backup is what the storj network will end up serving. personally it's what i plan for… that way anything better than that is just pure profit. but with 400mbit/400mbit, 36tb of storage and no bandwidth cap, the estimator is already useless; i just expect my node to fill and to be paid $1.50 per TB for a good while.

i don't think SNO's will be a super prevalent thing in the future, because tbh i don't think local data storage makes much sense… it's unreliable, inefficient, often either mostly empty or full… it gets outdated, slow…

i think there will be lots of smaller, datacenter-like SNO's which have a certain efficiency and mass-production-like qualities, while still being distributed enough to avoid the disaster issues that come with the very concentrated massive-datacenter approach.

anyways a few thoughts on the issue…
and i do hope we figure out the perfect client for the storj network sooner rather than later… being a cold storage backup provider isn't very lucrative xD

3 Likes

Terminology can be a little confusing, but Tardigrade is by definition a hot storage platform, because all data is readily available at high speeds. So I prefer to refer to how that hot storage is used instead of calling data itself hot or cold.
While it is true that we're just starting to see customer behavior on the network, Storj does have insight into what the first use cases look like, which means the first predictions can be made. For now it looks like the sweet spot is any data that needs to be readily available at high speeds but isn't downloaded by a large audience at the same time. So we're talking about database backups, personal backups or storage, but not (yet, hopefully) CDN-style, upload-once-make-publicly-available scenarios.
Part of this is also embedded in the pricing. While we SNOs want egress to be as high as possible, customers want the opposite for the very same reason. It makes sense that the pricing model attracts the types of customers we're seeing now.
Now Tardigrade could eventually be competitive for low volume CDN cases. At high volume I fear that the discounts offered by competitors make the Tardigrade pay as you go pricing not viable. Though obviously they could offer specific high volume deals as well. There is still work to be done for this use case though. So I don’t expect we’ll be seeing that very soon.

The entire network is built specifically to deal with all of these things. Nodes don't need to be reliable or even very fast. Reliability is solved by using erasure codes to redundantly store data even if many nodes disappear, and repairs can be done when availability drops too low. Nodes are also used in aggregate, so that many slow nodes together can add up to super-fast transfers. Additionally, every transfer is overprovisioned, so there is no need to wait for the slowest nodes.
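A toy illustration of why individual node reliability matters so little under erasure coding. The 29-of-80 piece counts are commonly cited Storj defaults, used here purely as an assumption, and pieces are treated as failing independently:

```python
from math import comb

# Toy erasure-code durability model: a file is recoverable as long as
# at least k of its n pieces survive. 29-of-80 is an assumed default.
def file_survival(n=80, k=29, piece_survival=0.9):
    """Probability that at least k of n independent pieces survive."""
    return sum(comb(n, i) * piece_survival**i * (1 - piece_survival)**(n - i)
               for i in range(k, n + 1))

# Even if every piece independently survives with only 90% probability,
# the file as a whole is effectively guaranteed to survive.
print(file_survival())
```

This is why the network can tolerate unreliable home nodes: with 80 pieces and only 29 needed, losing 10% of pieces leaves a huge margin before repair is even triggered.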

The efficiency of the network comes from the fact that storage is mostly supplied by systems that would have been online anyway. There is little to no additional cost for many SNOs, which is why they will often take any income they can get. If what you suggest were to happen, Storj Labs might as well build and rent those servers in data centers throughout the world themselves and not bother with paying SNOs at all. It would go against everything Storj was built for. So I think you're just wrong on that one.

yeah my knowledge of terminology isn't great xD, i'll remember to call it hot storage
it will be very interesting to see what happens with storj in the future, that's for sure, i think it will be magnificent

You misunderstand what i mean about local data storage…

i mean, if i want to store stuff reliably on my local computer i will need to use something like raid 10 just to be basically convinced that the data doesn't get corrupted in any way… and then we are not even taking into account that i will need backups, which, in case my data needs to be flawless, also need to be stored reliably.
which then comes into the realm of mass production…
running a 10-drive raidz2 is only a 20% loss to redundancy, so local data storage becomes a question of whether one can really store data at home and keep it safe without a near-industrial setup…

which is why i say that i don't think local storage will be a thing in the future… it will be for cache and boot, stuff that you may want to keep in case you're offline…

but the real functionality will be access to collective data online instead.
ofc by then they will most likely have more storage in their brains than we have in our server racks today.

the future is collective data

It’s not necessarily wrong. Data that isn’t used is often called cold data even if it’s on hot storage. Which I find confusing, so I prefer to avoid that terminology.

I'm not sure I entirely understand just yet. You're saying that local storage is going away in general, not specifically for use in storagenodes? I think that is partially the case. But I also think that the people who would be interested in running a node are probably the kind of people who would only give up their own storage when you pry it from their cold dead hands. :wink:

I would like to add that, yes, you probably want to protect your own storage with redundancy and backups, but neither is necessary for your node. And in general, if you are thinking in that direction, you're already thinking of spending too much money on it, with perhaps the exception of using spare space on a personal NAS that already happens to have a RAID6 array (which is the case for me). I would almost argue that it shouldn't be viable for SNOs to run such setups or rent servers in a data center. That's not the point of Storj, and as long as it's viable, they're probably paying us SNOs too much for the service we deliver.

Every time I hear someone talking about their HA setup with massive RAID arrays, high-speed CPUs, tons of RAM and super-fast fiber connections, the next thing I think is… someone with a raspberry pi is making the same amount of money as you. Are you sure you're on the right track here?

$1.50 per TB/month is a bad deal - my node expenses can be much higher (for example, nearly $100 per month for HDDs with 10TB usable space in RAID10 and a 1 Gbit port). Last month I mainly saw ingress traffic, which is not paid, my HDDs are utilized at a very low rate, and in the end I have no guarantee that the uploaded data will ever be downloaded!

i'm saying storing data reliably is hard, not something most people have any interest in, or can do in a home setting for extended periods. i don't believe it will exist to the same degree in the future…

when collective data storage really gets adopted, imagine the savings when what you store only takes 1% of the space because 100 other people also want to store it.

sure some stuff people will want locally, but for how long… and you have professionals doing everything else these days… why would data storage be any different in the future.

in the past you cooked food on a fire; today you have an oven installed by a technician, at least in many cases if you want it to be legal / compliant.

thats usually how stuff goes…

so what's your argument there… that mass production doesn't make stuff cheaper? that's why data centers work. granted, just like computers in the past took up as much room as datacenters do today, in the future people will carry them in their pocket.

but we won’t get there immediately, so what we will see with storj is datacenters becoming smaller and more localized.

but still, mass production makes stuff cheaper… and running a node requires some work… so the larger the node, the less work for more profit: a basic mass-production concept.

the raspberry pi argument is pretty valid lol… but they cannot scale like a server can. it's certainly not a bad option… but for now i'm going to see if i can't make my current hardware go into the positive… by just stacking more large drives lol

It does, but mass production does not make storing data in a data center with redundancy and HA setups cheaper than storing it on a cheap HDD at home. Especially if you consider that the data center would also like to make some money from their customers and takes a good chunk off the top.
But more importantly, Storj data is often stored on HDDs that were already out there. It's highly efficient because the SNOs doing it right didn't spend a penny when they got started. I know I'm one of them, and I know there are lots more.

The raspberry pi argument is something that's always smart to keep in mind. Storj is built to ask very little of node hardware, so almost everything else is overkill. You should aim to keep your costs as close to 0 as possible, which is why I find it so important that the earnings estimator gives a more realistic view of what you can earn.

Well, I hate to say it, but that is your fault if your setup is that expensive… Try to find a cheaper setup.

1 Like

It's been hard on HDDs lately. I can't disagree with you there. If you share those HDDs with other purposes, an SSD cache would do wonders, but again, I would only go for that if it also helps your other use cases on that same system.

What about this process makes you think it can’t be done on a raspberry pi? I haven’t heard that before.

That's the point. Stop worrying about the errors. Live carefree and take the small hit if you lose a node and a drive. It's rarely worth the expense to protect against that, and the network already does so to ensure customer data is never at risk.

I am a programmer and system administrator - I know that any resource has a cost, and a good resource has a good cost. For several months I tried to check the possibility of making money with Storj - I was thinking about synergy… Now I'm told to find cheaper resources!!! And it seems next I will hear that my equipment is too old and slow… It seems I will leave the project, because I can't control the prices for the resources I provide to the network and have had very low rewards during my 4 months of experience. I will look toward the SIA network.

But no one told you to go out and buy expensive hardware to run storj; that is on you. We all have some kind of background in IT, but that doesn't mean it's cost-efficient to go out and spend 1000s of dollars on the idea of making a few bucks per month. I have very expensive hardware already, but that doesn't mean I'm going to use it for running nodes. I took hardware I had, threw some parts together, and made 900 dollars; to me that's worth every penny.

2 Likes

Guess what: I’m a programmer and system administrator too :smiley:
That however doesn’t mean anything…
If you want to go, just go. Nobody is holding you.
Also, nobody can help you if your expectations are wrong and your investments poorly chosen.

Sure, if you use an RPi and connect 2 HDDs to it, we might tell you that your hardware is too slow. As a system administrator, you should know that you always have to find the sweet spot for the use case you have in mind.