Let's talk about the elephant in the room: The Storj economic model (node operator payout model)

I understand the concern. Understand that these examples I gave are just that: examples. Nothing is decided yet. My point is that it isn't just a matter of deciding to slash what SNOs get paid. There are a lot of design discussions going on around just about every aspect of how this all works right now. Everyone wants to make sure this is all sustainable.

We want the community's feedback. If you have ideas on what would work better than the current system, we want to hear from you. Look at it as a long-term discussion. Details are going to come out as things bubble to the top and we look for comments from the SNOs. At the end of the day, there are business decisions that have to be made. We can't pay SNOs more than what we are bringing in. But what we are bringing in may become more dynamic as we add features and bring on additional capacity faster, which could help offset those changes. There may also be incentives for different performance aspects.

It's all up in the air right now. I can't tell you what is going to happen, nobody can, because nothing has been decided. We're in the exploratory phase. If you have ideas, we'd love to hear them. Keep posting them if you've got them.

1 Like

Assuming traffic starts going up (so far it has done so only briefly, in the summer), I think payouts for node operators above some minimum (the minimum applying per /24 subnet, so no cheating by creating 100 small nodes) should be frozen (traffic goes up, you earn the same) until the USD/TB rate comes down to the new value.
This would avoid the surprise of "hey, I got $70 last month, but only $35 this month"; instead it would stay at $70 until egress doubles, and only then would payments start going up again.

The minimum (below which payments would still increase) is there so that small and especially new nodes do not get stuck at $1/month for a long time.
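To make the mechanism concrete, here is a minimal sketch in Python. All rates, names, and the per-subnet floor are made up for illustration; this is just the freeze logic described above, not anything Storj has proposed:

```python
# Hypothetical sketch of the proposed payout freeze. All numbers invented.
OLD_RATE = 20.0   # USD per TB of egress under the old payout rate
NEW_RATE = 10.0   # USD per TB after the (hypothetical) rate cut
MINIMUM = 5.0     # USD/month floor per /24 subnet

def monthly_payout(egress_tb: float, frozen_usd: float) -> float:
    """Payout for one /24 subnet under the proposed freeze.

    frozen_usd is what the subnet earned in the last month before the cut.
    Small nodes below MINIMUM keep growing at the old rate, so new nodes
    are not stuck at $1/month. Everyone else earns a flat frozen_usd until
    traffic growth at the new rate catches up, avoiding a sudden drop.
    """
    old_style = egress_tb * OLD_RATE
    if old_style <= MINIMUM:
        return old_style                 # below the floor: unchanged rules
    return max(egress_tb * NEW_RATE, frozen_usd)

# A node frozen at $70/month stays at $70 until its egress doubles:
print(monthly_payout(egress_tb=3.5, frozen_usd=70.0))  # 70.0 (frozen)
print(monthly_payout(egress_tb=7.0, frozen_usd=70.0))  # 70.0 (caught up)
print(monthly_payout(egress_tb=8.0, frozen_usd=70.0))  # 80.0 (growing again)
```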

Of course, this assumes that the growth would be fast enough so that this transition period would not take until 2030.

Personally I would be OK with this, I do not know about the others.

This is good.

This is not good.

STORJ attracts people because of its openness and level playing field: it is easy to implement and participate, with an equal chance of success. That is made possible because quality control happens at the level of the network, not the SNOs. Most of the reliability (or unreliability) of the nodes is already accounted for and priced in. There is no need for legal agreements, certifications, etc. This system is powerful. This is decentralization done right. And this is how STORJ wins the world… hopefully.

However, I strongly agree that STORJ needs SNOs run as businesses at scale (petabytes) by dedicated professionals on dedicated hardware. There is a huge fixed cost to running a data center, and thus huge economies of scale. So these businesses, in the long run, will have a much lower cost per TB. Low-cost, large-scale SNOs would allow STORJ to compete and are a solution to the current loss-making dilemma.

This is just a rough idea, but the current payout scheme could be tweaked to make that happen. First of all, allow each SNO to scale faster than is currently possible. I'm talking tens of TB per month, or more.

Second, the fixed payout per TB ($1.50 for disk usage and $7 for egress) needs to be replaced by a regressive/variable rate. The regressive part means the larger the capacity/traffic, the lower the payout per TB. This should work, since at larger scale SNOs have a lower cost per TB. Also, large SNOs mean less decentralization, so a lower rate acts as a tax on centralization. The variable part is another layer of adjustment to reflect supply and demand: if people migrate their data to STORJ faster than SNOs can add capacity, the payout rate goes up.
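As a toy illustration of what such a rate could look like, here is a sketch in Python. The tier boundaries, percentages, and demand factor are all invented; only the shape of the idea comes from the paragraph above:

```python
# Hypothetical sketch of a regressive/variable payout rate.
BASE_EGRESS_RATE = 7.0     # USD per TB, the current fixed egress payout

TIERS = [                  # (tier ceiling in TB/month, fraction of base rate)
    (10.0, 1.00),          # first 10 TB: full rate
    (100.0, 0.80),         # next 90 TB: 80% of base
    (float("inf"), 0.60),  # everything beyond: 60%
]

def payout(egress_tb: float, demand_factor: float = 1.0) -> float:
    """Regressive tiers scaled by a variable supply/demand factor.

    demand_factor rises above 1.0 when customers add data faster than
    SNOs add capacity, and falls below 1.0 when supply outpaces demand.
    """
    total, prev_ceiling = 0.0, 0.0
    for ceiling, fraction in TIERS:
        band_tb = min(egress_tb, ceiling) - prev_ceiling
        if band_tb <= 0:
            break
        total += band_tb * BASE_EGRESS_RATE * fraction
        prev_ceiling = ceiling
    return total * demand_factor

print(payout(5))    # small node: 5 TB x $7        -> 35.0
print(payout(500))  # large node: 70 + 504 + 1680  -> 2254.0
```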

With this regressive/variable rate, STORJ can have a nice mix of "terabyte" individuals - the heart of community and decentralization - and "petabyte" businesses - essential for lowering overall cost and for sustainable growth.

So let’s scale up the SNOs but let’s NOT use legal agreements and certificates.

5 Likes

The legal side of things is likely due to the requirements of laws that govern how data is stored, where it is stored, and by whom. If we want certain customers, we have to abide by the laws that govern the data, no matter how backwards they may be. (But to reiterate, this was an example, not something that has been decided one way or the other.)

@Knowledge
Oh… then that's OK. Well… as long as it is a choice for SNOs to make: get certified and then you are open to additional demand and/or a higher payout rate. Fair enough.

Yeah, I deal with GDPR at work… Unfortunately it doesn't care about technological assurances. Any third-party data processor dealing with personally identifiable information needs to sign a data processing agreement. I've only dealt with using data processors at work, though, not with providing that service. But I'm guessing the requirements are not something your average at-home node operator could meet.

Reading up on it a little more, there may actually be architectural difficulties here as well. Reference: Art. 28 GDPR - Processor - GDPR.eu

It talks about only processing personal data on documented instructions. Neither Storj Labs nor node operators know what is being processed, so it's impossible to document this.

It also mentions helping comply with personal data request obligations. I guess as long as a node operator stays online, this can be achieved by the customer itself.

Allowing for audits including inspections by the controller. Well, sure… I’ll invite Storj customers or Storj Labs employees over if they want to come to the Netherlands to check out my setup. Haha.

It also involves anyone with access to personal information signing a confidentiality agreement. Well, none of us has access to any information.

And this is just one law, one that is clearly not written with architectures like Storj in mind. However, on reading it, I actually don't think it necessarily involves hurdles that can't be overcome for at-home storage node operators. It would be interesting if Storj Labs could provide standard data processing agreement templates for at-home storage node operators to consider. Of course, there are many other certifications that go way beyond the requirements of the GDPR DPA, like SOC 2 for example, which requires independent external auditing. However, I would argue that Storj could possibly be considered SOC 2 compliant at the network level, without having to individually audit node operators for compliance, since things like disaster recovery are intentionally not handled at the node level.

I don't think I have to state this, because I think everyone agrees, but it would be best if any approach to these legal requirements or additional certifications kept as many node operators in the mix as possible. I'd hate to be pushed out by tons of paperwork and unmanageable legal or audit requirements. But I'm sure Storj Labs has some good lawyers who could look into those things.

3 Likes

There are plenty of potential customers who would, on their own, want to use more space than the total current network capacity. Those customers currently simply cannot be onboarded. You've got to think big if you want to go exabyte scale.

I like the idea of tuning the Reed-Solomon settings, though not for the same reasons. Even if Storj were to offer lower redundancy tiers, it would look bad reputation-wise if they lost data. Furthermore, having fewer pieces of redundancy would just trigger more costly repair.

However, lower expansion rates could possibly be achieved by raising both Reed-Solomon numbers. So instead of 29/80, they could go to 100/160 (just a random example; I didn't actually calculate what the impact would be). Something like that could actually offer similar reliability, with the only downside of having more per-segment overhead, which could be partially fixed by increasing the segment size to 128 MB as well. This could ensure similar reliability at the cost of not dealing as well with smaller file sizes. It would also incur a higher segment fee, due to the higher cost of metadata storage with more pieces to track. But if customers use the max segment size, total segment fees would come out the same, because there would also be half the total number of segments.
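For what it's worth, the expansion-factor arithmetic behind that example works out as follows (just the math implied above; assessing the actual durability of a 100/160 scheme would need a proper reliability model):

```python
# Expansion factor = total pieces stored / pieces needed to reconstruct.
current = 80 / 29     # ~2.76x: 1 GB uploaded occupies ~2.76 GB on the network
example = 160 / 100   # 1.60x with the (made-up) 100/160 scheme above

print(f"current 29/80:   {current:.2f}x expansion")
print(f"example 100/160: {example:.2f}x expansion")

# Doubling the max segment size (64 MB -> 128 MB) halves the segment count
# for large files, offsetting the doubled piece count per segment:
pieces_per_segment = {"now": 80, "example": 160}
segments_ratio = 0.5  # half as many segments at 128 MB
metadata_ratio = (pieces_per_segment["example"] * segments_ratio
                  / pieces_per_segment["now"])
print(f"pieces to track per byte stored: {metadata_ratio:.1f}x (a wash)")
```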

I believe something like this has been mentioned by Storj Labs in the past as well. So this may already be considered.

And I think the current reported free space on storjstats is not correct, since you can see a sharp step from ~6.8 to ~22.4 PB on October 13th.

Well then, let me add my 2 cents. They are probably totally unsorted, because I still have to make up my mind about the whole situation.

First of all, I have to agree with @BrightSilence that the wording and intention are unclear, so it is hard to make good suggestions. Basically the only hard fact is that Storj pays its node operators more than it receives from its customers, so naturally, if we're talking about viable economics, the idea of cutting node operator payouts comes to my mind as well.

However, given the current economic situation with inflation all over the place, it would be bad timing to reduce payments to node operators. Just to give you two examples: within a week, I personally received an announcement from a server hoster that they will increase their prices by 300%, and my electricity provider announced an increase of 100%. These go into effect immediately and on January 1st, respectively. Prices are going up everywhere, making running a node more costly.

On the other hand, it is a good time to increase prices for customers, as in the current economic situation this is what everybody expects and (kind of) understands.
As Backblaze has been mentioned as a competitor, I have had a look at their prices: they charge $0.005 per GB per month ($5/TB) for storage and $0.01 per GB ($10/TB) for egress.
So Storj could increase prices and still remain below that level, say $4.50 for storage and $9 for egress.
As a bit of compensation, Storj could increase the discount for payment with STORJ tokens.
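A quick sanity check on those numbers for a made-up customer storing 1 TB and downloading 500 GB per month (only the prices quoted above are used; the workload is invented):

```python
# Prices per TB taken from the comparison above; storage is per month.
backblaze = {"storage": 5.00, "egress": 10.00}  # $0.005/GB/mo, $0.01/GB
proposed  = {"storage": 4.50, "egress": 9.00}   # suggested Storj prices

def monthly_bill(prices: dict,
                 stored_tb: float = 1.0, egress_tb: float = 0.5) -> float:
    """Monthly bill for a made-up customer: 1 TB stored, 0.5 TB egress."""
    return prices["storage"] * stored_tb + prices["egress"] * egress_tb

print(f"Backblaze: ${monthly_bill(backblaze):.2f}")  # $10.00
print(f"Proposed:  ${monthly_bill(proposed):.2f}")   # $9.00, still cheaper
```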

That's the revenue side. The other side is the cost side. As has been mentioned, we see the test data still occupying space and inducing cost, and maybe the free tier should be reworked so that data gets deleted from accounts that are no longer in use. (You could impose a requirement to log in monthly, or something like that, to make sure these are not throwaway accounts that don't bring any revenue but only cost money.)
Another idea that may work is a dedicated satellite for free accounts, which node operators could subscribe to while agreeing to reduced payment or none at all.
I do see and understand the need for the free tier, and I think it is a great way to attract potential customers, but the question is: is it required to have this data online for free forever, or can it be deleted at some point, or slowly transformed into paying data, for example by restricting the free duration and then starting to charge for it?
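The suggested login requirement could be as simple as a periodic sweep like the one sketched below (Python; the account fields, grace period, and workflow are all hypothetical, since Storj's actual account schema isn't public):

```python
# Hypothetical sketch of the free-tier cleanup suggested above.
from datetime import datetime, timedelta

GRACE = timedelta(days=30)  # e.g. require a login every month

def accounts_to_expire(accounts, now):
    """Yield IDs of free-tier accounts whose owners missed the login window."""
    for acct in accounts:
        if acct["tier"] == "free" and now - acct["last_login"] > GRACE:
            yield acct["id"]  # flag for a warning email, then deletion

accounts = [
    {"id": 1, "tier": "free", "last_login": datetime(2022, 8, 1)},
    {"id": 2, "tier": "paid", "last_login": datetime(2022, 8, 1)},
    {"id": 3, "tier": "free", "last_login": datetime(2022, 10, 20)},
]
print(list(accounts_to_expire(accounts, now=datetime(2022, 11, 1))))  # [1]
```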

Without a doubt, to keep SNOs on board, running a node must be rewarding. It needs to work for both small and large setups, and it must be rewarding from day one. If payouts simply get reduced, then it won't really work for smaller setups anymore, which will probably lead to larger setups and less decentralization.

But again, it is hard to make detailed suggestions without knowing which problem should be solved. In general I would say: incentivize the node behavior that you want to see (with surge payouts, bonus payments, or earlier (partial) return of held amounts) and penalize what hurts or negatively impacts the network.

3 Likes

This would be useful if it were possible to limit used space per satellite or to set priorities.

3 Likes

It was mentioned elsewhere that they made corrections to how this is calculated; it turns out they were being a little too conservative. These numbers will remain an estimate, as it is impossible to actually know whether the free space reported by nodes is correct. Some might overreport, while others might report correctly but will expand whenever free space starts to run out. (As we speak, I am expanding my array to make room for more storage.)

Same here: just last month I installed 3 new 3 TB nodes, and each already has 120 GB of data.

1 Like

That would be ideal.

Lots of good topics here. I need to block out time to go through this; it's hard to keep up. We have the Twitter Spaces later today. We'll add these to the list:

  • Compliance and certification
  • Network capacity planning and available space
  • How we calculate space
  • Near-term pricing and costing changes
  • Free tier vs free trial
  • Dials we can tune (R/S, Segment size, node selection criteria)
  • Business decisions and strategy in general

I’ll try to cover as many of the main points as I can today. Convenience link:

https://twitter.com/storj/status/1588231558070239236?s=19

6 Likes

If an AMA here or on Reddit would be useful, like this post.

6 Likes

If you would be open to sharing information about your cost structure, your method of evaluating ROI, and generally what you want as a node operator and what would make it viable vs. non-viable, please like this post.

All information will be treated as confidential. We’ve gotten some great insight from community members along the way and would appreciate the opportunity to engage more on this topic.

7 Likes

For anyone interested, this topic was discussed during the Twitter Spaces. You can listen to it here: https://twitter.com/storj/status/1588607064955588608

Thanks for addressing my question to the extent possible at this moment @john. I’m looking forward to seeing more specific ideas/proposals in the course of December and later.

7 Likes

I'm looking forward to collaborating on the proposals. Thanks for joining us; it was great speaking with you live!

2 Likes