Let's talk about the elephant in the room: The Storj economic model (node operator payout model)

You're going to be building that held amount back up again as well, though, so that doesn't help much. (Unless you start your node with just 100GB assigned and wait 9 months before giving it more space.)

I guess the bottom line is that with a rotational approach of adding and removing nodes, you may make more money, and if you cheat the held-back system by staying small during the first 9 months, you don't have to bother with the long graceful-exit process either. That's what the current incentives effectively tell us to do, so you want to prevent SNOs from following that approach.

  • Fix the held-amount system by making the held amount a fixed amount per TB stored, and keep building it as the node grows (see the sketch below this list)
  • Give nodes an incentive to keep old data, even if there is basically no egress on it anymore.
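A minimal sketch of that first suggestion, assuming a hypothetical rate of $10 held per TB stored (the rate and the names are illustrative, not anything Storj has specified):

```python
# Hypothetical held-amount model: a fixed dollar amount per TB stored,
# accumulating as the node grows, instead of a percentage of early earnings.
HELD_PER_TB_USD = 10.0  # illustrative rate, not an official Storj number

def held_amount(stored_tb: float) -> float:
    """Held amount that scales with the data the node actually stores."""
    return HELD_PER_TB_USD * stored_tb

# A node growing from 0.1TB to 5TB builds its held amount as it grows,
# so staying small during the first 9 months no longer avoids the escrow.
for tb in (0.1, 1.0, 2.5, 5.0):
    print(f"{tb:4.1f} TB stored -> ${held_amount(tb):.2f} held")
```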

That's true, but that's always the case when you start fresh. Compare it to the alternative of keeping your node running until the drive dies, which is only a matter of time; at that point you are guaranteed to lose the remaining 50%.
So for a node that does not earn much anymore, it might be an advantage to pull the remaining 50% and start all over again. But of course it all depends on the circumstances.


What about paying more for repair egress? Old segments are the ones most likely to need repair.

What I don't know (and maybe someone can tell me) is what happens to the old pieces when a segment gets repaired. My initial thinking is that the segment is reconstructed, creating 80 all-new pieces that are uploaded to new random nodes, and then the old pieces are discarded.

If that is the case, then the scheme you propose should be tied to the age of the segment, not the piece.


I like that. I kind of already suggested aligning it with normal egress pay. But even then, repair won't be enough to compensate for the loss of normal egress.

I don't agree with this. New nodes end up getting those pieces, so why should they instantly get paid more before proving they can hold the piece reliably long term? In fact, the lower payout for the piece could help Storj Labs recoup some of the repair costs as well.

The way I see it, it's a revenue share. Storj Labs gets to make more money on data acquisition early on, and then the revenue share shifts toward node operators for data retention. That puts the incentives in the right places.
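As a rough illustration of that split, here is a sketch where the operator's share of the payout for a piece ramps up with the piece's age on the node (the 50% starting share and the 9-month ramp are made-up numbers, not a Storj proposal):

```python
def operator_share(piece_age_months: float,
                   start_share: float = 0.5,
                   full_share: float = 1.0,
                   ramp_months: float = 9.0) -> float:
    """Operator's fraction of the payout for a piece of a given age.

    Young pieces pay the operator less, letting Storj Labs recoup
    acquisition/repair costs; old pieces pay in full, rewarding retention.
    """
    if piece_age_months >= ramp_months:
        return full_share
    return start_share + (full_share - start_share) * piece_age_months / ramp_months

for months in (0, 3, 6, 9, 12):
    print(f"month {months:2d}: operator gets {operator_share(months):.0%} of the payout")
```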

Because those pieces belong to old files that aren’t getting any egress (cold backups). I thought that was the point you and @jammerdan were discussing, to avoid full nodes exiting because they are holding old stale data and to “boost static storage income”. New pieces belonging to old files will not generate egress just because the file was recently repaired.


That's a fair point. There won't be any egress like with new data. But I kind of feel like you haven't really earned a loyalty bonus yet if you just received the piece. And if all pieces that have been on your node longer get higher pay, it doesn't matter that these individual pieces don't earn as much early on; the incentive is already strong enough not to want to give up your node. Besides, I kind of like the idea of letting Storj Labs recoup some of the repair costs by resetting the piece to the lower payout.
I think there are valid arguments for both approaches though.


I wonder if we can do something like that with repair egress too.
Repair egress means a "good" SNO helped repair the network. There could be an incentive on that egress if the piece is rather old, where by "old" I mean the actual time the piece has been stored on that node.
I don't know if that would be a real incentive, because you never know when and how much repair is required. But it would be a reward for those who run long-term, reliable nodes.

So maybe such a node could work its way up to receiving the same payout for repair egress on old pieces as for regular egress?
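Something like the following sketch, assuming a hypothetical $10/TB base repair rate and $20/TB regular egress rate (illustrative numbers only) and a made-up 24-month ramp:

```python
REPAIR_RATE_USD_PER_TB = 10.0  # base repair egress rate (assumption)
EGRESS_RATE_USD_PER_TB = 20.0  # regular egress rate (assumption)

def repair_payout_rate(piece_age_months: float, ramp_months: float = 24.0) -> float:
    """Repair egress rate that grows with how long the piece lived on the node.

    A freshly received piece pays the base repair rate; a piece held for
    ramp_months (illustrative) pays the same as regular egress.
    """
    t = min(piece_age_months / ramp_months, 1.0)
    return REPAIR_RATE_USD_PER_TB + t * (EGRESS_RATE_USD_PER_TB - REPAIR_RATE_USD_PER_TB)

for months in (0, 6, 12, 24):
    print(f"piece held {months:2d} months -> ${repair_payout_rate(months):.2f}/TB repair egress")
```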


You suggested cold storage, which is cheaper but slow to retrieve. The Storj network is designed as hot storage, so such behavior would be difficult to emulate; and why do this at all?

On the other hand, the idea of some kind of data prioritization for a premium is tempting.

No. I am not going to watch a 40-minute webinar just to risk finding out that all their talk is just S3 storage with HTTPS downloads and buzzwords. If that is not the case, I welcome you to give me a summary of what they do and how it connects to Storj.

Well, the thing is, this is the original discussion. The main problem for nodes is not necessarily the price but that usage is low. There is only one reason why this is currently not a problem and why the economics still work for node operators: subsidies!

When those fall away, we need data! You scare away serious business, while the web3 startups you attract (according to Storj, not me) only make up a small portion of the data. You need to convince serious businesses to partner with you. Imagine Veeam promoting Storj as an S3 backend for their backups; that would get us data and, even more important, trust.

I don't get why we are making the two topics "graceful exit" and "held-back amount" so complicated. Why can't we just hold back $10 per TB stored, forever? Say a node hosts 5TB of data: if it leaves the network, there will be $50 in repair costs whether the node is 3 years or 1 month old. Why not just use that as the held-back amount, permanently?

@Alexey @jammerdan This whole debate you two are having about cold old data and so on is also distorted by subsidies. When the subsidies fall away, egress will pay less and thus nodes will care less whether the storage is cold or hot.


If it is held forever, then it is not a "held-back amount", but just a reduced payment. Why display it at all?
Right now, the idea is that my node earns the stated amount of money, but some of it is held "in escrow" until the node gracefully exits. If the node loses data, the money is not returned.
However, if some percentage of the earnings is "held" with no way of ever getting it back, then it is not "held"; it's just a tax on earnings. Or you could simply state a lower pay rate and say that nothing is held.

Also, if the held amount is never returned, there is no reason to do a graceful exit. Unless you are proposing to hold the $10 in addition to the normal held amount, in which case new node operators would have to wait a long time to see any income.


You are right, "held back" is the wrong term, because you would never get it back. Let's call it collateral.

Also, if the held amount is never returned, there is no reason to do graceful exit.

Why does this matter in the current system? What is the difference between a node gracefully exiting and a node going offline? In both cases the network has to repair the missing data, right?

Do we have this graceful exit yet?

What would a graceful exit even look like?
Would it be:

A: The node does not accept new pieces and waits until all of its pieces are deleted. That is free for the network, but it could take years, possibly forever, until all the data is deleted.

B: The node signals goodbye and other nodes take over its data. But then the network has to pay $10 per TB for repair. This would be paid from the collateral.

No. In the case of Graceful Exit, your node transfers its pieces to other nodes, so repair is not required; it just moves each piece and updates the pointer in the metadata.
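A toy sketch of that distinction (all names here are made up; the real protocol lives in the storagenode and satellite code):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    pieces: list = field(default_factory=list)

def graceful_exit(exiting: Node, replacements: list, pointers: dict) -> None:
    """Simplified graceful exit: each piece is moved, not reconstructed.

    No Reed-Solomon repair is needed because the piece itself still exists;
    only its location changes, so the satellite just updates the metadata
    pointer from the exiting node to the receiving one.
    """
    for i, piece_id in enumerate(list(exiting.pieces)):
        target = replacements[i % len(replacements)]  # naive round-robin placement
        target.pieces.append(piece_id)                # direct transfer, no re-encoding
        pointers[piece_id] = target.node_id           # satellite-side bookkeeping
        exiting.pieces.remove(piece_id)

# Toy run: node "a" exits; its pieces move to "b" and "c" and pointers follow.
a = Node("a", ["p1", "p2"])
b, c = Node("b"), Node("c")
pointers = {"p1": "a", "p2": "a"}
graceful_exit(a, [b, c], pointers)
print(pointers)  # {'p1': 'b', 'p2': 'c'}
```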


No. If you do a graceful exit, no repair is required; this is why the held-back amount is returned in that case.

Edit: @Alexey beat me.

This is why an incentive is proposed for nodes that stay on the network a long time and for nodes that do a graceful exit.


I am an idiot, of course it works this way. You don't pay the exiting node for egress, so it is free for the network. I wasn't thinking clearly.

So we have a graceful exit that is free and a non-graceful exit that currently costs the network $10/TB?

Why not hold $10/TB as collateral? That seems a lot simpler to me than the held-back amount calculations.

I currently host 4TB with $15 held back. With the proposed collateral this would be $40. Sounds good to me.

You need to take into account that a segment does not necessarily need to be repaired.
Repair is only required when the number of available pieces falls below the repair threshold.


Well, that kind of cancels out. The lowest availability is currently 52 pieces, so 28 pieces need to be recreated. Your one lost node is responsible for 1/28th of the reason the segment needs repair, and in order to repair it, the repair worker needs to download 29 pieces. So it's pretty much 1 to 1.
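Putting those numbers into a quick back-of-envelope check (the 80-piece, 52-availability, and 29-download figures come from the posts above; actual Reed-Solomon parameters can vary per satellite):

```python
# Does one lost node roughly cost one repair download? Check the ratio.
TOTAL_PIECES = 80          # pieces created per segment (see earlier post)
REPAIR_THRESHOLD = 52      # repair triggers when availability drops to this
DOWNLOADS_PER_REPAIR = 29  # pieces the repair worker must fetch to rebuild

lost_before_repair = TOTAL_PIECES - REPAIR_THRESHOLD  # 28 nodes share the blame
per_node_cost = DOWNLOADS_PER_REPAIR / lost_before_repair
print(f"{per_node_cost:.2f} repair downloads per lost node")  # ~1.04, i.e. ~1:1
```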


I believe these days the ingress per node is more than what you calculate with. (Yes, I know, long-term stats… but maybe it would be worth tweaking the numbers again.)
I had a 2TB HDD lying around and started a new node with it at the end of August.
This node started to fill up quite quickly, so at the end of October I started another one…
On the first node I have around 400GB of free space left as of today, so after 3 months it stores 1.3TB of data.
On the second node, after 3 weeks, I have 175GB stored.

I think the 2TB drive will fill very quickly. Most probably the held-back percentage will still be at 50% when it is full… This tells me that it is not worth starting a node with less than 4TB, which leads to the point that average users/node operators can hardly start a node on leftover hardware (just to keep some connection to the topic as well).
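A quick sketch of that reasoning, assuming the roughly 0.45TB/month ingress reported above and the standard held-back schedule (75% in months 1-3, 50% in months 4-6, 25% in months 7-9, 0% afterwards):

```python
# Rough check: does a small drive fill up before the held-back percentage drops?
INGRESS_TB_PER_MONTH = 0.45  # ballpark from the numbers reported above

def held_back_pct(month: int) -> int:
    """Standard Storj held-back schedule (75/50/25/0 percent)."""
    if month <= 3:
        return 75
    if month <= 6:
        return 50
    if month <= 9:
        return 25
    return 0

for capacity_tb in (2, 4, 6):
    months_to_fill = capacity_tb / INGRESS_TB_PER_MONTH
    pct = held_back_pct(int(months_to_fill) + 1)
    print(f"{capacity_tb} TB drive fills in ~{months_to_fill:.0f} months, "
          f"still at {pct}% held back when full")
```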

I'm seeing about 800GB of ingress plus 100GB of repair ingress over the last 2 months. But I think the difference you see compared to the earnings estimator is due more to the loss to vetting actually being less than calculated. I adjusted both numbers a little based on some new nodes I started myself. It now seems to match what you saw.

(and yeah, I adjusted it a little faster than I normally do, because I prefer waiting for at least 3 months to see a trend reliably change, but I think the past month and a half combined with Storj’s statements around growth probably present a reliable enough basis to update.)

Let me know if it looks better to you now.


Source: Realistic earnings estimator


Yes, the vetting process is much faster than before. My new nodes became vetted on EU1 and US1 in about 2 weeks.
I checked the ingress across all my nodes for this month and it is already over 747GB even though we are only halfway through… they are all behind the same IP.
What is your opinion on node size?
Maybe it would be worth calculating the expected revenue for 2, 4, and 6TB node sizes, considering the decreased egress rate for full nodes.

In October, my 3.9TB full node had a bit more than 9 USD in revenue. This node has been full since June.

Also in October, my other full node, 3.7TB, had 14 USD in revenue. This node became full around the end of September.