Let's talk about the elephant in the room: The Storj economic model (node operator payout model)

I don’t like it and I don’t see how this should work.
Let’s take your case as an example, with something like 250 TB of data. At $10 per TB, that would mean you would have to put $2,500 into held amount?

And let’s say you keep your node growing and add +1 TB in a month. That may earn you $4, but you would have to put $10 into held amount? How is that supposed to work? So you may get paid $2 and owe Storj $8. Next month you add another +1 TB, get paid another $2, but now owe Storj 2× $8?
What if you quit now without a graceful exit? Storj has not received $10/TB in held amount, so I don’t see much incentive to do a GE. I also don’t see much incentive to keep the node running when I keep owing Storj more than I earn.

Maybe I don’t understand the idea correctly, but this is what came to mind.
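To make it concrete, here’s a rough sketch of the cash flow I mean, using the illustrative numbers above ($4 earned per stored TB per month, $10 per new TB into held amount; these are my assumptions, not official Storj parameters):

```python
# Rough sketch of the proposed model's cash flow for a node growing
# by 1 TB per month. All numbers are illustrative assumptions from
# this thread, not official Storj parameters.

EARN_PER_TB_MONTH = 4.0   # assumed gross earnings per stored TB per month
HELD_PER_NEW_TB = 10.0    # proposed held amount per newly stored TB

stored_tb = 0.0
held_owed = 0.0           # held amount not yet covered by earnings

for month in range(1, 13):
    stored_tb += 1.0                    # node grows by 1 TB this month
    held_owed += HELD_PER_NEW_TB        # new TB adds $10 to the held requirement
    earnings = stored_tb * EARN_PER_TB_MONTH
    to_held = min(earnings, held_owed)  # earnings go toward held amount first
    held_owed -= to_held
    payout = earnings - to_held
    print(f"month {month:2}: stored {stored_tb:4.1f} TB, "
          f"payout ${payout:6.2f}, uncovered held ${held_owed:6.2f}")
```

With these numbers the payout stays at $0 for the first four months while earnings catch up with the held backlog, which is exactly why I don’t see the incentive to keep going early on.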

Furthermore, there seems to be a lot of risk offloaded onto the SNOs, while there is one important factor they cannot influence: the number of pieces. When Storj believes their repair costs are too high or repair happens too frequently, they can adjust the number of pieces. So I believe they can bear a big share of the risk too.

True, but I’m thinking long term here. Assuming Storj figures out how they can be profitable, 3 years of good operation should be plenty to cover repair costs. If they can’t eventually manage that, this is doomed to begin with. Until then, let’s burn those tokens on subsidies. :rofl:

That ain’t happening. I’m running nodes on 15 year old HDDs, haha. Most will work just fine. Sure, those are very small, but I have some larger 10 year old HDDs too. It doesn’t make sense not to use those HDDs, and with the suggested system, starting a new node on them would impose a new held amount too. It would be more profitable to keep the old node running on the old HDDs.

Haha, we can settle for X years. The important part is the concept, not so much the number of years used. Storj can figure that out based on the data they have about nodes. But I think there is something useful in this idea. I like it.

It’s a difficult task. Right now, the repair worker must be trusted by the satellite, since it has a direct connection to the database.
It seems that to implement a truly distributed repair worker, there would need to be an implementation that closes this gap.

Have a look at this post to see how it would work: Let's talk about the elephant in the room: The Storj economic model (node operator payout model) - #219

That’s how it works now. But you could keep the same repair worker, except let it outsource the actual repair to distributed repair nodes. At that point it could probably run fine on GCP again as well, and you might not need as many instances, since it would just be a coordinator.
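As a purely hypothetical sketch of that split (none of these names are real Storj components or APIs): the trusted coordinator keeps the database access, hands individual segments to untrusted workers, and verifies results by hash before committing anything.

```python
# Hypothetical sketch of a repair coordinator that outsources segment
# reconstruction to untrusted workers and only trusts verified results.
# None of these names correspond to real Storj components or APIs.
import hashlib
from dataclasses import dataclass


@dataclass
class RepairJob:
    segment_id: str
    expected_hash: str  # known-good digest from the satellite's metadata


class UntrustedWorker:
    """Stand-in for a distributed repair node; does the heavy lifting."""

    def repair(self, segment_id: str) -> bytes:
        # In reality: fetch surviving pieces, run erasure-code reconstruction.
        return b"reconstructed segment for " + segment_id.encode()


def coordinate(job: RepairJob, worker: UntrustedWorker) -> bool:
    """Trusted side: dispatch the job, verify by hash, only then commit."""
    result = worker.repair(job.segment_id)
    if hashlib.sha256(result).hexdigest() != job.expected_hash:
        return False  # reject a bad or malicious result, reassign elsewhere
    # Commit to the metadata DB here; only the coordinator is trusted to do this.
    return True


expected = hashlib.sha256(b"reconstructed segment for seg-001").hexdigest()
print(coordinate(RepairJob("seg-001", expected), UntrustedWorker()))  # True
```

The point is that the workers never need database access; the coordinator only accepts output it can verify.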

I think it could be the same system with held amount, but it just wouldn’t stop at month 9; it would keep charging 10% of earnings until it covers the held amount.
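Something like this, if I sketch it (the 75/50/25% steps are the current schedule as I understand it, and the $10/TB target is the number from this thread):

```python
# Sketch of the suggested variant: the usual held-amount schedule, but
# instead of dropping to 0% after month 9, keep withholding 10% of
# earnings until the target (e.g. $10 per stored TB) is fully covered.
def withheld_share(month: int, held_so_far: float, target: float) -> float:
    if month <= 3:
        return 0.75  # months 1-3: 75% withheld (current schedule)
    if month <= 6:
        return 0.50  # months 4-6: 50%
    if month <= 9:
        return 0.25  # months 7-9: 25%
    return 0.10 if held_so_far < target else 0.0  # the proposed extension


held, target = 0.0, 60.0      # target: e.g. 6 TB stored at $10/TB
for month in range(1, 25):
    earnings = 10.0           # assumed flat monthly earnings
    held += earnings * withheld_share(month, held, target)
print(f"held after 24 months: ${held:.2f}")  # reaches the $60 target
```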

Yeah, but when I look at month 7 for example:

There it says storage at the end of month 7: 4.05 TB
Held amount: $27.77

That’s nowhere near $40.50 (at $10 per TB). So if I quit without a graceful exit in month 7, the repair cost of $10 per TB would not be covered, correct?
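(4.05 TB × $10/TB = $40.50, so $40.50 − $27.77 = $12.73 of the repair cost would be uncovered.)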

Sure, Storj still takes on some risk, but $27.77 provides plenty of incentive for GE already and it’s no worse than the current system. The alternative would be for the node to make no income at all in the first few months. I think that’s a bigger problem for node retention in the early months.

You’re right. Let them spin until they fall apart. The oldest one I’ve got is 11 years old, but I’m using it for temporary downloads (torrents and such). Lately it sometimes crashes; switch it off and on again and it’s fine. But if it were holding a node, I think I would already have been disqualified.
I was thinking more along the lines of a SNO that has n disks/nodes, the last one fills up and he buys a new disk. Instead of starting a new node on the new disk, I’d say it would be better to copy the oldest node to the new disk and start the new node on the oldest disk. Odds are that when a disk fails, it will be holding a very small node.

Anyway, my personal statistics tell me that spinning hours and data R/W are what matter. Maybe your 15 year old disk had an easy life, sitting in a PC that was turned on once a day(?). My 11 year old disk was born inside a 24/7 NAS. I had 4 identical disks; this one is the last still working. Oddly, it was the only one that ever had bad sectors, from a young age. Bad sectors are good…

My file server has drives with ~70k power-on hours. One drive has failed (in a rather nasty way where it would hang the SATA HBA), the other ones work OK.

However, the drives in my file server do not see high IOPS. I also have a script that accesses them once a minute or so to prevent them from unloading their heads.
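For reference, a minimal sketch of such a keep-alive script, assuming Linux-style mount points (the paths and interval are placeholders, not my actual setup):

```python
# Minimal keep-alive sketch: write a tiny file on each drive once a
# minute so the heads never unload. Mount points and interval are
# placeholders; adjust for your own setup.
import os
import time

MOUNT_POINTS = ["/mnt/disk1", "/mnt/disk2"]  # hypothetical paths
INTERVAL_SECONDS = 60

while True:
    for mount in MOUNT_POINTS:
        keepalive = os.path.join(mount, ".keepalive")
        with open(keepalive, "w") as f:
            f.write(str(time.time()))  # small write forces disk activity
            f.flush()
            os.fsync(f.fileno())       # make sure it actually hits the disk
    time.sleep(INTERVAL_SECONDS)
```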

Ah… you see more incentive when it’s Storj owing you?

We should probably move the Graceful exit discussion to a new topic. This is getting huge.
The problem with Graceful exit, as I see it… We are starting to get data at an increased rate, and small HDDs/nodes become rarer by the day. I think the big majority of new nodes are on bigger HDDs, with the top tier of 16-20 TB becoming more popular. These new nodes will (hopefully) get full before EOL, and I really don’t see anyone transferring 14-18 TB of data to a new drive, because it will take a very long time and the node will get disqualified. robocopy takes over 24 h/TB, so it is not an option for more than 10 TB of data; rsync will take a big part of the HDD access time away from egress, and with all the rechecking and resyncing it will probably take a month, also resulting in disqualification… I haven’t used rsync, so maybe I’m wrong, but this seems logical to me. So in conclusion, the big majority of nodes in 10-15 years will not Graceful exit; they will just crash and die. This is not because the SNO is careless or malintentioned. And I don’t see anyone pressing the Graceful exit button just because a few bad sectors are starting to pop up.
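For what it’s worth, the approach I’ve seen suggested is iterative rsync passes while the node keeps running, with only a short final pass while it’s stopped, roughly like this (untested by me; the paths and container name are placeholders):

```python
# Sketch of an iterative node migration: repeated rsync passes while
# the node keeps running, then one short final pass with it stopped.
# Paths and the stop/start commands are placeholders for a real setup.
import subprocess

SRC = "/mnt/old-disk/storagenode/"   # trailing slash: copy contents
DST = "/mnt/new-disk/storagenode/"

def sync() -> None:
    # -a preserves attributes, --delete removes files gone from the source
    subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)

for _ in range(3):
    sync()  # each pass only copies what changed since the previous one

subprocess.run(["docker", "stop", "storagenode"], check=True)   # placeholder
sync()      # final pass is short, so downtime is minutes, not days
subprocess.run(["docker", "start", "storagenode"], check=True)  # placeholder
```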

Imposing collateral may scare off people from becoming a SNO. One of the reasons I never did SIA was that they wanted collateral up front.
The current held amount approach was a good way to avoid the collateral issue.

I would like to see some participation from the Storj team; they know this kitchen from the inside and can steer this discussion. Right now it looks more like a monologue of SNOs, but it should be a dialogue.

If you can, please join the Twitter Spaces in about 15 minutes. I do my best to keep up with the conversations here; clearly I can do more to participate better and respond more frequently.

The Twitter Spaces starts at the top of the hour, so please join if you can.

After the Twitter Spaces I’ll go back through and try to add some more content to this channel as well, this evening and over the weekend.

I will listen in, but I can’t speak, as it’s late evening here and the children are sleeping. Thank you.

I joined Storj as a SNO about three years ago. I consider myself a hobbyist as I do not see a commercially viable path to be a node operator. In the last three years I have accumulated about 20 TiB of data and 2500 Storj tokens (worth about $750 now) accepting as much data as the network will send me.

I just calculated that as a commercial node operator if I could fill 10PiB, it would cost me roughly $5/TiB/month.
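Roughly, the shape of that calculation looks like this; every input below is an illustrative placeholder I picked to land near that figure, not my exact numbers:

```python
# Back-of-the-envelope cost model for a large deployment. All inputs
# are illustrative placeholders, not actual measured costs.
CAPACITY_TIB = 10 * 1024          # 10 PiB expressed in TiB
HW_COST_PER_TIB = 40.0            # drives, chassis, servers ($ per TiB)
HW_LIFETIME_MONTHS = 60           # amortize hardware over 5 years
POWER_COST_MONTH = 12_000.0       # electricity + cooling ($ per month)
BANDWIDTH_COST_MONTH = 8_000.0    # transit ($ per month)
STAFF_COST_MONTH = 24_000.0       # operations ($ per month)

hw_per_month = CAPACITY_TIB * HW_COST_PER_TIB / HW_LIFETIME_MONTHS
total_per_month = (hw_per_month + POWER_COST_MONTH
                   + BANDWIDTH_COST_MONTH + STAFF_COST_MONTH)
print(f"${total_per_month / CAPACITY_TIB:.2f} per TiB per month")  # ~$4.96
```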

I do believe that there is more demand for data storage than there is capacity, and that demand comes with widely varying willingness to spend on storage. Since Storj manages the network, pricing, and sales, it falls upon Storj to come up with a strategy that fits an economic model. Today I simply do not see what Storj is planning in order to shift from being a hobbyist network to a commercial network.

I should apologize to the community if this isn’t clear - the majority of the growth and adoption in terms of volume of data stored and revenue generated is coming from traditional web2 businesses and educational institutions who are transitioning from other web2 services, mostly AWS S3.

We have great customers in Web3 - early adopters like Pocket, Ankr, Harmony, Oortech and others showed leadership and helped as references to break into Web2.

Now we’re growing with innovative companies taking advantage of the performance, security and economics of the service, like CIMMYT, DiRAC/Univ. of Edinburgh, Atempo, Inovo, Iconik, ixSystems, Hammerspace, Metastage, Cloudflyer and more. These are established and credible tech companies.

So they are only transferring data to Storj for now; that’s why usage is so low, because they aren’t actively using it yet?

That’s not good. On average you don’t make $5/TB/month.
And if you were a big enough business, you would buy disks much cheaper…