Update Proposal for Storage Node Operators

I would even support something like this. Data is uploaded and deleted all the time, so if you have a bad connection you will not get enough data to store.
I see it today: I have about 100-130 Mbit of ingress, but my stored amount grows slowly because some old data is deleted all the time. Since this is object storage, you can't change a file in place; you can only upload a new one and delete the old one.
This means that if you have a bad connection, you will not get a lot of data.

In my case I have 250 TB of data.
At $3/TB, 3x250 would be $750 a month. In that case I would even agree to $0 egress payment, or some minimum like $1 per TB.

1 Like

$3 with an expansion factor of 2.7 turns into $8.10 storage cost per TB for Storj.

Even with an expansion factor of 2 it would be $6.
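The arithmetic behind these figures can be sketched as follows; the $3/TB payout and the 2.7x/2x expansion factors are taken from the posts above, while the function itself is purely illustrative:

```python
# Illustrative arithmetic only: the $3/TB payout and the 2.7x / 2x
# expansion factors come from the discussion above, not from Storj docs.
def cost_per_customer_tb(payout_per_tb: float, expansion_factor: float) -> float:
    """Storj's effective storage cost per TB of customer data.

    Every customer TB is stored redundantly, so node operators are paid
    for expansion_factor TB of raw capacity.
    """
    return payout_per_tb * expansion_factor

print(round(cost_per_customer_tb(3.0, 2.7), 2))  # → 8.1
print(cost_per_customer_tb(3.0, 2.0))            # → 6.0
```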

2 Likes

I propose paying $30/TB storage and $100/TB egress for nodes!
Let's all just propose random numbers!
It does not matter if they make business sense or if there are customers actually paying them! We just pull numbers out of thin air and hope for the best! We currently don't have enough customers, but if we raise egress by $3, from $7 to $10, I am sure we will find an infinite number of new customers.
/s

Edit: To the one flagging this post as off topic: this post is not off topic, but the random numbers people post here are.

I didn’t want to go off topic, but @john mentioned that part of the issue with elevated costs is that Storj is paying to provide free edge services.

Given that those cannot be run by SNOs, the next question should be why they are so expensive and how that cost can be reduced (if possible).

I know (and all of us should) that paying $20 per TB transmitted while the customer pays $7 is not feasible, but reducing costs in other areas may get us to the upper limit of the proposal :slight_smile:

This could also be something that makes clients think about hosting it themselves. Sure, for some clients this may not be reasonable, and Storj may make an exception if the deal is good, but it should not be free by default.

1 Like

my 2 cents…

Some SNOs have very reliable and stable setups.
Some SNOs have very fast internet and really cheap bandwidth
or both…

How about letting us choose the payment scheme we want? (And with some conditions?)

A draft example:

Plan A.
Pay more for bandwidth, pay less for storage

  • condition - Stricter uptime requirement (maybe a minimum of 90% uptime?)

Plan B.
Pay more for storage but less for bandwidth

  • condition - Minimum speed required (e.g. > 150 Mbps up and down)
  • Storj can perform speed tests between several Plan B nodes to verify that they maintain high speed at all times (and pay peanuts for the bandwidth used by these tests)

Plan C.
If a node cannot satisfy its selected plan, it falls back to this plan, and Storj will pay a reasonable amount (but one that MUST be lower than the above two options)

Plan D.
Diamond Node - Granted by Storj
Paying high prices for both storage and bandwidth
Conditions:

  • Super reliable and stable (e.g. uptime > 96% over x years)
  • Ultra-fast internet connection (e.g. more than 500 Mbps up and down)

Storj can identify node types and distribute storage pieces among the different types of nodes while achieving BOTH stability and high-speed efficiency (and also cut costs while SNOs are still happy… maybe?).
We SNOs also need to fight for what we want. (Try to qualify for Plan A or B or even D or… go home? :sweat_smile:)
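To make the draft concrete, the tiering could work roughly like the sketch below. All thresholds are the hypothetical numbers from the plans above, and `assign_plan` is just one possible way to pick the best tier a node qualifies for; none of this reflects how Storj actually selects nodes:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    min_uptime_pct: float   # rolling uptime requirement
    min_speed_mbps: float   # required up/down speed

# Hypothetical thresholds taken from the draft above, ordered best-first.
PLANS = [
    Plan("D (Diamond)", min_uptime_pct=96.0, min_speed_mbps=500.0),
    Plan("B", min_uptime_pct=0.0, min_speed_mbps=150.0),
    Plan("A", min_uptime_pct=90.0, min_speed_mbps=0.0),
    Plan("C (fallback)", min_uptime_pct=0.0, min_speed_mbps=0.0),
]

def assign_plan(uptime_pct: float, speed_mbps: float) -> str:
    """Return the best plan a node qualifies for, falling back to Plan C."""
    for plan in PLANS:
        if uptime_pct >= plan.min_uptime_pct and speed_mbps >= plan.min_speed_mbps:
            return plan.name
    return "C (fallback)"

# A node with 99% uptime and 600 Mbps qualifies for the top tier:
print(assign_plan(99.0, 600.0))  # → D (Diamond)
```

A fast but less stable node (say 85% uptime, 200 Mbps) would land in Plan B, while a stable but slow one would land in Plan A.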


Of course, Storj internally also needs to make some changes, especially to the edge services
(but I think this is beyond the topic and should not be discussed here)

As for how much Storj should pay for the different payment schemes
I have no idea :joy:
open for discussion

That was clearly said sarcastically, meaning the exact opposite, and was followed by a statement saying that Storj doesn’t work without node operators and that they intend to keep running a node rewarding for node operators. Quoting this without context is not OK.

Well, yes, but unless they can be reduced to 0, which I doubt, there should be an incentive for customers to avoid using them. More than there is now, as clearly current incentives alone don’t work.

4 Likes

If I had to guess, edge services probably get used the most by free-tier users because of the desire to share links, and this makes them ripe for abuse and heavy load when abusers write scripts to create multiple accounts, string them together, and use them as a larger repository or a free Chia drive. Storj Labs should consider keeping the free tier but requiring a credit card to use edge services. This would reduce abuse and likely cut edge overhead significantly. (There are alternatives to a credit card, as previously mentioned, like requiring phone numbers; whatever the choice, reducing the abuse would help reduce cost.)

11 Likes

I would support this as well, similar to how BB does it. Requires a credit card, but we can set billing limits to prevent accidental bankruptcy (pun intended :stuck_out_tongue: )

Wow. That should be taken into account, and some solutions should be put in place to avoid such a waste of resources :frowning:

It will not fix the issue of SNO payouts being much bigger than customer pricing, but it will for sure help reduce those edge service costs - which are several times higher than SNO payouts, as exposed before.

Yeah, I wonder if a TB of edge service bandwidth is more expensive than what the customers pay for it (even if nodes were not paid)…

Another way would be to limit the speed or limit the bandwidth to some small value. Good enough for testing or the occasional file share, but if you want to utilize the account fully, you need to use the uplink or self-host the relevant gateway.

Can Storj share what percentage of the production data is free tier? And what percentage of that is probably abusers who will leave as soon as it isn’t free? Then we would have a better understanding of the real state of the network.

1 Like

I honestly do not know why Storj hasn’t put a stop to this free-account abuse that is probably costing them a lot more than they need to pay. The first thing they should change is limiting the free account to 25 GB or less. No other company offers 150 GB free.

We don’t know what percentage are abusers versus legit users. People are using the service. Is “use” abuse? We would have to identify who has more than one account, by limiting them via credit card or phone number, etc. Then we would have an idea of who is abusing the system.

1 Like

Okay, and what percentage is the free tier, meaning the user isn’t paying for it at all?

A potentially slightly off-topic idea for using test data:
TL;DR: Use test data mainly for node verification.

Current Observations:

  • Test data was used to simulate larger surges of data and to test the reliability and performance of the network.
  • Those tests are mostly done and are no longer needed. There is enough customer data to replace most test data.
  • Because of that, the plan is to reduce test data. I approve of that.
  • My newest node, which is still at the beginning of vetting, is getting most of its data from the US1 satellite, which seems to be the production one. (If my understanding is right that test data is only on some of the satellites.)
  • I assume that new nodes have a pretty high chance of going offline after only a short time, as their operators are just testing out the network, seeing how it works, and deciding whether they should run it long term.

Ideas:

  • Give nodes larger chunks of (or only) test data that is used to test the reliability of the node.
  • Once the node is vetted, the test data is slowly replaced by real customer data.
  • Do not repair this test data. It is just test data that you can always recreate, so there is no need to incur egress costs to repair it.
  • Maybe even pay less for this test data? (As long as it is then phased out once vetting is completed and is not sitting there filling up a satellite with test data.) This includes the storage, egress, and repair payouts.

Effects:

  • You would now use test data only as a sort of cheap verification tool for node reliability.
  • You would not keep paying for test data long term, as it is only used during the vetting process.
  • You would save money on repair costs for all those new nodes that shut down soon after creation.

Requirements:

  • Test data would have to be spread out across all satellites to make this work.
  • The network would have to be able to differentiate between test data and real data.

Overall, I do not know how much impact these nodes that quit during vetting actually have, or whether such an implementation is financially sensible. But seeing as the network makes far more copies than it needs and only repairs a segment once a lot of its pieces have already failed, this might reduce the total need for repairing data, as more files might never reach that low a piece count.
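The "don't repair test data" idea boils down to a small decision rule. The sketch below is purely hypothetical: the repair threshold of 52 healthy pieces is made up for illustration and is not Storj's actual setting, and nothing here reflects how the satellite's repair queue really works:

```python
# Hypothetical decision logic; the threshold of 52 healthy pieces is
# made up for illustration and is not Storj's actual repair setting.
def should_repair(is_test_data: bool, healthy_pieces: int,
                  repair_threshold: int = 52) -> bool:
    """Queue a segment for repair only if it is real customer data
    whose healthy piece count has dropped to the repair threshold."""
    if is_test_data:
        return False  # test data can simply be regenerated
    return healthy_pieces <= repair_threshold

print(should_repair(True, 10))   # → False (never repair test data)
print(should_repair(False, 40))  # → True
```

This is also why the requirement above matters: the network would need a reliable `is_test_data` flag on each segment for any rule like this to be applicable.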

I don’t have that information.

I’m not sure you can isolate test data in a way that would let separate rules apply to it versus other data, such as not repairing it. Such filters among the data may not exist. I would certainly think that if the testing is finished, repairing test data is a waste of resources and money, and that they would have stopped repairing it if they could.

I don’t think they are looking to do anything that complicated here. Test data is, for the most part, pretty old, so only older nodes would be significantly impacted by its loss. Since this data isn’t pulled for egress, it doesn’t really produce much income otherwise. Replacing it with customer data is better, but of course, that replacement won’t be 1-to-1. I would imagine it will be spread out among all nodes, so newer nodes will gain more than the older nodes that carry test data. But since this will be replaced as new data is added, I don’t expect a dramatic loss or gain, rather something that happens slowly over time. Some nodes will just see their data go down and not up.

It purely depends on whether or not we see a lot of node attrition when the final decision on the numbers is in. If we lose a significant number of nodes, the nodes that remain will gain a lot of data if they have free space available.

Maybe if someone could look that up and eventually present it on the Twitter Spaces, that would be great. Or some predictions about how much data will probably leave the network as soon as it isn’t free.
And also, what is the latest ETA for putting the new, not-yet-agreed pricing into effect - is it by June 2023, by the end of the year, etc.?
The reason I’m asking is that we have some monthly expenses to run the nodes, and based on that we have to store a certain amount of data for a node not to be a net expense for us, which in my case I’m just starting to approach. If cancelling the free tier meant we would lose, let’s say, 50% of that data, then it would be a problem for some of us even before the payouts are lowered.
And by this I do not mean it shouldn’t be cancelled. I store backups on Storj for free myself, but I’m 100% willing to pay for every byte I have stored, and that should be the case for everyone.
Thank you.

A lot of discussion happens in the company Slack, as well as in meetings, I am sure. I am not in every meeting, and I’m certainly not in senior-executive-level meetings. (Maybe one day!) So I don’t have that info, but what I can say is that this is a proposal and that it’s open for discussion; then I believe they will discuss things, figure out what work needs to be done, roadmap it, and determine when they will announce the formal changes. I am sure John will talk about some of this in the coming Twitter Spaces meeting. But as he mentioned before, this is not something they are going to just drop like a bomb; there will be plenty of notice and time to plan around it.