Update Proposal for Storage Node Operators

Chia is still around, and that's basically just a green-wave hype train for the ignorant.
And StorjLabs has a business to run… they can't pay SNOs more than they charge their customers.

I'm not happy with the proposal and believe something will have to give for it to make sense for most SNOs, but I'm fairly confident that I will still be running storage nodes in a year…
I've been buying 18 TB HDDs for over a year now, so I'm fairly confident I can make it work.

Though most operators with smaller disk models will be pretty much screwed, which will be bad for the network…

We also knew this was coming for a long time: ever since SNOs convinced StorjLabs to reduce their customer prices to the level of Backblaze-style backup-tier services, there was bound to be a reckoning.

1 Like

Fully agree too!
I join this statement, especially the point about file loss. Losing the files behind a 4% audit failure immediately leads to the loss of the remaining 96%: clients will not receive them, even though they could still be used.

Because of that 4%, the client loses the other 96%. Example: "Problem: Your node has been disqualified, but the audit score is 96%"

My suggestion for disqualification: do not disable the node on the satellite completely, but partially penalize the payment and allow time for recovery, so that the client does not lose the other ~96%.

Sounds more like the basics of science to me…

That's interesting, because according to StorjLabs the number of customers and network usage has been going up with no signs of stopping…
It was said in many town halls.

Also, it wouldn't matter how many customers or how much service usage a company onboarded; if they pay out more than they charge for the service, more growth would only make the issue need resolving sooner.

Storj DCS is barely a two-year-old product, and with StorjLabs being a new company trying to sell its first product, it's not surprising it's a slow climb, especially since they are among the very first to market with this sort of product.

And unlike many other crypto projects, StorjLabs actually has a marketable product, one that doesn't just rely on some kind of virtual economy premised on being able to make money in the future.

Their product works, and it earns… today.
Do they need to readjust for future growth? Yes, but that's to be expected; until now their priority has been building the business using their runway…

But runways run out eventually…
AFAIK Storj is one of the more viable crypto companies, and it's not easy to make money leasing out computer hardware.

Also, internet storage needs are growing at an incredible rate and still accelerating; StorjLabs is very well positioned to become a major player in online storage.

Also, it was a proposal… now all we can do is wait and see what happens next. Given the near-100% negative feedback, I assume we will see another proposal once StorjLabs figures out how to meet expectations from all sides.

Also, if you think the project is dead, why are you even still here? I guess you could be grieving :smiley: The old payment structure was pretty nice; I'm also sad to see it go.

6 Likes

I’ll try to clarify this one more time. It is not feasible for satellites to audit all data on a node. It’d be too expensive. So the problem is that there is no way for the satellite to know which data is and isn’t lost. Instead, all the satellite can do is audit nodes and determine whether they reliably store and provide the correct data. Because this determination is made on a per node level, it is not possible to repair only what is lost, because the satellite doesn’t know what is lost.
And since nodes are inherently untrusted, the satellite also can’t trust when a node says “these are the pieces I lost”. A feature like that would open the door to massive exploitation by malicious node operators.
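To make the cost argument concrete, here is a minimal sketch of the idea (hypothetical numbers, not the real satellite audit code): by spot-checking a small random sample of pieces, the satellite can estimate a node's reliability cheaply, but it never learns *which* pieces are lost.

```python
import random

def audit_node(stored_pieces, lost_pieces, samples=100, seed=42):
    """Estimate a node's audit score by spot-checking random pieces.

    The satellite learns a *rate* of data loss, not the identity of
    every lost piece: finding every lost piece would mean checking all
    of them, which is as expensive as reading the entire node.
    """
    rng = random.Random(seed)
    checked = rng.sample(sorted(stored_pieces), samples)
    served = sum(1 for piece in checked if piece not in lost_pieces)
    return served / samples

# Hypothetical node: 1,000,000 pieces stored, 4% of them silently lost.
all_pieces = set(range(1_000_000))
lost = set(range(40_000))

score = audit_node(all_pieces, lost)
print(f"estimated audit score: {score:.2f}")  # close to 0.96
```

Note the asymmetry: 100 samples touch only 0.01% of the data, yet the score converges on the true retention rate, while the satellite still cannot say which of the other 999,900 pieces are intact. That is why repair-only-what-is-lost isn't an option.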

9 Likes

No, no, no, guys. Let’s distinguish one from the other.

It’s one thing when you have a truly decentralized project, and another thing when you have a company, an office and a director. These are different things.

It's one thing when you have a truly geographically distributed network, and another when you have miners sitting with 10-100 nodes each, working through a VPS. These are different things.

It’s one thing when you “say that you have corporate clients,” and another thing when you really have them.

I see that STORJ had a goal of showing network growth, so they did not fight the large pools and allowed them to work through VPNs with IP address substitution. Now they have concluded that the network has grown too much, and they are changing their strategy.

It's one thing if you really want to reduce payments without compromising the quality of the network: you close registration for new SNOs, investigate the registration of 2,000 nodes in one hour in early January 2023, and introduce KYC for SNOs. It's another thing when you simply make the work of small operators unprofitable.

These are all different things, and no one knows what is really on the minds of the company's management. This is not a DAO; the goal could be anything, from selling the company to robbing a bank and flying into space.

I don't think STORJ will close. As already mentioned, CHIA is still alive. But what the company's management is doing does not line up with the goals and objectives the project declares. Maybe I'm just used to DAOs; things are somehow clearer there.

But I would start by fighting the pools:

  • 1 node = 1 wallet address
  • KYC for the operator
  • 1 operator = at most 2 nodes
  • In that case, there is no longer any point in keeping the /24 subnet limit.
  • I would introduce mechanisms for identifying pools.
  • I would remove quick disqualification.
  • I would introduce a minimum file size of 2 MB, so that even a 1 KB upload is counted as 2 MB or more.

But I say it again: guys, everything tends to end. And if the company has decided to go this way, you won't be able to keep riding on their backs.

1 Like

I don’t disagree with this goal, but changes in payout may already have this effect. Running separate hardware is going to be a lot more costly in comparison to how it is now. I also wonder about the priority of this.

Furthermore:

It is trivial to just create multiple payout addresses, and this would actually take away a method of monitoring pools of nodes. Furthermore, there are legitimate reasons for running multiple nodes. In fact, if you have multiple HDDs, Storj Labs recommends running one node per HDD. This isn't an issue for decentralization either, as ingress on the same /24 subnet is spread among the nodes within that subnet.

This would put up a significant additional barrier to entry. Many node operators might not want to comply. But it is an option.

Why? See the legitimate reasons I mentioned before.

It does, for running multiple HDDs as separate nodes. There is no good reason to artificially limit this.

See literally the post above yours…

This would significantly slow down uploads and downloads and increase costs for customers for no good reason. I don't even see what the purpose of this would be.

I get what you're trying to do. But these methods won't prevent people from skirting the /24 subnet filter, and they introduce so many downsides for legitimate setups that they simply aren't practical. It's not that Storj hasn't attempted to limit this; that's why the /24 subnet system is even there. It's just really hard to do without significant downsides. And again, I wonder… how much of an issue is this really, right now? And will it partially resolve itself if the payouts drop?

3 Likes

No, it's not. If you run multiple nodes on your home PC, you still use a single IP address within a /24 subnet. I wrote before that the /24 limitation is bullshit and should be cancelled. STORJ should limit ingress per single IP, not per subnet.

I would remind everyone that the /24 limit is not there to limit operator traffic; it is there so that one operator in one place won't get more than one piece of any file segment. If that operator goes offline, no more than one piece per segment goes offline with them. Today ingress traffic is high enough that one /24 subnet can receive around 50-70 GB of ingress data a day, which is a very good rate for filling nodes. It's just that Storj deletes old data in the background all the time, so nodes don't fill that fast; some clients also delete old data, or upload new data and delete old file versions.

So the /24 limitation exists for real decentralization of files.
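A rough illustration of that selection rule, using Python's `ipaddress` module (the node IPs here are made up, and this is a simplified sketch, not Storj's actual node-selection code): candidate nodes are collapsed to their /24 network, so several nodes behind one subnet count as a single location and can receive at most one piece of a segment between them.

```python
import ipaddress
from collections import defaultdict

def group_by_subnet(node_ips, prefix=24):
    """Collapse node IPs to their /24 network, the way node selection
    treats all nodes in one subnet as a single location."""
    groups = defaultdict(list)
    for ip in node_ips:
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        groups[net].append(ip)
    return groups

# Made-up node IPs: three share one /24, the other two are elsewhere.
nodes = ["203.0.113.10", "203.0.113.11", "203.0.113.200",
         "198.51.100.5", "192.0.2.77"]

groups = group_by_subnet(nodes)
print(len(groups))  # 3 -- only three distinct /24 "locations"
```

Under this grouping, the three 203.0.113.x nodes share a single upload slot per segment, which is exactly why an outage at that one location can cost the segment at most one piece.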

3 Likes

Well, what is the difference in this strategy between /24 and /32? That in the first case you divide traffic between up to 254 potential operators, and in the second between one? The only point of the /24 restriction is to achieve higher decentralization by scattering traffic across different cities and countries. It has nothing to do with reliability at all.

Higher decentralization leads to more reliability because it limits the impact if a neighborhood or office building or provider goes down. That’s literally the reason.

4 Likes

Only at the provider level, because subnets are allocated to providers. But a lot of people here use DDNS, which does not imply any reliability at all. My neighbors on the /24 network are tens of kilometers away. I don't think any natural disaster could cover such a large area at once.

But all the same, this is a half-measure that brings nothing but inconvenience. Whoever wants to bypass the /24 limit bypasses it without any problems. As a result, the network is spammed, and payments to operators are about to be reduced.

It would be much easier to go through KYC and link each node to an individual. No one has died from passing KYC yet.

1 Like

DDNS doesn’t impact the underlying IP used for /24 filtering. But yeah, it’s not the most precise approach and it’s a half measure and there are ways around it. I agree with all of those points. KYC would be one way to do it I guess… though I’m not a big fan of that idea. Enough entities have my personal details already. I’d rather avoid that if I can. (Though to be fair, Storj already has most of this info about me anyway through other means)
Besides, if you legitimately run nodes in multiple locations (home, work, family) I see no problem with that either.

1 Like

Probably a "well known" example:

[image: list of node addresses]

I don't think these addresses are for different locations. It's speculation, but it is at least possible to have a bunch of IPs like that and point them all at one server rack. So to some degree the /24 limitation is artificial.

Well, that's what I'm talking about: the network is spammed with fake nodes. If this problem leads to overspending, then it needs to be solved directly, not by reducing payments to operators.

If people remove their pro setups, your Raspberry Pis will die within days at 100% CPU and 100% RAM usage, and the network will collapse. So pro hardware holds up network stability.

4 Likes

But him leaving the network would mean, assuming he is holding 500 TB of data, that every other node would get about 0.02 TB of data. In earnings terms, pretty much nothing. And if people are complaining about inflation and energy costs, then storing 20 GB more wouldn't help them.
And let's face it, he is running a truly professional setup. That can't be compared with some copy-pasted Docker commands on an RPi with reused second-hand drives, running on free internet and free electricity (and we know nothing is for free).
Him leaving would be a loss for the network, as he is invested in it, and I assume he won't be pulling the plug even after the new pricing.
And I don't believe there are many such setups, even though I also believe he isn't by far the biggest one.
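For the arithmetic behind that 0.02 TB figure: the 500 TB comes from the post, but the node count is my assumption (roughly 25,000 remaining nodes, which is what the 0.02 TB per node implies). The departing operator's data simply spreads thin:

```python
# Assumed figures: 500 TB held by the departing operator (from the post)
# and roughly 25,000 remaining nodes (my assumption, implied by 0.02 TB).
departing_data_tb = 500
remaining_nodes = 25_000

per_node_tb = departing_data_tb / remaining_nodes
per_node_gb = per_node_tb * 1000

print(f"{per_node_tb} TB = {per_node_gb:.0f} GB per node")  # 0.02 TB = 20 GB per node
```

In other words, redistributing one large node barely moves the needle for any individual operator, which is the point being made above.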

This is entirely a network design problem. Either they build a normal network for a normal load, with no place in it for a Raspberry Pi, or they reduce payments so far that even a Raspberry Pi doesn't pay off.

I don't need to think about all sorts of Raspberry Pis and other under-powered computers like the Odroid; their owners knew what they were buying.

Once again, simply and clearly: this is a network problem. The owners of the network have not decided whether it runs on pro equipment or on Raspberry Pis, and that indecision is what causes all the problems. You can't sit on two chairs at once; you need to choose who your customers are and who your operators are.

Tomorrow, another spammer will register 20,000 nodes in an hour, and you will be discussing payouts of $0.01 per terabyte.

But how would the number of nodes affect the payout per TB? I don't see that.
Edit: I'm talking about stored data, not egress. Yes, it might affect what you get for egress, but on the other hand we want the network to be fast.

@IsThisOn

Well, imagine I'm a farmer; right now I sell mangos to Walmart for $45 a kilo. Walmart says it will now pay me $10 per kilo for my mangos.

If I explain my costing structure to Walmart, then we can look at possible solutions. Maybe they buy 3x the mangos from me and that offsets my losses. Maybe they will see my costing structure and say, we will pay you $20 per kilo and cover shipping costs which makes you profitable.

It doesn't hurt to provide information from our side to Storj, so they can make informed decisions knowing what our struggles will be under their new pricing. I don't for a second think they will say, "Oh, egress is a problem in this country, let's boost their egress to $10 a TB to compensate," but at least they will know how their proposed changes will affect us, and if more operators who live around me drop out, they will have some context as to why.

1 Like