Update Proposal for Storage Node Operators

No, no, no, guys. Let’s distinguish one from the other.

It’s one thing when you have a truly decentralized project, and another thing when you have a company, an office and a director. These are different things.

It’s one thing when you have a truly geographically distributed network, and another thing when you have miners sitting with 10-100 nodes each and working through VPS - these are different things.

It’s one thing when you “say that you have corporate clients,” and another thing when you really have them.

I see that STORJ's goal was to show network growth, so they did not fight the large pools and allowed them to work through VPNs with IP address substitution. Now they have concluded that the network has grown too much, and they are changing their strategy.

It’s one thing if you really want to reduce payouts without compromising network quality: you close SNO registration, investigate the registration of 2,000 nodes in one hour in early January 2023, and introduce KYC for SNOs. It’s another thing when you simply make small operators’ work unprofitable.

These are all different things, and no one knows what is really on the minds of the company’s management. This is not a DAO; the goal here could be anything, from selling the company to robbing a bank and flying into space.

I don’t think STORJ will close. As already mentioned, even CHIA is still alive. But what the company’s management is doing does not match the goals and objectives the project declares. Maybe I’m just used to DAOs; things are somehow clearer there.

But I would start by fighting pools:

  • 1 node - 1 wallet address
  • KYC for the operator
  • 1 operator - 2 nodes
  • In this case, it no longer makes sense to keep the limit for /24 subnets.
  • I would introduce mechanisms for identifying pools.
  • I would remove the quick disqualification
  • I would introduce a minimum file size of 2 MB. So that even if you upload 1 KB, it would be converted to 2 MB, or even more.
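The minimum-size idea in the last bullet amounts to a billing floor. A minimal sketch of that rule (the function name and constant are hypothetical illustrations, not anything Storj implements):

```python
MIN_BILLABLE_BYTES = 2 * 1024 * 1024  # the proposed 2 MB floor (hypothetical)

def billable_size(actual_bytes: int) -> int:
    """Charge every upload as at least MIN_BILLABLE_BYTES, even a 1 KB file."""
    return max(actual_bytes, MIN_BILLABLE_BYTES)
```

So a 1 KB upload would be billed as the full 2 MiB, while anything already above the floor is billed at its real size.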

But I say again - guys, everything tends to end. And if the company has decided to go this way, you won’t be able to keep living off them.

1 Like

I don’t disagree with this goal, but changes in payout may already have this effect. Running separate hardware is going to be a lot more costly in comparison to how it is now. I also wonder about the priority of this.

Furthermore:

It is trivial to just create multiple payout addresses, and this would actually take away a method of monitoring pools of nodes. Furthermore, there are legitimate reasons for running multiple nodes. In fact, if you have multiple HDDs, Storj Labs recommends running a node per HDD. This isn’t an issue for decentralization either, as ingress on the same /24 subnet is spread among the nodes within that subnet.

This would put a significant additional barrier of entry up. Many node operators might not want to comply. But it is an option.

Why? See the legitimate reasons I mentioned before.

It does, for running multiple HDDs as separate nodes. There is no good reason to artificially limit this.

See literally the post above yours…

This would significantly slow down uploads and downloads and increase costs for customers for no good reason. I don’t even see what the purpose of this would be.

I get what you’re trying to do. But these methods won’t prevent people from skirting the /24 subnet filter, and they introduce so many downsides for legitimate setups that they simply aren’t practical. It’s not that Storj hasn’t attempted to limit this; that’s why the /24 subnet system is even there. It’s just that this is really hard to do without significant downsides. And again, I wonder… how much of an issue is this really currently? And will this issue partially resolve itself if the payouts drop?

3 Likes

No it’s not. If you run multiple nodes on your home PC, you still use a single IP address within a /24 subnet. I wrote before - the /24 limitation is bullshit and should be cancelled. STORJ should limit ingress per single IP, not per subnet.

I would remind everyone that the /24 limit is not there to limit operator traffic; it is so that one operator in one place won’t get more than one piece of any file or segment. If an operator goes offline, no more than one piece of a file goes offline with them. Today ingress traffic is big enough that one /24 subnet can get around 50-70 GB of ingress data a day, which is a very good number for filling nodes fast. It’s just that Storj deletes old data in the background all the time, so nodes don’t fill that fast. Also, some clients delete old data, or upload new data and delete old file versions.

So the /24 limitation is there for real file decentralization.
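The placement rule described above can be sketched as follows - group candidate nodes by their /24 prefix and pick at most one per subnet. This is a simplified illustration of the idea, not Storj’s actual node-selection code:

```python
import ipaddress

def subnet_24(ip: str) -> str:
    """The /24 block an address falls into, e.g. '203.0.113.7' -> '203.0.113.0/24'."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

def one_node_per_subnet(node_ips: list[str]) -> list[str]:
    """Keep at most one node per /24, so no subnet ends up holding
    two pieces of the same segment."""
    chosen: dict[str, str] = {}
    for ip in node_ips:
        chosen.setdefault(subnet_24(ip), ip)  # first node in each /24 wins
    return list(chosen.values())
```

With this rule, two nodes on `203.0.113.7` and `203.0.113.9` count as one location, which is exactly why running many nodes behind one home IP doesn’t multiply ingress.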

3 Likes

Well, what is the difference in this strategy between /24 and /32? That in the first case you divide traffic between potentially 256 operators, and in the second case between one? The only point of the /24 restriction is to achieve higher decentralization by scattering traffic across different cities and countries. It has nothing to do with reliability at all.

Higher decentralization leads to more reliability because it limits the impact if a neighborhood or office building or provider goes down. That’s literally the reason.

4 Likes

Only the provider, because subnets are allocated to them. But a lot of people here use DDNS, which implies nothing about reliability at all. My neighbors on the /24 network are tens of kilometers away; I don’t think any natural disaster could cover such a large area at once.

But all the same, this is a half-measure that brings nothing but inconvenience. Whoever wants to bypass the /24 limit bypasses it without any problem. As a result, the network is spammed with nodes, and operators are facing payout cuts.

It is much easier to go through KYC and link a node to an individual. No one has died from passing KYC yet.

1 Like

DDNS doesn’t impact the underlying IP used for /24 filtering. But yeah, it’s not the most precise approach and it’s a half measure and there are ways around it. I agree with all of those points. KYC would be one way to do it I guess… though I’m not a big fan of that idea. Enough entities have my personal details already. I’d rather avoid that if I can. (Though to be fair, Storj already has most of this info about me anyway through other means)
Besides, if you legitimately run nodes in multiple locations (home, work, family) I see no problem with that either.

1 Like

Probably “well known” example:
[screenshot: a list of node IP addresses]
I don’t think these addresses are for different locations. It’s speculation, but it is at least possible to have a bunch of IPs like that and point them all at one server rack. So to some degree the /24 limitation is artificial.

Well, that’s what I’m talking about - the network is spammed with fake nodes. If this problem leads to overspending from the pool, then the problem itself needs to be solved, rather than reducing payouts to operators.

If people remove their pro setups, then your Raspberry Pis will die within a few days at 100% CPU and 100% RAM usage, and the network will collapse. So they hold up network stability.

4 Likes

But him leaving the network would mean, assuming he is holding 500 TB of data, that every other node would get 0.02 TB of data - in earnings terms, pretty much nothing. And if people are complaining about inflation and energy costs, then I don’t think storing 20 GB more data would help them.
And let’s face it, he is running a truly professional setup. This can’t be compared with some copy-pasted Docker commands on an RPi with reused second-hand drives, running on free internet and free electricity (and we know nothing is for free).
Him leaving would be a loss for the network, as he is invested into it and I assume won’t be pulling the plug even after the new pricing.
And I don’t believe there are many of such setups, even though I believe he isn’t by far the biggest one.
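A quick sanity check of the figures in that post - note that the ~25,000-node network size is implied by the numbers, not stated anywhere:

```python
total_gb = 500_000   # the 500 TB the departing operator is assumed to hold
per_node_gb = 20     # the 0.02 TB (20 GB) claimed per remaining node

# Network size those two figures imply:
implied_nodes = total_gb // per_node_gb
print(implied_nodes)  # 25000
```

So the “pretty much nothing per node” argument holds only if roughly 25,000 nodes share the redistributed data; with far fewer nodes, each one’s share grows proportionally.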

This is entirely a network problem. Either they build a normal network for a normal load, with no entry for a Raspberry Pi, or they reduce payouts until even a Raspberry Pi doesn’t pay off.

I don’t need to think about all sorts of Raspberry Pis and other under-computers like the Odroid - their owners knew what they were buying.

Once again, simply and clearly: this is a network problem. The network’s owners have not decided whether they run on pro equipment or on Raspberry Pis, and that is what causes all the problems. You can’t sit on two chairs at the same time; you need to choose who your customers are and who your operators are.

Tomorrow, another spammer will register 20,000 nodes per hour, and you will already be discussing payments of $0.01 per terabyte.

But how would the number of nodes affect the payouts per TB? I don’t see that.
Edit: but I’m talking about the stored data, not egress. Yes, it might affect what you will get for the egress, but on the other hand we want the network to be fast.

@IsThisOn

Well, imagine I’m the farmer; I sell mangos for $45 a kilo now to Walmart. Walmart says I will now give you $10 per kilo for your mangos.

If I explain my costing structure to Walmart, then we can look at possible solutions. Maybe they buy 3x the mangos from me and that offsets my losses. Maybe they will see my costing structure and say, we will pay you $20 per kilo and cover shipping costs which makes you profitable.

It doesn’t hurt to provide information from our side to Storj, so they can make informed decisions knowing what our struggles will be around their new pricing. I don’t for a second feel like they will say, oh, egress is a problem in this country, let’s boost their egress to $10 a TB to compensate, but at least they will know how their proposed changes will affect us, and if more people who live around me drop out, they will have some context as to why.

1 Like

It’s very simple - you have test data and synthetic data in the network, which are uploaded to the nodes. More nodes means more expenses for the pool. Today the network is optimized for rapid growth, so they let everyone in indiscriminately and don’t fight the pools.

It was announced to us that the network will move to making a profit. The growth phase is over, so the first thing to do is start reducing the number of nodes, and the pools should be the first to go under the knife.

Once again, nothing prevents STORJ team from reducing the network to 100-200 of its nodes located on different continents. The team can do this with the pool’s money, and send all SNOs away.

I think that aggressively limiting nodes per “person” is a bad idea. If I run even a 20 TB node that’s somewhat profitable, I am only one data corruption away from being DQ’d. And if that also comes from a drive failure, then I have no hardware either. This takes the whole setup from at least self-sustainable to zero (or a loss, if it fails early and/or the drive is toasted). Having two nodes mitigates that a bit, but it still seems like too much work for close-to-zero profit.

It would be better to have multiple nodes running, so one failure means only some disturbance. It doesn’t have to be 10 or 20, but 4-5 seems reasonable. And all of them should be under one wallet to minimize payment fees. Then I can afford a UPS to secure my setup from power-loss-induced errors. My nodes are safe, my efforts pay off, the network is stable.

2 Likes

No network, no payment for storage - don’t forget that you are part of this network.

I don’t think so - where would they find free electricity and free Internet connectivity? /s
But on a real note, we are (or should be) using what we already have, and judging by the comments here, even that won’t cut it in many cases. I can’t imagine the expenses they would incur spinning up their own hardware, the logistics behind it, the man-hours, etc. And I guess it would have to be more than 200 nodes for them to be as fast as they are today.
But maybe that was the plan all along: to bootstrap the network with us and then move to their own equipment. But I believe in the people here, so I don’t think that is the case.

1 Like