Update Proposal for Storage Node Operators

What you’re describing are particulars; they don’t matter much. The company could find a balance that respects the interests of both the operators and the network. Right now there is no such balance, which is why the network is unprofitable.

Let them start working in this direction. We have written enough to them about how to run the business; they closed the relevant topic and, instead of discussing it, rolled out a proposal to reduce payouts.

Well, I offer them another solution: if they don’t want to listen to advice on how to run the business, then let them at least listen to how to get rid of the network’s oversupply without reducing payouts.

A very smart and very timely thought

So this could be the case for one provider with 10,000–20,000 subscribers, where each of its subnets has at least one node operator. What’s wrong with that? It’s a completely normal occurrence. If it could be determined that a node runs on a VPS, then yes, restrict it; but if it’s a physical connection without a VPS, then it’s fine.

Ok, I heard something about this but didn’t realize SNOs were apparently responsible. Sorry, I didn’t follow Storj as closely in the past. Clearly this was just an attempt by SNOs (many of whom don’t understand how business works) to get more data onto the network without fully thinking things through, but the fact that Storj actually gave in to this is absolutely absurd.

Again, sorry to say, but this just makes it sound like these people don’t really understand what they’re doing or how to run a business, especially looking at the situation we’re in now. And who wants to actually be the “cheapest” option anyway? Everyone knows you get what you pay for… to a large degree, anyway. Don’t shoot yourself in the foot by being known as the cheapest. And Storj in many regards is a better product than Backblaze anyway. So just… why?

You want to know who wants cheap? Everyday consumers who don’t have tons of extra cash. But Storj doesn’t target that market; they want to target businesses. That’s fine, but it’s businesses that are paying the higher prices for products. Oh, you think your average Joe is paying $24/TB when they can get it for $5? Generally businesses will not go with the cheapest option available. In fact many, especially large ones, will literally overpay for things simply because they THINK it’s better. That and being part of the big club helps too, but we won’t get into how that works here.

1 Like

Yeah, but you can’t just cherry-pick the stuff you like and ignore the stuff you don’t like. You can extrapolate the amount of data you will get based on your past experience. Many unknown factors, not an exact science. Fine. But do you extrapolate the shrinking rewards for you as a node? No. That is because you cherry-pick.

Wrong on many levels. First of all, I am not sure if it is dead yet. I never said that. I will make my guess after the Twitter space. My guess will be based on how they handle the tough questions. My guess is that they won’t even address them, because they know it does not look good. But dodging a question is also a kind of answer… I have written my questions as simply as possible in the other thread so there is no wiggle room and no excuses.

If they fail to answer them, I will have made up my mind about STORJ.
It all depends on management now. Second, even if I think it is dead, it is fun to hang around and watch the cult-like behavior. Probably to this day you can find people in the Celsius subreddit who think they will get back all their money. I just find that extremely interesting to watch. The tech and economics are also very interesting, and I like to discuss them on the forum. Third, I am not grieving. I was skeptical of STORJ because I dislike most of the crypto space, but to my surprise, here was finally a product and an actual business case. I had an external HDD lying around and wanted to test it. It was very interesting and I learned a lot. The HDD died and I haven’t bothered to set up something new yet. Next month I get a 144 TB TrueNAS system and will have at least 30 TB I don’t need. I am mostly interested in how special vdevs can speed up the filewalker process.

If you lose more than 4% of your data, something is very wrong with your node and you should get disqualified.

First off, you never sold to Walmart; you sold to your parents and neighbors who just wanted to support you in the beginning. Now you go to Walmart, and Walmart would say: “Sorry pal, we have another farmer who sells to us for $5. And even if we didn’t have another farmer, none of our customers would pay $50 ($5 for us) for a kilo of mango. Have a nice day.”

I don’t fully understand your message. Who is the provider? Who is the subscriber? A physical connection to where?

My point was that if all those IPs lead to one location in one data center, then that’s “not ok”. If there’s a /24 limitation in place, then it’s more tedious for such a person to collect those different IPs, but still doable, as that person is basically operating at industrial scale. On the other hand, if I want to run two nodes, one at my place and the second at my friend’s place at a different location within the same city, we may still end up on the same subnet if we are under the same ISP. Fair? Doesn’t seem so.
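
For anyone who hasn’t followed the /24 discussions, here is a minimal sketch of how grouping by /24 works in principle. This is my own Python illustration, not the satellite’s actual node-selection code: every node whose IP falls into the same /24 network lands in one group, and the discussions above treat each group as sharing a single ingress slot.

```python
# Illustration only: group node IPs by their /24 network.
# NOT Storj's real selection code, just the grouping idea.
from collections import defaultdict
from ipaddress import ip_network

def group_by_slash24(node_ips):
    """Map each /24 network to the node IPs that fall inside it."""
    groups = defaultdict(list)
    for ip in node_ips:
        # strict=False zeroes the host bits, giving the containing /24
        net = ip_network(f"{ip}/24", strict=False)
        groups[net].append(ip)
    return groups

# Two operators behind the same ISP can land in one /24 and share a group,
# while a datacenter spreading nodes over many /24s gets a group per subnet.
nodes = ["203.0.113.10", "203.0.113.77", "198.51.100.5", "198.51.100.200"]
for net, ips in group_by_slash24(nodes).items():
    print(net, "->", ips)
```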

Honestly I think it will make the issue worse since large operators will be the only ones still making money. The little guys simply won’t be profitable. And I’m sorry guys, but people are not going to run nodes as a charity, it just won’t happen.

For example, my provider has about 15,000 subscribers and more than 60 /24 subnets. I have many friends who know about nodes, and our IPs are in different subnets, though sometimes two nodes end up in the same subnet. But those are real physical connections. I also have a friend who connected 100+ nodes via VPS over a 100 Mbps line, while he himself is behind a gray (NAT) IP; that is not normal. In other words, you would need to determine whether a connection is a VPS or a direct physical connection from the provider, although it is probably not possible to detect the connection type.

One provider with 15,000 subscribers and only 60 /24 subnets means a maximum of 60 nodes. A VPS lets you get around this without restriction, which is bad.

The solution is to ban the use of VPS. But is it possible?

Dear STORJ team, here is my proposal:

KYC for all SNOs,
One person = 1 SNO = 1 wallet.
No more pools, no more /24 restriction.
No more 1 HDD = 1 node; if you have several HDDs, go buy a RAID card or set up LVM/ZFS.

Make STORJ great again!

Why do people keep associating the network cost with the number of nodes? Storj pays for the space filled and the bandwidth used, not for the number of nodes there are. And all the talk about the network being too big? Are you kidding me? So there’s close to 30 PB free, so what? Storj doesn’t pay for space that’s not being used. STFU about the network being too big already; it’s meaningless. Besides, how is Storj supposed to onboard any potentially big customers if there’s no f***ing free space available to fill? All the talk around this is meaningless.
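
To put that in numbers, here is a toy sketch. The rates below are made-up placeholders, not Storj’s actual prices; the only point is that payout tracks stored data and served egress, so unused free capacity costs the network nothing regardless of how many nodes advertise it.

```python
# Toy payout model with HYPOTHETICAL rates (not Storj's real prices).
STORAGE_RATE_PER_TB_MONTH = 1.50  # $/TB-month, placeholder
EGRESS_RATE_PER_TB = 5.00         # $/TB, placeholder

def monthly_payout(tb_stored, tb_egress):
    """Payout depends only on data stored and egress served."""
    return tb_stored * STORAGE_RATE_PER_TB_MONTH + tb_egress * EGRESS_RATE_PER_TB

print(monthly_payout(tb_stored=5, tb_egress=1))           # one 5 TB node
print(10 * monthly_payout(tb_stored=0.5, tb_egress=0.1))  # ten 0.5 TB nodes: same total
print(monthly_payout(tb_stored=0, tb_egress=0))           # 30 PB of empty space: $0
```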

4 Likes

RAID is a more dangerous point of failure than 4-5 independent nodes.

4 Likes

Really? And banks and large companies don’t know that? Somehow everyone uses RAID arrays, and the idea of “1 disk = 1 application” is not even considered.

But in any case, a pool, whether it is built on RAID or on one disk per node, is always by itself a single point of failure for 100 network elements. The operator can end up in the hospital, go on a binge, or die in a war or a natural disaster. He may simply get bored of it all, disconnect the cluster, and sell off the equipment.

Also, you can scale the number of nodes (1 drive = 1 node) much more easily than with RAID in terms of IOPS, since each drive handles its own IOPS instead of a pool of drives carrying the collective load of the entire /24 on itself, and you don’t spend I/O making redundant copies of data within the pool for recovery purposes.

At least that’s my understanding. However, this is very off topic.

1 Like

Hard drives scale perfectly both horizontally and vertically. If you want, buy one 18 TB HDD; if you don’t, buy eighteen 1 TB HDDs. This is not the point.

Nothing prevents the developers from optimizing the node code for large stores of tens of TB, so that you don’t have to start a new node when the old one stops filling up. Similarly, nothing prevents adding support for multiple disks in a single node. These issues should be solved by the developers, not by us.

The bottom line is that the pools are clogged with test and synthetic data, for which the network pays.

The RAID concept and the Storj concept are different; RAID is outdated and not needed for Storj.
RAID and Storj have very similar goals, but RAID is a local concept, while Storj is already decentralized and has integrated redundancy, so by default you would be adding redundancy on top of an already redundant system.

1 Like

Storaxa on Kickstarter I presume

I don’t see how any chip would die from a 100% load; they are designed to work that way. Also, I don’t see how a 1-2 disk setup with an RPi would push its CPU and RAM to 100%.

1 Like

Different use case.

If you have an array of 10 disks with 10 storage nodes on it and you “lose” 2 disks, you lose 10 storage nodes.

If you have 10 separate disks with a storage node on each and you lose 2 disks, no biggie. You lost 2 storage nodes. Who cares? The network is redundant, erasure-coded, etc.
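
To put the same comparison in numbers, here is a small back-of-the-envelope sketch (my own illustration, assuming a non-redundant striped/spanned pool; parity RAID levels would tolerate some failures at the cost of capacity):

```python
# Back-of-the-envelope illustration of the point above, nothing more.
TOTAL_DRIVES = 10
FAILED_DRIVES = 2

# Case 1: one big non-redundant pool hosting 10 storage nodes.
# Any drive failure takes the whole pool, and every node on it, down.
nodes_lost_pool = TOTAL_DRIVES if FAILED_DRIVES > 0 else 0

# Case 2: 10 independent drives, one storage node per drive.
# Only the nodes on the failed drives are lost; the rest keep running.
nodes_lost_independent = FAILED_DRIVES

print(f"Pooled drives:      {nodes_lost_pool}/{TOTAL_DRIVES} nodes lost")
print(f"One node per drive: {nodes_lost_independent}/{TOTAL_DRIVES} nodes lost")
```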

This approach, however, contradicts the mantra of using spare capacity, in which case I don’t see how your storage node ends up on anything but a RAID array… but that’s a separate discussion.

3 Likes

Raspberry Pi is a prototyping device. It’s not meant to be used in production. (People do it anyway and it’s amusing to watch, but that does not change anything.)

And if you do take all the precautions to try to make it reliable (a properly specced PSU, heatsinks, shielding, ESD protection, etc.), you might as well buy an industrial mini PC with upgradable RAM instead of tying up prototyping hardware in a substandard solution it was never intended to be used in.

Well, however you describe it, it is still able to withstand 100% load, and 2 disks won’t load it to 100% anyway.
It is just interesting how people (not someone specific, just an overall observation) get all high and mighty about RPi setups, reasoning that they will choke and drop off the network unlike a pro setup, while the real problem lies with those abusing the data distribution mechanics and creating an unnecessary point of failure, however small.

2 Likes

If you search here a little, you will find several posts showing that some RPi nodes are already overloaded.
When one node has a lot of ingress and egress, the USB bus on an RPi can’t transfer lots of small files fast enough, and RAM usage then rises significantly.