Update Proposal for Storage Node Operators

Looks like a bug to me. I will escalate that one. Thank you for letting us know.

5 Likes

If anyone is wondering who's making all the money running storage nodes, you can kinda see why Storj wants to cut prices back… I'm not really sure how, but maybe Storj needs to audit and make sure there isn't some kind of exploit going on.


2 Likes

I read almost everything that is written here, cheers :slight_smile:
First of all, thanks to everyone for the discussion and to the STORJ management for trying to agree on future prices.

Since I don’t understand who needs such volumes from business clients, or why they wouldn’t just take a promotional offer from Hetzner… I came to the conclusion that I should ask for test traffic to be disabled immediately and completely, and for the amount of test data stored on the nodes to be published.
Everyone writes about clients, everyone writes about tariffs, but none of us understand (we guess and interpret in our own way) how much space and how much channel bandwidth is really needed. Any value will be a “spherical horse in a vacuum” as long as the data is synthetic.

My suggestion:

  1. Immediately and completely disable synthetic traffic and data (synthetic data is misleading and simply loads the internet, the disks and the SNO; it gives the illusion of growth).
  2. Show SNOs how much test data they have on their node, so they can tell whether they need to expand, or whether the test data will be deleted and replaced with useful data.
  3. Publish a general report on the amount of test data in the network.

Then we will all understand whether we are doing something useful: are our disks occupied by millions of files just for the sake of temporary income, or is this a real benefit and real self-employment, generating income worth investing and suffering for?

4 Likes

The satellites for test data are US-2, Europe-North and Saltlake:
https://storjstats.info/

4 Likes

They can already run their own gateway, yes. But I didn’t necessarily mean they run it themselves, just that they pay for it separately. Storj could maybe negotiate some volume discounts for their customers to make it cheaper than when customers have to run it in the cloud themselves. And in some scenarios, customers can run a gateway on premise for free. But why would they, if they can use the hosted gateway for free as well?

Careful what you wish for. This tends to lead to a race to the bottom in prices. You’re competing against people who run on hardware they have online already. On my NAS I would keep dropping the prices until I get data. To be honest, I don’t think that model would do node operators any favors.

I’m running both internal nodes on always-online hardware as well as external HDDs that only run for Storj. So with my internal nodes I’m in the same boat as you. My question is not so much whether I would exit those (I doubt I would), but whether I could still afford to expand when they fill up. I’m not sure that’s still reasonable. Maybe at the $1/$5/$5 payouts, but at the lower limit of this suggestion, I don’t think it would make financial sense to expand anymore.

US2 hosts almost no data though. So I’m sure it’s possible to do it for that one. Might also be a bug as littleskunk mentioned.

The downside to this would be a significant drop in payouts for the most loyal, long-time nodes. While I agree in principle, that may not be very good optics right before a payout drop.

This is data from my oldest node for this month so far:

[screenshot: per-satellite earnings breakdown]
As you can see, the test satellites (europe-north, saltlake and, less importantly, us2) still account for a significant chunk of the payout on that node. Deleting all that data at once would cause a massive drop in payouts. It’s better to do it gradually, as suggested in the top post.

As for your points 2 and 3: I still have no reason to believe the split between satellites isn’t a good indication of how much test data is on the network.

2 Likes

I know which satellites are for testing and which folders their data is in. But I’m not sure that what comes from the other satellites is “useful” data either, rather than synthetic data for testing, for accumulating/reserving space, for creating an illusion, etc.
I know that you can exit the test satellites, delete their folders, and check their size yourself, but this is all guesswork.

It is also a guess that the test data can be proportionally deleted and replaced with useful data. We assume so, and I think it’s possible, but…

I proposed completely and immediately abandoning test traffic and synthetic data, and only then discussing prices.
Right now we are saying what we would like to receive, talking about profitability and trying to find a balance where everything is profitable, BUT it’s all pointless if we store test data, push test data over the network, and continue to pay for it from reserves rather than from customers!

We’re discussing an eventual solution to Storj losing money on every byte stored. There was always going to be a transition towards that solution. I think gradually getting rid of test data as well as gradually lowering the payouts is the right way to go, so the drop won’t be a sudden shock to node operators, causing a significant amount to leave at the same time.

That doesn’t mean we shouldn’t get rid of test data; it just means it should be done in a thoughtful manner, so as not to cause more harm than good.

5 Likes

I hope you let us scale easily if you cut prices. For example, 10 nodes per IP.

4 Likes

Care to expand?

That, at the current stage with decent ingress, would lead to a black market for nodes.

That’s pretty cheap at scale. It doesn’t take a lot of manpower to operate a big cluster if all nodes are configured in exactly the same way, automation saves a lot of effort.

There would be some benefit to that: customer would initiate only a single connection and would not be required to spend bandwidth on the expansion factor. I’d think this would be pretty useful to some customers.

In my case these amount to around 1% of the transfer value, so not exactly a significant amount. I can understand that some SNOs have higher taxes though.

You probably have missed my post above.

I disagree on the fact that Storj would have no advantages here. This is a viable path. Not going to discuss it in this thread though.

I expect this session to be pretty heated. I wish I could join real time.

Based on my observations, my HDDs could cope with 3x-4x of the current traffic. I did spend a lot of time tuning the setup, though. And I could probably tune it even further with some changes to storage node code… I’d be breaking T&C though.

The companies that would choose Storj over Hetzner would probably do so because of disaster recovery (the offer I linked is non-redundant storage, fine for nodes, but not for durable storage without additional, pretty significant setup) or latency (not everyone is in Germany or Finland).

1 Like

That’s a fair point, but it doesn’t solve the trust issue either. The customer is entrusting whoever does the upload for them that a file is correctly stored on nodes. Storagenodes are inherently untrusted entities, so they couldn’t take care of that. And if you have to verify afterwards, that then introduces new costs and overhead again. It’s not an easy problem to solve.

Based on previous attendance I’m not convinced. I’ve yet to see anyone apart from me mention they intend to join. But we’ll see.

2 Likes

So let me try to understand.

Storj is struggling to make the economic model that Storj created work, so Storj intends to solve the problem by taking the money from the SNOs that built the network Storj relies on to function?

Storj then added (multiple) edge services, despite many of us having symmetric gigabit fiber, or better. Today you are telling the SNOs that these same edge services cost so much that you need to reduce our compensation?

Storj proposes to do this by forcing loyal SNOs, individuals that have supported your network for years, into unprofitability right away. However, you will employ “some sort of surge payout” so SNOs don’t recognize the inevitable until you’ve found some way to (presumably) centralize the Storj network more so that it does not collapse as SNOs abandon the network, as you have abandoned them. This may not help the STORJ token price, part of the implied reason for the change(s).

My feedback is that this may negatively affect the relationship you have built with your SNOs, and that you might want to find a solution that is more balanced. Quite simply, in the 36 hours since I read this, I have been struggling to understand why I should keep my ~20TB of data with Storj, and not just fill it with data I personally care more about.

4 Likes

Here are my numbers. At the moment I’m using 8 TB drives at 92% capacity, so I’d like to upgrade to 20 TB soon.

I’m not taking into account:

  • compute
  • UPS
  • internet connections
  • /24 limit
  • cooling
  • man-hours (setup & maintenance)
  • vetting+time needed to fill the drives
  • or any other related services

So these are basically the bare-minimum running costs, with the cheapest drives that I could find.

|                            | Current | Proposal | New drives | 4 TB drive |
|----------------------------|---------|----------|------------|------------|
| HDD price ($)              | 150     | 150      | 360        | 100        |
| Disk space (TB)            | 8       | 8        | 20         | 4          |
| Storage ($/TB/mo)          | 1.5     | 1        | 1          | 1          |
| Egress ($/TB)              | 20      | 5        | 5          | 5          |
| Egress to disk space (%)   | 10      | 10       | 10         | 10         |
| Power usage (W)            | 10      | 10       | 10         | 10         |
| Electricity price ($/kWh)  | 0.6     | 0.6      | 0.6        | 0.6        |
| Electricity cost ($)       | 4.32    | 4.32     | 4.32       | 4.32       |
| Profit ($)                 | 23.68   | 7.68     | 25.68      | 1.68       |
| Profit ($/TB)              | 2.96    | 0.96     | 1.28       | 0.42       |
| ROI (months)               | 6.33    | 19.53    | 14.02      | 59.52      |
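The rows follow directly from the stated inputs; here is a quick sketch of the arithmetic, using the same assumptions (10% of stored data egressed per month, 10 W draw, $0.60/kWh, ~720 hours per month). The function name and structure are mine, not anything official:

```python
# Monthly profit math behind the table above. All assumptions come from
# the table itself; the function is just an illustration.

def node_economics(hdd_price, capacity_tb, storage_rate, egress_rate,
                   egress_pct=0.10, watts=10, kwh_price=0.60):
    """Return (monthly profit $, profit $/TB, ROI in months)."""
    storage_income = capacity_tb * storage_rate             # paid per TB stored
    egress_income = capacity_tb * egress_pct * egress_rate  # paid per TB egressed
    electricity = watts / 1000 * 720 * kwh_price            # $4.32 with defaults
    profit = storage_income + egress_income - electricity
    return profit, profit / capacity_tb, hdd_price / profit

print(node_economics(150, 8, 1.5, 20))  # current pricing: ≈ (23.68, 2.96, 6.33)
print(node_economics(150, 8, 1.0, 5))   # proposal: ≈ (7.68, 0.96, 19.53)
```

Plugging in the other two columns (360, 20, 1.0, 5) and (100, 4, 1.0, 5) reproduces the rest of the table.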

This means that newer or smaller nodes don’t have a chance, so even if the signup is still open, it’s not worth it any more.

I really hope that Storj finds a way to be profitable, cuz I wanna get those 20 TB drives :slight_smile:

5 Likes

The traffic with or without edge service is the same for storage nodes. Edge services work like proxies. The customer requests a download from edge services and edge services will download it from the storage nodes instead of the customer downloading it directly from them. From a storage node perspective, the result is the same.
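A conceptual sketch of that proxy behavior (all names here are hypothetical, not the real gateway code): the edge service fetches the pieces from the storage nodes on every request and returns the reassembled object, so from a node's perspective the transfer looks identical to a direct client download.

```python
# Toy sketch of an edge gateway acting as a pure proxy: no local state,
# every download still pulls pieces from the storage nodes.
# locate_pieces, fetch_piece and reassemble are stand-ins for the real
# satellite metadata lookup, node piece transfer, and erasure decoding.

def handle_download(object_key, locate_pieces, fetch_piece, reassemble):
    """Proxy a download: pull pieces from nodes, rebuild, return bytes."""
    pieces = [fetch_piece(node, piece_id)
              for node, piece_id in locate_pieces(object_key)]
    # Nothing is cached locally: each request reaches the storage nodes,
    # which is why nodes see the same traffic either way.
    return reassemble(pieces)
```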

5 Likes

That makes sense, thank you for clarifying it for me. I’ve modified my original post to align with this.

Does this mean that the edge services do not cache any data that is frequently accessed?

That would require a lot of local storage, and as of right now the edge service couldn’t charge the customer for it: the current accounting system requires storage nodes to submit orders back to the satellite, and with caching in place there would be no order to submit.
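A toy model of that order-based accounting (class and field names are mine, not Storj's): a node records an order only when it actually serves bytes, and those pending orders are what it later settles with the satellite. A cache hit at the edge would bypass the node entirely, leaving no order and nothing to bill.

```python
# Minimal illustration of order-based bandwidth accounting:
# serving bytes creates an order; settlement sums orders per piece.

class Node:
    def __init__(self):
        self.pending_orders = []

    def serve(self, piece_id, size):
        """Serve a piece and record a signed order for the bytes sent."""
        self.pending_orders.append({"piece": piece_id, "bytes": size})
        return b"\0" * size  # placeholder for real piece data

    def settle(self, satellite_ledger):
        """Submit accumulated orders to the satellite's ledger."""
        for order in self.pending_orders:
            satellite_ledger[order["piece"]] = (
                satellite_ledger.get(order["piece"], 0) + order["bytes"])
        self.pending_orders.clear()

# If an edge cache answered the download instead, Node.serve would never
# run, so no order would exist and the transfer could not be accounted.
```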

Understood. Thank you again.

The real bottom line here is that nobody will run nodes at these prices. Money makes the world go round (so to speak), and if there’s no money in it for SNOs, Storj’s world stops “going round”. People will do things that are profitable, even if the profit is small… but doing things to essentially break even is no more than a hobby… and trust me, people get bored of hobbies.

Besides that, there isn’t enough “extra” storage space in the world to support what Storj is trying to do here. And even with the extra that IS out there, how many of those people know about Storj? How many of them have even the basic skills to run a node? And of those, how many will essentially do it for free? I mean, c’mon… who in their right mind would want to shorten the lifespan of their drives for basically no reason? And for those that do, the first time their drive fails and they lose their personal data, or simply have to pay for a new drive, they’re going to think twice about doing it again. People do this because it’s profitable, plain and simple. No profit, no Storj. Furthermore, things have continuously been moving to the cloud, and people are moving to smaller and smaller devices such as laptops and tablets instead of desktop PCs… does Storj really think this will change? All of a sudden everybody’s going to go back to desktop PCs that are on all the time and have all sorts of “extra” space to give away for next to nothing?

Decentralized storage is a great idea, but the truth is Storj doesn’t charge enough to begin with to account for a “middleman” (Storj) taking a cut, especially with a 2.8 expansion factor. The proposed pricing will only really be somewhat profitable for very large SNOs. I don’t believe large SNOs are bad, though, because let’s face it: if Storj wants to grow to compete with other large providers, it will need large SNOs. However, due to the /24 limitations, that is obviously not what they’re going for. Now, the whole “don’t buy any hardware for Storj” thing is nothing more than a loosely worded legal disclaimer so people don’t try to blame Storj if things go south… like if someone can’t do math… or if Storj fails… or a rug pull… I’d hate to think something like that last one, but the way it’s starting to look, with all these things considered, it kinda looks like Storj wants to eventually phase out SNOs for a more centralized model. This is of course my opinion, but I simply can’t see any other way for Storj to be profitable. As I’ve stated in another post, even if SNOs gave away their space for free it would not make Storj profitable… not for a good while anyway, and that sure isn’t going to happen.

I really don’t understand the direction Storj is going. Too many things just don’t make sense. I won’t go into any more detail about that here, as I have in other threads. Besides, it appears many have brought up similar things throughout this thread. Some will understand what I mean and some won’t. It saddens me to say, but it kinda feels like Storj doesn’t really want us anymore. I mean, how did they think we would all react? I was obviously expecting cuts, but this? All I have to say is… lol.

5 Likes

See: Lyft, Uber, and virtually every other “sharing economy” start-up.

1 Like

Originally, Storj started with the idea that distributing data to multiple nodes makes it cheaper to store it, because nodes usually use cheaper home internet connections and cheaper hardware. However, due to erasure coding, audits etc, the data can be just as reliable or more reliable than if it was stored in a datacenter somewhere.
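That reliability claim can be made concrete with a rough calculation. Assuming a k-of-n Reed-Solomon scheme of about 29-of-80 (consistent with the ~2.8 expansion factor mentioned in this thread) and independent node failures, the chance of losing a file is astronomically smaller than the chance of losing any single node. Real durability also depends on repair, which this sketch ignores:

```python
# Rough illustration of erasure-coded durability over unreliable nodes.
# A file survives as long as at least k of its n pieces survive.
from math import comb

def loss_probability(k, n, p_node_fail):
    """P(fewer than k of n pieces survive), with independent node failures."""
    # The file is lost when more than n - k nodes fail.
    return sum(comb(n, f) * p_node_fail**f * (1 - p_node_fail)**(n - f)
               for f in range(n - k + 1, n + 1))

# Even at a pessimistic 10% per-node failure rate, a 29-of-80 file's loss
# probability is vanishingly small compared to that 10% single-node risk.
print(loss_probability(29, 80, 0.10))
```

The trade-off is the expansion factor itself: storing 80 pieces where 29 would reconstruct the file means paying for roughly 80/29 ≈ 2.76x the raw data.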

The idea was that customers use the uplink program (or some module) to access the data, getting many transfers in parallel, so, high speed but cheap. Ads for Storj also talk about built-in end-to-end encryption and the decentralization and higher reliability because of that.

However, it seems that customers are not interested in end-to-end encryption, parallel transfers or anything like that; they just want S3-compatible storage without having to run anything locally (there is a local S3 gateway the customer can run). Since the hosted gateway is centralized and cannot just be run on random nodes, Storj needs to pay more for these servers, but customers are not charged more for using them (and in that case, why would they bother setting up their own gateway, especially if the built-in end-to-end encryption is not needed?).

So, what we get in the end is almost the same as a regular datacenter S3 service, like Amazon’s. Storj will probably just start up a few hundred internal “nodes” and shift the data to them, completely transitioning to being almost exactly like Amazon, but less efficient and in some cases slower (time to first byte, etc).

6 Likes