Update Proposal for Storage Node Operators

Interestingly enough, Storj wouldn’t be the first to attempt a P2P “home level” CDN network. I encountered Youku’s (China’s YouTube) router several years ago; they tried to use the router as a local CDN node and allowed the owners to take a revenue cut from it, essentially utilising idle bandwidth for streaming cached videos and the like.

Edit 1:
With the potential introduction of a video streaming tax, it might be interesting to see how well suited Storj would be to providing such a service?


Just to clarify: this is 2.7M STORJ tokens, and it’s the total for “service provider” costs. I’m not sure this is the correct number to refer to, as the explanation of this line is the following:

In addition to Storage Node Operator payments, we make payments to certain service providers (e.g., community leaders who monitor our various forums, respond to questions from users, and perform other community-related tasks; bug bounty participants; consultants; contractors) in STORJ token (line 11).

That doesn’t sound like edge services to me.
I’d be more interested in the 22.5M “other” line.

Line 14, “Other,” is reserved to report activity that doesn’t fall into any of the other categories, including, for example, non-routine payments to service providers and carbon offset program payments. As noted above, in Q4 ‘22, 22.5M STORJ tokens were used in payments that included the repurchase of company shares, and general operations and liquidity purposes. To provide additional liquidity for general operations during uncertain economic times and in periods of growth, we also are liquidating a portion of our reserves on non-US exchanges through a partnership, and these flows are disclosed in this line item.

source: https://www.storj.io/blog/storj-token-balances-and-flows-report-q4-2022

Your use cases miss Storj’s own operating costs for running the satellites and the repair processes, both of which are quite costly and would remain even if no customers used edge services. They also miss the cost of giving a cut of income to the channel partners who onboard customers onto the network. This makes some of the suggested numbers look more reasonable than they are. I’m pretty sure there is currently no way for Storj Labs to be profitable if they pay more than the highest numbers proposed in this thread, even with edge services taken out of the equation.

But other than that, it’s a nice summary for people who aren’t yet aware of the basic cost structure! :+1:


Just curious, what costs do you include in this?

Well, the new payout amounts on their own say little about the resulting revenue of an average node by the time those amounts are implemented (they do look scary, though).
Does Storj expect customer data to grow at a scale large enough to balance out the loss of revenue for node operators?
Overall, my goal as a node operator is not to lose (and ideally to improve) my resulting revenue. If that is possible even under the newly proposed payout amounts by leveraging other factors, then why not? All I want is a profitable business model at all levels and for all participants in the project, and designing such a model is one of Storj’s most difficult tasks.

I see that the operators are not fully aware of the situation. It was already written in the first message: there will be not only a reduction in payments, but also a reduction in synthetic data! Please note, it is synthetic, not test (so there is synthetic data + test data + real data). And no one has said how much synthetic data is being poured onto operators’ nodes right now.

It would be correct to start with the question: what percentage of the data is synthetic? I suspect at least 90%, and maybe more.

A person here writes that he has 250 TB of data, but it must be understood that with the reduction of synthetic data, only 2.5-20 TB of those 250 TB may actually remain.
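
To make the arithmetic explicit, here is a minimal sketch (the 250 TB figure comes from the post referenced above; the synthetic fractions are just my guesses):

```python
stored_tb = 250  # total stored data reported by the node operator above

# Guessed synthetic fractions; 92-99% reproduces the 2.5-20 TB range above.
for synthetic_fraction in (0.90, 0.92, 0.99):
    real_tb = stored_tb * (1 - synthetic_fraction)
    print(f"{synthetic_fraction:.0%} synthetic -> {real_tb:.1f} TB real data left")
```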

I think this is a distinction without a difference. What used to be referred to as test data hasn’t really been used for testing, so “synthetic load” is simply the more accurate descriptor; I have no reason to believe it is something different from what we used to call test data.
The data was never there solely for testing purposes (it also served as a node operator incentive, space reservation, etc.), which means the numbers posted earlier in this topic should still be accurate.


Fair enough! You have one point of view, I have another. Only the company itself can settle it, if it explains everything and provides evidence. Fortunately, we have a blockchain here, and you can double-check everything yourself. At the very least, the data can be loaded into a dune.com dashboard and analysed with graphs and charts.

it would always make sense to have some level of test and synthetic data.

i dunno how StorjLabs does it, but here are the basics of what i would expect.

Test data would most likely be required to verify everything is working correctly down to the byte level; since customer data is encrypted, StorjLabs would be working blind without some level of test data for integrity verification.

Synthetic data would be a good idea for stress testing and for onboarding larger customers, since it can be difficult to gauge how much spare performance and capacity exists.

thus when onboarding large customers or running into hardware limitations, synthetic data loads can be adjusted to ensure Storj DCS functions reliably (a rough sketch of what i mean follows below).
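
a minimal sketch of that idea, purely my own illustration; the headroom target and all numbers are made up, not anything StorjLabs has published:

```python
# Hypothetical: grow or shrink synthetic load so total utilisation stays at
# a fixed target, leaving known headroom for real customer traffic.
TARGET_UTILISATION = 0.6  # assumed: keep the network 60% full

def synthetic_target_pb(capacity_pb: float, customer_pb: float) -> float:
    """Synthetic data (PB) needed to top utilisation up to the target."""
    return max(0.0, capacity_pb * TARGET_UTILISATION - customer_pb)

# as customer data grows, the synthetic share shrinks automatically:
for customer_pb in (2, 8, 14):
    print(f"{customer_pb} PB customer -> "
          f"{synthetic_target_pb(24, customer_pb):.1f} PB synthetic")
```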

ofc the ratio of synthetic / test data to actual customer data would change over time, as the network and the customer data grow.
the avg DCS customer will have some fairly clear requirements, which is all that needs to be accounted for; the test / synthetic data gives StorjLabs a time buffer so they can react and fix issues, e.g. by doing surge payouts for SNOs to upgrade the network, or raising SNO payouts.

another factor is that StorjLabs would want to stress test for extended periods, especially in the early days of the network, so they learn how everything behaves; just like you and i might try to break something right after building it, to verify it works correctly and will last.

i don’t think test and synthetic data will disappear, even if they might end up being only a few % of the entire network…

@jakesteele You guys keep looking at it from the wrong angle. Nobody cares what your costs are. I am sorry, but that is the hard truth, just like you don’t care how much the farmer got for the apple you bought at Walmart. We have to look at what a customer is willing to pay and what payout that price would leave for us. Only after all that can you ask yourself whether your setup would be profitable or not. You don’t say “yeah, I can produce mangos for $40 per kilo, so I will sell mangos for $45 per kilo”. That is not how it works.
If we looked at it from your perspective and did it your way, we could leave node prices where they are, charge customers $26/TB for S3 egress, and Storj would make $1 of profit. But why would any customer pay $26 for egress when they can get the same centralized product, even faster, with a better chance of survival, for only $10 from Backblaze?
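
To spell that arithmetic out (the $26 price, $1 profit, and $10 Backblaze figure are from the post above; the split between node payouts and gateway overhead is my own assumption):

```python
# Rough reconstruction of the numbers above; only the $26 price, $1 profit
# and $10 Backblaze figure come from the post, the cost split is assumed.
node_egress_payout = 20.0  # $/TB paid to nodes for egress (pre-proposal rate)
gateway_overhead = 5.0     # $/TB assumed for running the hosted S3 gateway
profit = 1.0               # $/TB
price = node_egress_payout + gateway_overhead + profit
print(f"required S3 egress price: ${price:.0f}/TB")  # -> $26/TB

backblaze = 10.0           # $/TB, the competing centralized offer cited
print(f"gap vs Backblaze: ${price - backblaze:.0f}/TB")
```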

Me neither, just a wild guess.

That one Storj needs to answer. Otherwise I will not set up my node, no matter what the price is. It is by far the biggest expense, and they have to be transparent about it.

You are right, and it even applies to both use cases. We can just hope that this is peanuts and costs basically nothing, otherwise the economics look even worse :grimacing:


it is described in a bit more detail:

22.5M tokens for the repurchase of company shares and other payments including general operations and liquidity purposes not otherwise described above.


I don’t think this is correct. Part of the reason they are making this proposal is to gauge how viable certain changes are for node operators. If a reasonable cost/income analysis of reasonable node setups shows they aren’t viable, that is important information for Storj to have. They can’t work without nodes, so they have a big interest in keeping reasonable node setups profitable.

Of course, “reasonable” is the key word there. If your setup is unreasonably expensive, like the mango example you mention, Storj Labs will likely just ignore it.

I don’t think it will be. Satellites do a lot of work and store quite a bit of metadata, and the repair processes also need to download and re-upload actual data. Luckily those services have been moved to more affordable hosting, but there are still node payout costs, bandwidth costs for hosting the services, and compute costs for erasure encoding/decoding. My guess is that these still have a fairly significant impact on costs, but I’d love to have better information about that…
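
As a rough illustration of why repair traffic adds up: to repair one segment, the repair workers download enough pieces to reconstruct it, re-encode it, and upload replacement pieces. A minimal sketch with made-up numbers (these are not Storj’s actual Reed-Solomon thresholds or hosting prices):

```python
# Illustrative repair-cost model; every parameter here is an assumption.
SEGMENT_MB = 64        # assumed segment size
K = 29                 # pieces needed to reconstruct a segment (assumed)
LOST = 15              # replacement pieces created per repair job (assumed)
EGRESS_USD_PER_TB = 5  # assumed cloud bandwidth price for the repair host

piece_mb = SEGMENT_MB / K
moved_mb = K * piece_mb + LOST * piece_mb   # download K, upload LOST pieces
cost = moved_mb / 1e6 * EGRESS_USD_PER_TB   # 1e6 MB per TB (decimal)
print(f"~{moved_mb:.0f} MB moved, ~${cost:.6f} bandwidth per segment repair")
```

Multiply that by millions of segments (plus the payout nodes receive for serving repair downloads), and it is easy to see why this line item matters.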

Literally quoted that (and more) in my original post.


Absolutely true! The number of nodes determines the network’s capacity and bandwidth. If only about 10% of the data on the network is real, then the company could survive with 2,000 nodes instead of 22,200. They could even change the model and run just 200 nodes around the world, each holding a few petabytes.

This is business; no one is keeping us here. Plus, there will always be those with zero-cost electricity and even free hard drives (decommissioned from data centers) who can keep large nodes running for years just out of interest.

That is why my guess was:
other == enrich management
service provider == gateways

Then it would be the final death blow for the project, and it would explain why it seems that management is cashing out. Have they given up already?

Yep, that is me, soon to have 30 TB lying around at no extra cost. But I will not bother to set that up if I think Storj will be gone in a year. That really depends on how they handle the Twitter Spaces. I have a pretty good nose for whether someone is really trying to make something work or just saving his own ass.

We are now in a period when large companies are decommissioning their 2017-2019 hard drives; capacities there are already over 10 TB, and it is possible to build reliable RAID60-class arrays of hundreds of terabytes.

Some vendor-locked models, such as HGST drives that haven’t been reflashed, cost almost nothing at all. As I have already written, the cost of data storage is, if not zero, then very low, far below $1 per TB. You can now buy used hard drives for $5-7 per terabyte; those who sell them buy “by weight”, “by the ton”, “by the van”, so their incoming price is roughly zero.

And if you build an array from old 1 TB disks that people simply throw away without even trying to sell, and you collect 200-300 of them, you get an array essentially for free. All that remains is to put it somewhere with free electricity, and there are plenty of such places!

Before the Chia mining boom, a decommissioned 24-disk LFF shelf cost about $50 retail and about $10 wholesale. A good cable cost more than a disk shelf with two expanders and two power supplies.

Yes, you can even do without disk shelves at all, as Chia miners do: just connect hard drives directly to a controller (an LSI 9264 costs about $20 including the BBU), plug 4-5 such controllers into a server, or connect 100-200 disks to one server through a $30 expander. A hobby is one thing; industrial token mining is another.
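
A back-of-the-envelope check of those numbers (prices are the poster’s; the 10 TB drive size and the RAID60 layout are my assumptions):

```python
# $/usable-TB for a used 24-bay shelf build; prices from the post above,
# drive size and RAID60 layout (two 12-disk RAID6 groups) are assumed.
shelf_usd = 50
drives, drive_tb, usd_per_tb = 24, 10, 6
raw_tb = drives * drive_tb                  # 240 TB raw
usable_tb = raw_tb * (drives - 4) / drives  # RAID60 loses 4 disks to parity
total_usd = shelf_usd + raw_tb * usd_per_tb
print(f"${total_usd / usable_tb:.2f} per usable TB")  # ~$7.45
```

Even doubling that for a host server, controllers, and cables leaves the capital cost per TB far below new-drive prices, which is the point being made above.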

It’s one thing when a company needs to show business growth, for investors or journalists, and quite another when it needs to make money. We were in the first phase; now we are moving into the second, and we must be aware that if you don’t know how to save money, if you have expensive electricity, or if you have no way to buy old decommissioned hard drives, then most likely you will be left behind.

And that is why you should not invest in hardware and should only use resources that would otherwise sit unused (maybe even the electricity). You can’t compete with the big guys :grin:

Chia is still around, and that’s basically just a green-wave hype train for the ignorant.
and StorjLabs has a business to run… they can’t pay SNOs more than they charge their customers.

I’m not happy with the proposal and believe something will have to give for it to make sense for most SNOs, but i’m fairly confident that i will still be running storagenodes in a year…
i’ve been buying 18 TB HDDs for over a year now, so i’m fairly sure i can make it work.

tho most SNOs with smaller disks will be pretty much screwed, which will be bad for the network…

we also knew this was coming for a long time; ever since SNOs convinced StorjLabs to reduce their customer prices to the level of Backblaze / backup-tier services, there was bound to be a reckoning.


Predicting the future by extrapolating datapoints from the past is not always a good idea.

and even with these “low prices” they still did not manage to onboard loads of new customers :grin:

The disconnect between the real world and SNOs is so fascinating to watch…

Fully agree too!
I join this statement, especially the part about the loss of files. Losing the few files behind a 4% audit-score drop immediately leads to the loss of the remaining 96%: clients will not receive them, although they could still have been served.

Because of 4%, the client loses the other ~96%. Example: Problem: your node has been disqualified, but the audit score is 96%

My suggestion for disqualification: do not disable the node on the satellite completely, but partially penalize the payout and allow time for recovery, so that the client does not lose the other ~96%.

sounds more like the basis of science to me…

that’s interesting, because according to StorjLabs the number of customers and the network usage have been going up with no signs of stopping…
it was said in many Town Halls.

also it wouldn’t matter how many customers / how much service usage a company onboarded; if they pay out more for the services than they charge, more usage would only make the issue need resolving sooner.

Storj DCS is barely a two-year-old product, and with StorjLabs being a new company trying to sell its first product, it’s not surprising it’s a slow climb, especially since they are among the very first to market with this sort of product.

and unlike many other crypto projects, StorjLabs actually has a marketable product, one that doesn’t just rely on some kind of virtual economy premised on being able to make money in the future.

their product works, and it earns… today.
do they need to readjust for future growth? yes… but that’s to be expected; until now their priority has been building the business using their runway…

but runways run out eventually…
afaik Storj is one of the more viable crypto companies, and it’s not easy to make money leasing out computer hardware.

also, internet storage needs are growing at an incredible rate and are still accelerating; StorjLabs is very well positioned to become a major player in online storage.

also it was a proposal… now all we can do is wait and see what happens next… given the near 100% negative feedback, i assume we will see another proposal when StorjLabs figure out how to meet expectations from all sides.

also if you think the project is dead, why are you even still here… i guess you could be grieving :smiley: the old payment structure was pretty nice, i’m also sad to see it go.


I’ll try to clarify this one more time. It is not feasible for satellites to audit all data on a node. It’d be too expensive. So the problem is that there is no way for the satellite to know which data is and isn’t lost. Instead, all the satellite can do is audit nodes and determine whether they reliably store and provide the correct data. Because this determination is made on a per node level, it is not possible to repair only what is lost, because the satellite doesn’t know what is lost.
And since nodes are inherently untrusted, the satellite also can’t trust when a node says “these are the pieces I lost”. A feature like that would open the door to massive exploitation by malicious node operators.
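
To illustrate the sampling argument (my own sketch, not the satellite’s actual audit code; the piece count, loss rate, and sample size are all made up): spot-checking random pieces gives a good estimate of a node’s overall loss rate, but it says nothing about which of the unsampled pieces are gone.

```python
import random

# Hypothetical node holding 100,000 pieces, 4% of them silently lost.
intact = [random.random() > 0.04 for _ in range(100_000)]

# The satellite can only afford to audit a small random sample.
sample = random.sample(intact, 500)
est_loss = 1 - sum(sample) / len(sample)
print(f"estimated loss rate: {est_loss:.1%}")  # ~4%, a per-node signal

# Knowing "~4% is lost" does not identify WHICH pieces are lost, so the
# satellite cannot repair only the lost data based on audits alone.
```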
