My suggestion for improving the economics of the platform:
Allow node operators to “bid” the price they charge for storage and egress, within a range set by Storj.
This bid would then be used, in addition to speed, to determine which nodes are chosen to store or serve data (not just first come, first served).
Some additional points/critiques:
Operators who can scale could do so in a way that makes Storj more competitive, and would be given more data as a result. Currently they have to resort to hackish workarounds to make sure all their devices are behind different /24 networks.
The network economics could automatically adapt to the way Storj is being used. (Low bandwidth usage? Lower prices would incentivize customers to add more high-egress data. High disk usage? Higher storage prices give operators the capital necessary to expand.)
This idea essentially sacrifices some decentralization and performance (of which, you could argue, Storj has a “surplus”, more than most customers really need) for economic competitiveness. Algorithms and price ranges would have to be carefully tuned to maintain the intended level of decentralization.
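To make the idea concrete, here is a minimal sketch of what bid-weighted node selection could look like. Everything here is an assumption for illustration (the bid range, the 50/50 blend of price and speed, the weighted sampling); Storj has not specified any such mechanism.

```python
import random

# Hypothetical sketch: each node bids a price within a range set by Storj
# and has a measured speed score in [0, 1]. Cheaper, faster nodes get a
# proportionally higher chance of selection, but every node keeps a nonzero
# chance, preserving some decentralization.

MIN_BID, MAX_BID = 1.0, 2.0  # $/TB range set by Storj (made-up numbers)

def selection_weight(bid, speed_score, price_weight=0.5):
    """Blend price attractiveness and speed into one selection weight."""
    bid = min(max(bid, MIN_BID), MAX_BID)                 # clamp to allowed range
    price_score = (MAX_BID - bid) / (MAX_BID - MIN_BID)   # 1.0 = cheapest bid
    return price_weight * price_score + (1 - price_weight) * speed_score

def pick_nodes(nodes, k):
    """Weighted sampling without replacement over (node_id, bid, speed)."""
    pool = list(nodes)
    chosen = []
    while pool and len(chosen) < k:
        # Small epsilon keeps even the worst node selectable.
        weights = [selection_weight(b, s) + 1e-9 for _, b, s in pool]
        picked = random.choices(range(len(pool)), weights=weights)[0]
        chosen.append(pool.pop(picked)[0])
    return chosen

nodes = [("cheap-fast", 1.1, 0.9), ("pricey-fast", 1.9, 0.9),
         ("cheap-slow", 1.2, 0.3), ("pricey-slow", 2.0, 0.2)]
print(pick_nodes(nodes, 2))
```

The `price_weight` knob is where the decentralization/competitiveness trade-off mentioned above would be tuned: push it toward 1 and the network races to the bottom on price; push it toward 0 and you are back to pure speed selection.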
Backblaze is listed on every cloud storage comparison list. They are a much more broadly known name. They also don’t ask their customers to trust a whole new kind of storage network, and they have more certifications because they operate in an environment those certifications were made for. Storj DCS in its current form has only been around for two years; Backblaze has been around since 2007.
There are barriers to entry that won’t just disappear. You need to break them down slowly, as they are mostly barriers of perception. It takes time. It takes more and bigger customers trusting their data to Storj, and public trust growing from that.
It doesn’t. Sure, the S3 gateway is there as an option, but you do realize you can use uplink to pull data directly from nodes, or libuplink to integrate that functionality directly into your app?
There seems to be a misconception here that Storj Labs only markets to web3 customers. This is far from what is actually happening. In fact, most of Storj’s customers are web2 companies that are not interested in the blockchain space at all but like the resilience, performance, and security that Storj DCS offers (besides the attractive price point).
Web3 companies make up only a small part of the total customer base, and they are certainly not the main focus of our marketing and sales team’s efforts to sign new customers. However, there is nothing wrong with diversifying the mix of clients and not exclusively marketing to web2 customers, when there are good use cases for Storj in the web3 space, too.
I would be interested to see what “web3 buzzword PR” you have specifically encountered, and where, that may have brought you to the conclusion that web3 was Storj’s only focus when hunting for customers.
I can say that as a European customer (more specifically, in Germany) my hard drive alone costs me 4€/month when spinning. I am dedicating 5TB of my 10TB hard drive to Storj. If I start earning less than 4€/month from Storj, I will definitely stop my node immediately, since I can then spin down my drive more often.
We have nodes in 100+ countries, each with its own regulations regarding fiat and even stricter regulations regarding foreign currencies. In some countries it is even illegal. So fiat is not an option. Just imagine supporting 100+ sets of regulatory forms, multiplied by the number of operators; it would require a lot of paperwork and related problems, even before we include the different tax requirements for each country and the rules distinguishing organizations from natural persons (in some cases you cannot receive a foreign currency as a natural person and must be at least an IE or LLC).
Even in the USA you need to conclude a contract, verified and signed, and also provide a tax report to the IRS. And by the way, that’s required for token payments too. See: New Guidelines for Storage Node Operators in the United States
Do you have any evidence that this scares people away? Especially in the context of the screenshot you posted, where the promise is enterprise performance, durability, and security.
I think web3 in itself is an ill-defined term to begin with, so it may not be that useful. But customers should be aware of the decentralized nature of Storj. I don’t know that the web3 term scares away anyone who wouldn’t also be scared away by knowing their data is stored in a decentralized way.
I understand your position. I myself considered Storj a web3 fad before accidentally stumbling on an actually good in-depth explanation on Reddit. One note, though:
Funny cat pictures aren’t tens or hundreds of gigabytes, and they don’t change every few seconds, especially not in a way where a simple HTTP Range request could just fetch some new bytes at the end of the file. The closest case is probably game updates, distributed as large patches across the game content, and those are tailored to specific content types as well.
You may be interested to see this webinar we recently held with the participation of three of our web2 partners, explaining why they are excited to use Storj DCS, a web3 company, to store their data rather than one of the big established web2 providers.
Having said that, we would really appreciate it if we could get back to the original discussion subject of this thread (the Storj economic model).
I see that it would be possible to reduce your processing fees by outsourcing the auditing and reconstruction of files to special nodes under contract. I would even agree to Storj inspecting such a PC via remote connection, as I have dedicated machines for Storj. This traffic could be priced at around $5/TB.
As far as I know, Storj uses Hetzner at around $20 per TB of egress?
It would also be good if Storj published some of the operational costs of running the network, i.e. servers, auditing, and recreation of files when availability drops below the threshold. As I understand it, these costs are much bigger than what is paid to storage node operators. It would be better to optimize those first and turn them into additional profit for storage node operators, rather than paying them to server companies.
That question did not receive any reaction so far.
So maybe an additional thought on that: I believe the current model favors egress over pure storage. This could mean that SNOs are incentivized to get rid of data that does not produce (much) egress, like backups. So if you run a node that is full and cannot be expanded for some reason, you could be tempted to do a graceful exit (which additionally returns your held amount) and start over, in order to receive more valuable egress-producing pieces again, rather than keep your existing node online. That does not sound healthy for the network altogether.
@BrightSilence
I have no data on this, but I believe you could tell: is this something we see? That a full node has a sharp drop in earnings because no new egress-generating pieces land on it?
Yes, I believe so. But by far the biggest impact is whether you still receive ingress. About 15% of ingress gets egressed in the same month. The moment the node fills up, that stops. On top of that, you see about 6.5% monthly egress on statically stored data. Now, I’m sure that average still has some preference for newer data, but most of my nodes show that older data still gets accessed, at maybe around 3% egress vs. stored.
Some of this is inconsistent egress on test data; we’ve seen egress drop to basically zero on the test satellites from time to time. Storj tries to replicate average egress, but I guess that process gets interrupted sometimes.
More importantly though, it’s just natural that the older data gets, the less it gets used. This does create a slight perverse incentive, but in my opinion not enough to make it worth starting a new node and exiting the old one. The time investment required to fill it back up would negate any gains. If possible, just ensure you always have free space so you never miss out on that juicy new data.
P.S. The split between egress on static storage and egress on ingress has been part of the earnings estimator for a while, so you can see the drop-off effect if you set it to, for example, 5TB.
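The figures quoted above can be put into a tiny back-of-the-envelope model. This is just a sketch of the percentages mentioned in this thread (~15% of a month’s ingress egressed the same month, ~6.5% monthly egress on stored data), not Storj’s actual earnings estimator:

```python
# Assumed figures from the discussion above, not official Storj numbers.
INGRESS_EGRESS_RATE = 0.15   # share of this month's ingress egressed same month
STATIC_EGRESS_RATE = 0.065   # monthly egress as a share of stored data

def monthly_egress_tb(stored_tb, ingress_tb, node_full=False):
    """Estimate a node's monthly egress in TB. A full node gets no ingress,
    so it loses the ingress-driven egress component entirely."""
    ingress_part = 0.0 if node_full else INGRESS_EGRESS_RATE * ingress_tb
    return ingress_part + STATIC_EGRESS_RATE * stored_tb

# A 5TB node with 1TB/month ingress vs. the same node once it is full:
print(monthly_egress_tb(5.0, 1.0))                   # growing node
print(monthly_egress_tb(5.0, 0.0, node_full=True))   # full node
```

The gap between the two printed values is the “sharp drop” a full node sees: the static-storage egress keeps coming, but the ingress-driven share disappears.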
Tying that back to the economic model: it would be good if Storj incentivized storing data for longer. Instead of paying a static amount per TB, why not increase that amount slowly over time, so that loyal long-term nodes get rewarded for reliably storing data?
I thought about that too. But simply using node age is not a good measure, I think; it might be too broad and too easy to game. An additional reward for long-term storage should be tied to the age of the pieces a node is holding.
My baseline is that a SNO should not even have to think about which kind of piece has more value for them.
Oh yeah, agreed. That’s what I meant to suggest in the first place. New pieces should still see more egress, so there’s no need to boost static storage income on those anyway. This would really help prevent SNOs from being tempted to somehow remove old data or exit satellites.
Taking this into account, maybe even $1/TB for storage of new pieces would work, if it rises to $1.50 for old pieces over time.
Might be worth thinking about something like that.
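A piece-age-based ramp like the one floated above is easy to sketch. The ramp length is an assumption (the thread only names the $1 and $1.50 endpoints), so treat this as an illustration of the shape, not a proposal with tuned numbers:

```python
# Hypothetical payout ramp: storage pay starts at $1/TB-month for a brand
# new piece and grows linearly to $1.50/TB-month once a piece is
# RAMP_MONTHS old. The 24-month ramp length is an assumed value.

BASE_RATE = 1.0    # $/TB-month for a new piece (from the discussion)
MAX_RATE = 1.5     # $/TB-month for an old piece (from the discussion)
RAMP_MONTHS = 24   # assumed ramp length

def storage_rate(piece_age_months):
    """Linear ramp from BASE_RATE to MAX_RATE, capped at MAX_RATE."""
    frac = min(piece_age_months / RAMP_MONTHS, 1.0)
    return BASE_RATE + frac * (MAX_RATE - BASE_RATE)

def monthly_storage_pay(pieces_tb_by_age):
    """pieces_tb_by_age: list of (age_months, stored_tb) buckets."""
    return sum(storage_rate(age) * tb for age, tb in pieces_tb_by_age)
```

Because the rate is attached to piece age rather than node age, exiting and restarting a node resets every piece to the bottom of the ramp, which is exactly the anti-churn property being discussed.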
I would prefer that we have a separation of concerns, i.e.
SNOs add value by storing data → incur the cost of storing data → get paid for storing data
SNOs add value by sending data → incur the cost of sending data → get paid for sending data
The cost and the value added for each activity then pass on to customers, separately. That way, the incentives of SNOs and value of service to customers can be well aligned. That’s the ideal.
As an aside, long-lived, rarely used data sounds like cold storage. Cold storage is usually less expensive to store but more expensive to retrieve.
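That separation of concerns can be sketched directly: storage and egress are metered, priced, and paid independently, which is also what makes a cold tier trivial to add (lower storage rate, higher retrieval rate, nothing else changes). All rates below are made-up illustrations, not Storj’s pricing:

```python
# Hypothetical per-tier rates; storage and egress are priced independently,
# mirroring the "separation of concerns" idea above.
TIERS = {
    "standard": {"storage": 1.5, "egress": 6.0},   # $/TB-month, $/TB
    "cold":     {"storage": 0.8, "egress": 10.0},  # cheaper to hold, pricier to pull
}

def monthly_payout(tier, stored_tb, egress_tb):
    """Pay separately for the two activities: holding data and serving data."""
    rates = TIERS[tier]
    return rates["storage"] * stored_tb + rates["egress"] * egress_tb
```

With this structure, a SNO holding mostly backup-style data is paid fairly for storage alone and never has a reason to prefer one kind of piece over another.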