Storj and competitors

So, before you all start to spit on me for posting this here, I want you to know that I have a substantial investment in Storj and thus an interest in its success. I came across a decentralized cloud storage comparison on Reddit (this article), and I'm hoping to get some real answers here.

Although the article seems biased, I am not interested in making attacks; I am, however, very interested in the spreadsheet provided, as it seems thoroughly put together, factual, and tested by the author.

This is the spreadsheet storage comparison.

So, storage node operators, can you offer any counterweight to what has been presented here? I am genuinely interested in understanding the bits and pieces that make up the different decentralized storage systems.

Hoping for a valuable discussion.

Welcome @tiha.

I have been an SNO since December and find the article reasonable, although I can't compare the different offerings other than Storj and Filecoin.

As long as Storj has a customer base, I think it is a reasonable platform for SNOs, although the uptime requirement will be the hardest to satisfy.

I originally looked at IPFS and then Filecoin last year, but the support was miserable at the time and I could make no headway as a miner. About a month ago I jumped back in to see if they had made progress, and I am currently running a single miner. The entry costs are absurd (it is really geared toward large commercial miners, mostly in China), with a base rig running around $5k and easily growing to $20k-$50k. Their two biggest risks are an unknown market with unknown ROI, and a business model that is risky at best (I need to put up how much escrow, and risk losing it due to a technical problem?). We are seeing this cause massive headaches on the calibration network right now. Storj has an escrow mechanism as well, but not advance escrow like Filecoin; rather, your payments are held back until your reliability is well proven.

  1. Sia requires one to pre-fund the node wallet with SIA in order to advertise the node on the network and to process data storage contracts. So Sia is not as easy to set up and start running as Storj. My guess is that Storj will see much greater adoption among willing node operators than Sia, due to the complexity of setting up a Sia node as well as the requirement to pre-fund a node… AND pay Ethereum transaction fees, which are sky high at the moment.[^Oops] In this way, Storj is in a much better position than Sia, because Storj does not depend on a blockchain for the product to function. Payments to Storj node operators can be modified or collected over time… making the rise in Ethereum network access fees much less of an issue.

  2. Filecoin is still in Beta, or maybe Alpha, as far as I remember… The Filecoin network works with IPFS, but not as a component of IPFS. My guess is that long term, Filecoin or something like it will supersede Storj… however, there's a long road to travel before Filecoin reaches the developmental milestone that Storj has already reached.

  3. Swarm is interesting. However, it requires a significant investment to run a master node.

Pretty much, Storj is the only realistic option for regular tech people to start with right now. And it's getting better with each release; the last 12 months have been amazing. Frankly, I was curious at the beginning, but suspicious… Now I am excited to keep my node running, and I look forward to a bright future for the Storj network, which will probably run well alongside Filecoin for the next decade.

[^Oops]: I forgot that Sia runs its own blockchain. The Ethereum access fees still apply if one is starting up a Sia node and/or moving/buying SIA coins through crypto exchanges.


Honestly, the premise of the article is kind of wrong. I wouldn’t say decentralized storage platforms are the competition, but storage platforms in general. It’s probably wise for decentralized platforms to market the ideas to a broader audience together.

That said, I have looked into running a node on all the mentioned platforms, with the exception of 0Chain. For all of them, the upfront costs are what sent me looking elsewhere, whether in the form of high hardware requirements or buy-in. Storj has a very low barrier to entry compared to the other platforms: the software runs great on a Raspberry Pi 4 and there is no upfront buy-in at all, so you can just test the waters.

As for the customer side: yes, Storj/Tardigrade relies on somewhat centralized satellites (the network will support community-hosted satellites in the future). But the trade-off is that it's the only high-performance, SLA-backed decentralized option right now. Additionally, the low barrier to entry for node operators means the network consists of many smaller nodes, compared to some of the other platforms, which due to high requirements and upfront payments often have fewer and larger nodes. This arguably means that the data itself is in some ways more distributed.

As an active community member I can say there is a lot of active development going on and the product is getting better and better. Recent months have been especially good for SNOs, with great updates to the web dashboards for nodes with more information. All code is open source and there is a public roadmap available. I don’t really understand the complaints about lack of transparency. In fact I find them very accessible here on the forums and willing to share what they are working on as well as incorporating community feedback.

I became an SNO (back then called a farmer) while v2 was still live. Even back then the barrier to entry was by far the lowest, but v2 had lots of issues. V3 is a completely different beast, though. I recommend reading the v3 whitepaper if you're interested in how things work. Basically, it went from an interesting platform that did a great trick to a very solid design that has thought of everything.

The official earnings estimator was kind of a mark against them for a bit, but to be fair, it was never intended to mislead. It predated any significant testing of the network and was thus built on a lot of assumptions about possible future use. That said, I don't think it's really worth mentioning anymore, as it has been taken offline. At the time, I made an extensive suggestion to replace it and even made my own alternative in Google Sheets. You can find that here: Realistic earnings estimator
I don't fault them for focusing on the core product first and this estimator second. Storj staff on the forum now regularly point people to this alternative, which further shows there was never an intention to mislead.

Honestly, I can't currently see serious businesses considering any of the other major decentralized storage platforms. I hope that changes, but for now Tardigrade is the only one I would feel comfortable pitching to a local business looking for cheap cloud storage. Honestly though, I think it would be good for all players in this market if there were multiple good decentralized storage solutions, so I'm rooting for all of them.

Whoops, sorry… wrote a novel again. :wink:


Thanks for the extensive replies, really very much appreciated!
How about the economics of it all? What is the ROI like for SNOs and miners?


It may have been slightly drowned out in the wall of text on my previous post. But have a look at this link.

This should give you the best idea of potential earnings.
I would say Storj pays very fairly for the services provided. It's sustainable long term, because there is something in it for the customer, the node operator, and Storj alike.


My current ROI works out to around 10 months' payback per storage node (where one node is one HDD), factoring in power consumption and hardware apportionment at new-hardware prices, and not reflecting Storj token price fluctuations.
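For a rough sense of how a payback figure like that comes together, here is a minimal sketch. Every input below is my own assumption for illustration, not an actual Storj payout or hardware figure:

```python
# Hypothetical payback-period sketch for a single-HDD node.
# All figures below are assumptions for illustration only.
hdd_cost = 120.0          # $ for a new drive (assumption)
monthly_power = 2.0       # $ electricity per node per month (assumption)
monthly_earnings = 14.0   # $ average payout per node per month (assumption)

net_monthly = monthly_earnings - monthly_power
payback_months = hdd_cost / net_monthly
print(f"Payback: {payback_months:.0f} months")  # 10 months with these inputs
```

With these made-up inputs the payback lands at 10 months; plug in your own drive cost, power price, and observed payouts to get a number that matches your setup.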


Seems like a question for the 0chain forum. Or were you just here to complain?


I'm asking tiha, not you…

No, you’re asking on a public forum, where everyone is free to respond. And I’m saying you’ll likely get a quicker and better answer if you ask that question in the correct place. I’d say a 0chain forum probably has a lot more information about 0chain than people here.

Hi, I am the author of that article.

I applaud you for creating a much more realistic node estimator, but I think you are too generous in saying that having such a misleading calculator up for so long is forgivable, especially considering the web devs are likely different people from the 'core product' team. I checked back, and I actually pointed this out to them in November last year; see my tweet:

Your comment that Storj has a lower barrier for SNO entry is a double-edged sword: it also allows more casual SNOs to populate the network without serious commitment.


Welcome to the forums! Good to see you here!

I don't think I said anything close to that. When dealing with a company, I don't think in terms of forgiveness. When there is a problem, I point it out and I keep coming back to it until it's solved. The post I linked, where my alternative calculator lives, used to be a post complaining about the unrealistic original one; the original post is still there. Clearly we were on the same side. I just don't see the use of holding a grudge about something that got fixed.

And yes, it does matter to me that there was clearly no intent to mislead, because that speaks to the trustworthiness of Storj Labs. In the end, my conclusion is that they can be trusted to do the right thing; it just may take some time before they get there. Seeing how responsive they have been to concerns posted here on the forums, I even see improvement on that point. But trust me, I'll take up arms to fight for making things better as soon as I see something that I think needs to change. For now, let's look to the future instead of dwelling on the past.

While this is true when looking at individual nodes, thanks to the vetting system, held-back amounts, and, most importantly, erasure coding for redundancy, this really isn't a problem for the network. There is already a fairly solid base of trustworthy nodes, and a few of the newer nodes being unreliable is not going to endanger data. You didn't really get into how redundancy and data protection are done on each distributed network in your article, but I think that is essentially where the biggest difference in "production readiness" originates.

Depending on how deep you want to go into this, I really recommend reading the white paper: Storj Whitepaper V3
In short, Storj splits uploaded files into 64 MB segments, which are then erasure coded into 110 pieces. These are uploaded to 110 different nodes, but the upload stops once 80 have finished successfully. This ensures that slow, offline, or non-responsive nodes don't impact upload performance. Of these 80 pieces, only 29 are needed to recreate the segment. Downloads similarly start more transfers than necessary to account for slower or less reliable nodes, but stop when 29 pieces are received. As for the resilience of data at rest: as soon as the number of available pieces on the network drops below 52, repair starts, which essentially downloads 29 good pieces and recreates enough new ones to bring the total back to 80.
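Those numbers can be captured in a small sketch. Treat the constants as the figures quoted in this post, not canonical network settings (the real values change over time):

```python
# Sketch of the erasure-coding parameters described above.
# Constants are the figures from this post; real network values vary.
K = 29         # minimum pieces needed to reconstruct a segment
REPAIR = 52    # repair triggers when availability drops below this
SUCCESS = 80   # upload stops once this many pieces are stored
TOTAL = 110    # pieces generated per 64 MB segment

expansion = SUCCESS / K  # storage expansion factor on the network
print(f"Expansion factor: {expansion:.2f}x")  # ~2.76x

def needs_repair(available: int) -> bool:
    """A segment is queued for repair once too few pieces remain."""
    return available < REPAIR

def repair(available: int) -> int:
    """Repair fetches K good pieces and regenerates up to SUCCESS total."""
    assert available >= K, "segment unrecoverable: fewer than K pieces left"
    return SUCCESS

print(needs_repair(51), needs_repair(52))  # True False
print(repair(51))                          # 80
```

Note how far apart K and REPAIR are: up to 23 pieces can vanish after a repair before the next one is even needed, which is why a handful of flaky nodes doesn't threaten the data.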

I skipped a few steps, and the exact numbers change from time to time, so forgive me if they aren't 100% correct. The point is that the network can easily work around less reliable nodes, especially considering that misbehaving nodes will be disqualified soon anyway. The network is resilient to the point where disqualification isn't even so much a data-retention measure as a measure to limit the recurring repair costs that unreliable nodes would incur.

That's a really long way to say the network will be fine. If you want the longer version, please give the white paper a try. If you want a shorter version… apparently I'm not your guy. :wink:
Anyway, welcome to the discussion!


110 different AND IP-spatially separated nodes.

This is a very important component of the decentralization of the data. Access to the data is centralized through the satellites, but the data itself is spatially decentralized across IPv4 space.


Thank you for your reply, I occasionally check the forums and notice your replies are extremely helpful.

I have been an SNO since December and am already familiar with the whitepapers. However, the repair threshold of 52 is not documented, as far as I know; I was only made aware of it from your posts. Obviously, since clients rely on the satellites anyway, it is their concern to adjust this figure as they choose, but this lack of transparency concerns me.

As does the fact that my Reddit post was not visible when browsing new posts in r/storj. (I am not accusing anyone; it could just be a Reddit configuration thing rather than deliberate suppression, but my other four posts were immediately visible in their respective subreddits.)

As does the fact that we are reliant on Storj to tell us how many nodes there are and how much storage is available/used. Storjdan had to chip in on a Reddit thread where I was correcting the head of Sia on Storj capacity and usage with actual figures, as they could not be found publicly.

I also found in my research the approximately 30% overhead (I experienced around 35%) from downloading unused shards of data (the other 10 of the 39). As far as I know this was not documented before, but I have since seen it on a to-do list on this forum, so at least it is now in the open and being worked on. Still, most people will not be aware of it and will not be receiving the value that has been advertised for so many months already (50% of AWS).

I will stop there, but you get the idea. I see extremely slow improvement, but I think it's the excuses, or the lack of acknowledgement of a problem, that rubs me up the wrong way more than anything.

As far as I know, when you download data back, you only download the first 29 pieces and no more; you just get information on 10 extra pieces in case some of the first 29 are offline for some reason. It is possible that all 39 downloads are started and, once the first 29 complete, the rest are cancelled, so that you pay only for fully downloaded pieces. So there is some overhead in transferred data, but not in price.

Also, it is not possible to make things both fast and reliable by doing everything at once; if something goes wrong, it is very hard to find out what. Every step must be checked and tested.

Most of the earlier documentation has this number at 35. They may lower it again in the future, but that's obviously something you want to be careful about. I don't agree that this should be set by the customer, though. The customer has an SLA with Tardigrade/Storj Labs, and it's up to Storj Labs to meet that SLA. Since the costs of repair aren't paid by the customer, there would be no downside for the customer to setting the repair threshold to 79 and triggering a repair on every piece lost; but that would be really expensive for Storj Labs. As for transparency: the information is out there if you look for it. It's been a while since I dug into the config files for the uplink or gateway, but if I remember correctly the settings are in there as well.

I have no experience with Storj on Reddit, but they could definitely suppress negative posts here on their own forum if they wanted to. That never happens, though; not even the most unreasonable falsehoods get removed. So I don't know what happened on Reddit, but I'm certain you weren't intentionally censored.

As for reporting on the number of storage nodes: every town hall has a ballpark figure, and I don't see a problem with that. I don't think a difference between 7900 and 8300 should matter much; as long as we have the order of magnitude, I'm happy. I'd like to see more, but stats like that wouldn't exactly be at the top of my priority list.

The overprovisioning of downloads was part of the design from day one. It's in the white paper and has been in several blog posts, so it shouldn't really be a surprise. I'm currently not entirely sure what the impact on the service bill would be. If you feel that should be better clarified, that's fair feedback.

Please don’t. Feedback is valuable and in my experience works to make the product better. Rather than posting it all in one topic here though, you may see better results if you suggest features in the designated voting areas of this forum. That gives everyone a chance to vote on them and helps Storj Labs prioritize them as well.


Sorry, I disagree. I saw more bandwidth being used and charged for in my Tardigrade control panel than I downloaded. When I investigated, I realized that 39 pieces are downloaded, and it made sense.

I think it's fair enough; the alternative would be to not pay the slowest hosts, which would of course be unfair and probably impossible to police. My only gripe is with making this transparent to the user: it's not 50% of AWS, it's more like 65%.
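The arithmetic behind a figure like that is quick to check. The 39/29 split is from the discussion above; the assumption here (per this poster's experience, and disputed earlier in the thread) is that all fetched pieces are billed, not just the 29 needed:

```python
# Back-of-envelope check of the egress overhead discussed above.
# Assumption (per this poster's experience): all 39 fetched pieces
# are billed, not only the 29 needed to reconstruct the segment.
pieces_fetched = 39
pieces_needed = 29

overhead = pieces_fetched / pieces_needed - 1
print(f"Transfer overhead: {overhead:.1%}")  # ~34.5%

advertised_ratio = 0.50  # "50% of AWS", as advertised
effective_ratio = advertised_ratio * (1 + overhead)
print(f"Effective cost vs AWS: {effective_ratio:.0%}")  # ~67%
```

Under this assumption the effective egress cost comes out around 67% of AWS, in the same ballpark as the "more like 65%" quoted above.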

I think this question deserves its own thread and a little more explanation from Storj.
I think tomorrow is a working day, so we will find some answers.

It has got its own thread; it's part of the known issues we are working on.

It’s point #3

So I am glad to see that it is now openly acknowledged. I had never seen it mentioned in these forums before, and certainly nowhere a customer would see it prior to signing up.

I didn't suggest the repair threshold should be set by the customer. (When I said it's their concern, I meant the satellites.)

I guess I am used to high amounts of transparency offered by fully distributed systems such as Sia and 0chain. Storj have a long way to go to earn my trust.

My areas of concern are not really 'voting topics', but, as you suggest, I will endeavour to post separately in the appropriate areas so the topics get the attention they deserve.