I’m currently trying to decide between Storj and Wasabi for both personal and business use, with more than 30 TB of data.
The following are the things keeping me back regarding Storj:
Storj uses satellites to process the data. What level of redundancy do they have? Could there be a case where a satellite goes offline, its building catches fire, or its city comes under attack, and we completely lose all of our data forever? (Without the satellite’s metadata, is there no way to map the data that exists on the nodes?)
The per-segment fee of $0.0000088 is a bit weird; I wish it didn’t exist. Any unpredictable pricing is far from ideal (that’s what I like about Wasabi). Also, it’s not exactly clear what a segment is.
A satellite is already a collection of multiple servers in different locations, so if one server burns down you wouldn’t even notice. Let’s say for some reason they all burn down at the same time. Your data would still be stored in the network and could be recovered as soon as the satellite gets back online. One day we might also implement a way to migrate or copy data to another satellite.
The segment fee is in place to make small files expensive; ideally you store big files. Some great tools to manage that are Duplicati and Restic, which combine many small files into larger archives. I wouldn’t worry too much about it. The free-tier coupon should cover most of the costs anyway.
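For illustration only, here is a minimal Python sketch of the packing idea. Duplicati and Restic do this far more robustly (chunking, deduplication, encryption); the file names and sizes below are hypothetical:

```python
import io
import tarfile

def pack_small_files(files):
    """Bundle many small in-memory files into one gzipped tar archive.

    `files` is a dict of {name: bytes}. Returns the archive as bytes,
    so it can be uploaded as a single object instead of one object
    (and thus one billed segment) per small file.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# 1,000 tiny files become one uploadable object instead of 1,000 segments.
archive = pack_small_files({f"attachment_{i}.txt": b"hello" for i in range(1000)})
```

The trade-off is that you have to download and unpack the whole archive to retrieve a single file, which is why this suits backups better than live forum attachments.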
As far as I know, Wasabi also has some hidden fees, so it is hard to tell which one is actually cheaper.
A specific Satellite instance does not necessarily constitute one server. A Satellite may be run as a collection of servers and be backed by a horizontally scalable trusted database for higher uptime. Storj operates clusters of Satellites in regions, with all Satellites in a region sharing a multi-region, distributed back end.
For more detail on ‘what a segment is’ this post should help…
As @littleskunk noted, the segment fee was introduced because some customers uploaded a large number of tiny files, which created excessive overhead in the satellite metadata databases.
Okay, that makes sense. I wish there were a workaround.
After doing some calculations, it seems that Storj is far from optimal for certain storage cases.
For example, we have some buckets (currently running on Backblaze) which hold millions of files. One bucket has 10M files.
These are all small files: forum attachments, user-generated content, and similar.
The price for this type of storage will be much higher with Storj.
I do like the idea of Storj a lot, however, and I will keep supporting it. I will be using it to store backups (7z files). I’ve also set up two nodes (I don’t expect to make money; it’s just plain cool).
An example of the segment fee calculation: Usage Limit Increases | Storj Docs
The 10M small files (less than 64 MB each) would generate 10M segments.
The price per segment-month is $0.0000088, and 50,000 segments are included, so the fee would be (10,000,000 - 50,000) * $0.0000088/mo = $87.56/mo.
If you could compact them into files no smaller than 64 MB, you would have roughly 156,250 segments (10,000,000 * 1 MB / 64 MB = 156,250), which would cost (156,250 - 50,000) * $0.0000088/mo = $0.935/mo.
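The arithmetic above can be checked with a short Python calculation. The fee and allowance figures come from this thread; the assumed average file size of 1 MB is the same assumption used above:

```python
import math

SEGMENT_FEE = 0.0000088   # USD per segment-month (figure from this thread)
INCLUDED = 50_000         # segments covered by the free allowance
SEGMENT_MB = 64           # maximum segment size in MB

def monthly_fee(segments):
    """Segment fee per month after subtracting the included allowance."""
    return max(segments - INCLUDED, 0) * SEGMENT_FEE

# Case 1: 10M files of ~1 MB each, stored as-is.
# Every file under 64 MB is its own segment, so 10M segments.
as_is = monthly_fee(10_000_000)

# Case 2: the same 10M MB of data packed into archives of >= 64 MB,
# so each 64 MB of data becomes one segment.
packed = monthly_fee(math.ceil(10_000_000 * 1 / SEGMENT_MB))

print(f"as-is:  ${as_is:.2f}/mo")   # ~$87.56/mo
print(f"packed: ${packed:.3f}/mo")  # ~$0.935/mo
```

Packing cuts the segment fee by roughly two orders of magnitude, which matches the figures above.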