Hi everyone! I have recently received the notification email on future changes to the pricing model and the new distinction between “Active archive” and “Global collaboration” (see here).
NOTE: Edits are in italics.
I have many questions and comments on this. This post has gotten way longer than I originally anticipated.
Which of the two “tiers” advertised on the website (“Global collaboration” and “Active archive”) is the equivalent of the current Storj system? Is the current default the “archive” and the new tier is an even faster version, or is the current system the fast version and there’s now an additional “semi-cold storage” archive option on the table?
What effect will this change have on the current system? Since one tier is now specifically advertised for speed, my initial suspicion is that the “archive” tier will be much slower in egress. In other words: if I don’t do anything, which tier will my data (eventually) be moved to by default? (This can apparently be chosen on project level.) The current pricing is apparently guaranteed only until Oct 2026. Can I expect deteriorating egress speeds now that there’s an extra “fast” tier?
I may be missing something here but this appears to me to be a massive price hike for basically every user except maybe people who have very high egress compared to the amount of data they store. A few simple calculations (segment fees are not included here; a small script reproducing these numbers follows the three cases):
Case A: Cold storage (low egress)
Total storage: 10 TB
Egress/month: 0.5 TB
Old price: 10 * $4 + 0.5 * $7 = $43.50
New price (standard): 10 * $15 + 0.5 * $0 = $150.00 (+245%)
New price (archive): 10 * $6 + 0.5 * $20 = $70.00 (+61%)
Case B: Average usage (medium egress)
Total storage: 10 TB
Egress/month: 5 TB
Old price: 10 * $4 + 5 * $7 = $75.00
New price (standard): 10 * $15 + 5 * $0 = $150.00 (+100%)
New price (archive): 10 * $6 + 5 * $20 = $160.00 (+113%)
Case C: Active usage (high egress)
Total storage: 10 TB
Egress/month: 20 TB
Old price: 10 * $4 + 20 * $7 = $180.00
New price (standard): 10 * $15 + (20-10) * $20 = $350.00 (+94%)
New price (archive): 10 * $6 + 20 * $20 = $460.00 (+156%)
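For transparency, here is a minimal Python sketch reproducing the three cases above. The rates are the ones quoted in this thread; the standard tier’s “included egress” is modeled as capped at the stored amount (see point (a) further down), and segment fees and small-object rounding are deliberately ignored:

```python
# Hypothetical cost model based on the rates quoted in this thread.
# Old: $4/TB storage + $7/TB egress.
# New standard ("Global collaboration"): $15/TB storage, egress included
#   up to the stored amount, $20/TB beyond that.
# New archive ("Active archive"): $6/TB storage + $20/TB egress.

OLD_STORAGE, OLD_EGRESS = 4.0, 7.0
STD_STORAGE, STD_OVERAGE = 15.0, 20.0
ARC_STORAGE, ARC_EGRESS = 6.0, 20.0

def old_price(storage_tb: float, egress_tb: float) -> float:
    return storage_tb * OLD_STORAGE + egress_tb * OLD_EGRESS

def standard_price(storage_tb: float, egress_tb: float) -> float:
    overage = max(0.0, egress_tb - storage_tb)  # egress included up to stored TB
    return storage_tb * STD_STORAGE + overage * STD_OVERAGE

def archive_price(storage_tb: float, egress_tb: float) -> float:
    return storage_tb * ARC_STORAGE + egress_tb * ARC_EGRESS

for case, egress in [("A (cold)", 0.5), ("B (average)", 5.0), ("C (active)", 20.0)]:
    old = old_price(10, egress)
    for tier, price in [("standard", standard_price), ("archive", archive_price)]:
        new = price(10, egress)
        print(f"Case {case}, {tier}: ${new:.2f} ({(new / old - 1) * 100:+.0f}%)")
```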
Why is there mention of “global” and “regional” tiers in some of the FAQs? This distinction has never been made before, and its absence was the single biggest advantage of Storj compared to practically all other object stores. What does “regional” even mean in this context, when I have no control over where my data is distributed and from where it is retrieved when I download it?
I understand the intention behind moving to a more “versatile” pricing model; however, I think it undermines one of the (in my opinion) greatest strengths of Storj’s offering: simplicity. Compared to the overcomplicated pricing systems of AWS and Azure, it was a great relief to have very simple, transparent pricing with a fixed rate for storage and egress. I see this multi-tier system as a first step towards Storj becoming yet another unnecessarily complicated cloud service with various tiers, storage types, pricing levels, etc. Not very appealing to me, and certainly not a USP.
What’s going on with the sudden focus on media production? I’m not keeping track of all the news and blog posts, so maybe I’ve missed this being a big point before. But in my view, Storj has been a unique, “generic” cloud storage solution for any use case, just like S3 or GCS. Now it appears to be geared towards (and priced for) “media production” companies. Why is the whole pricing page suddenly aimed at this one new customer segment? This makes Storj a very narrowly marketed service.
A few notes on the new pricing page:
(a) As pointed out by others elsewhere, the claim that egress is included is misleading if it is actually capped at the total amount stored.
(b) Giving all prices per TB but the very high egress price per GB ($0.02/GB instead of $20/TB) so that it appears lower is a bit petty, to be honest.
(c) The strengths of each tier, and especially its purpose, could be better communicated. While “active archive” is sort of clear, “global collaboration” is very vague. In the end, this appears to me to be a typical hot storage/cold storage distinction.
I think you’re on to something here. Isn’t the point of Storj to undercut, let’s say, Backblaze B2 in price? B2 is now cheaper (for most people). Yes, it’s not as globally distributed, but still. We’re headed in the wrong direction.
Wow, that’s quite the price hike… I use Storj for nightly Proxmox backups. Currently paying $4/TB. I never generate egress traffic unless a VM fails and on-site backups fail (hopefully never). Because the files are rotated out nightly, the archive tier’s 30-day minimum retention rules it out, so I’m now looking at $15/TB. That’s nearly a 4x increase? I guess I’ll have to rethink my backup strategy here…
Please read the whole Frequently Asked Questions section on the new pricing page.
There are some details that were not considered in your comparison: the amount of egress actually included in the Global Collaboration tier, the charges for small objects that exist on both the Global Collaboration and Active Archive tiers, and the fact that on the Active Archive tier, deleting files before the 30-day retention period is up means you are charged as if you had not deleted them early.
@vadim: Even if someone uses Storj purely for backups, the new “Active Archive” is still a 50% price increase compared to the previous $4/TB. And didn’t they just reduce the storage price from $5/TB to $4/TB a year or two ago? And now it goes back up to at least $6/TB? Seems rather random.
@heunland: You’re right, I missed the egress limit in the calculation. I’ve updated the calculation of Case C, and it makes things even worse. I don’t know what charges for small objects you’re referring to. I thought the segment fee was removed? Do you mean the new rule that very small objects (below 50 KB or 100 KB, apparently depending on the tier) are now “rounded up” to that size? I don’t think that makes a big difference in the calculation, though. But depending on what data you store, it may cause an even bigger price jump; a quick estimate follows below.
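To give a rough idea of when the rounding does matter, here is a quick back-of-the-envelope script. The 100 KB minimum for Active Archive comes from the FAQ discussion further down in this thread; the workload (a million 10 KB objects) is purely an illustrative assumption:

```python
# Estimate how small-object rounding inflates billed storage.
# Assumption: objects below the tier's minimum billable size are billed
# as if they were that size (100 KB for Active Archive, per the FAQ).

TB = 1000**4  # bytes per (decimal) TB

def billed_bytes(object_size: int, min_billable: int) -> int:
    return max(object_size, min_billable)

# Illustrative workload: one million 10 KB objects on the archive tier.
count, size, minimum = 1_000_000, 10_000, 100_000
actual_tb = count * size / TB
billed_tb = count * billed_bytes(size, minimum) / TB
print(f"actual: {actual_tb:.3f} TB, billed: {billed_tb:.3f} TB "
      f"({billed_tb / actual_tb:.0f}x inflation)")
```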
And regarding the retention period, I don’t think it matters much for this very simple calculation. Especially if you use the service just for long-term cold storage, I presume you keep most of your data over many months or even years, so a retention period of 30 days doesn’t make a huge difference. But again, all it does is increase costs even further.
As mentioned in my original post, there are many things about this new pricing scheme that are strange and poorly communicated, especially its effects on existing customers. If I understand correctly, I have to switch whole projects to the new pricing system. I would have preferred it at bucket level, since I have many buckets in the same project that I use for many different things.
We have reached the point at which we are able to outperform other cloud storage solutions. We are now able to address a new use case, and these new customers are willing to pay more for a good product: Production Cloud by Storj | Cloud Post-Production made easy
There are more changes incoming. The new global tier is more than just a pricing change. More on that will follow in my next answer.
Are you uploading the whole thing every night from scratch?! Then the new pricing is doing its job of preventing abuse like that. Why don’t you do incremental backups?
Yes, full snapshot backups. My choice of method is not the discussion point here; I am aware there are much more efficient methods. I’m not sure what they’re trying to prevent. Ingress bandwidth on the SNOs? I don’t know about you, but I wish I’d see more ingress bandwidth on mine. No worries though, I’ll just change it to a monthly upload. I was only doing it daily 1) because I could and 2) because the Storj concept is neat.
The long-term plan is to have three tiers: global, regional, and archive. The current system would match the regional tier. If you upload a file, most pieces will end up on nodes close by because they win the long-tail race. Some pieces might end up on nodes further away. If needed, we can add geofencing or SOC 2 compliance. These features would fall under the regional tier.
For the archive tier we might reduce the long tail. That would make this tier a tiny bit slower than the regional tier. We still want to offer high durability, so I don’t expect a big difference in the overall RS settings. Just a shorter long tail sounds like the most likely outcome.
The global tier will look a lot different compared to the current Storj system. As explained in my previous answer, we are targeting a new use case. Media and entertainment companies are globalized now. In one location in the US they might record some video clips, while the video editing happens in the EU. The current system is already fast, but there is room for further improvements. If we upload at least 29 pieces to nodes in the US plus 29 pieces to nodes in the EU, that would allow full-speed downloads from nearby nodes in both locations. That is what the new global tier should be. (US & EU are just placeholders here. Ideally the global tier uploads the file with enough pieces in all regions.)
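To illustrate why 29 pieces per region is the key number here: with a k-of-n erasure code, any k pieces are enough to reconstruct the file, so a region that holds at least k pieces can serve the whole download locally. A toy model (k = 29 from the post above; the placement numbers are made up):

```python
# Toy model of the proposed global-tier placement. With a k-of-n erasure
# code, ANY k pieces reconstruct the file, so a region holding >= k
# pieces can serve a full-speed, region-local download.
# k = 29 is taken from the post; the placements below are illustrative.

K = 29
REGIONS = ["US", "EU"]  # placeholders, as in the post

def local_download_possible(placement: dict, region: str) -> bool:
    """True if the region alone holds enough pieces to reconstruct."""
    return placement.get(region, 0) >= K

current_system = {"US": 50, "EU": 15}  # long tail concentrates pieces near uploader
global_tier = {"US": 29, "EU": 29}     # >= K pieces uploaded per region

for name, placement in [("current", current_system), ("global tier", global_tier)]:
    for region in REGIONS:
        print(f"{name:>11} | {region}: local download = "
              f"{local_download_possible(placement, region)}")
```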
Right now these changes are not in place. Short term, it looks like we will start with the global and archive tiers. The regional tier might follow later. There are some ideas floating around about the new node selection for the global tier. We need to test them first. I don’t know the exact timeline, and even the information above might change at any time. So short term, I would answer your question with “there is no performance difference between archive and global until we implement these changes”.
I hope this answers most of your questions and not just the first one. Let me know if you have any follow-up questions. Please be aware that this information is not set in stone. It is just my understanding of the overall direction we are heading.
One way of pricing services is to charge for everything: every API call, every byte transferred, every use of the infrastructure. This is the fairest way (you pay for exactly what you use) but very cumbersome.
Another is to charge a flat fee that covers average usage and to discourage abuse with rules.
And everything in-between. Most archival storage tiers have low storage cost but long minimum retention charge. Most hot storage services have no minimum retention charge – but higher storage fees. Ultimately you are paying for using the infrastructure either way.
If your use case is uploading and deleting files daily, you need hot storage; archival storage is just not for you, and the provider does not want you to use it in such a way. The minimum retention fee is there to guide your choice to the correct storage tier. In other words, they give you a discount on storage if you promise to store files for a long time. But if you don’t plan to store files for a long time, then don’t use that tier.
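A toy illustration of how a minimum retention charge works, assuming (as described above) that deleting early is billed as if the object had stayed for the full 30 days; the $6/TB-month archive rate is taken from this thread:

```python
# Minimum retention charge: objects deleted before MIN_DAYS are billed
# as if they had been stored for the full MIN_DAYS.

MIN_DAYS = 30
RATE_PER_TB_DAY = 6.0 / 30  # $6/TB-month archive rate quoted in this thread

def billed_days(actual_days: float) -> float:
    return max(actual_days, MIN_DAYS)

for days in (1, 10, 30, 90):
    cost = billed_days(days) * RATE_PER_TB_DAY
    print(f"stored {days:>2} days -> billed for {billed_days(days):.0f} days "
          f"= ${cost:.2f}/TB")
```

For the nightly-rotation case above, every day’s upload would be billed for a full month, which would make the archive tier far more expensive than the $15/TB hot tier for that pattern.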
Seems you didn’t account for the segment fee in your cases under the current pricing.
It often drives costs, for example when you use a Cloud Sync task in TrueNAS: millions of tiny files uploaded as millions of tiny segments. Of course, if you use a TrueCloud Backup task (which is restic under the hood), segments are used more efficiently; however, they still exist.
In the new pricing tiers there is no segment fee anymore. Yes, there is still rounding for small objects, because our metadata costs have not disappeared, but it will not affect objects bigger than 100 KB on Active Archive, which is designed exactly for backups, where you store data and rarely download it back.
By the way, if you need one file from a full snapshot, you may be forced to download the whole snapshot, depending on the backup tool used (for example, restic can restore a single file without downloading the full snapshot).
Isn’t that something that has even been reinforced by the new node selection algorithm introduced just a year ago?
I have always wondered about piece distribution and its usefulness for users, because “close to the uploader” is not always the preferred option for every use case.
I believe uploads should ideally be fast. This means a minimal number of uploaded pieces, fast nodes nearby, long-tail cancellation and, depending on upload resources, either uploading in parallel with expansion or through the gateway without it. This should give users a great, fast upload experience. (A toy sketch of long-tail cancellation follows after this post.)
The same is true for downloads; however, the download location is not necessarily where the upload happened. It might even change after the data has been uploaded (just think of a camera-to-cloud upload in Turkey, intended for the producer in Los Angeles, that has to be made available on the fly for post-production improvements by a freshly contracted company in New Zealand).
So what I am trying to say is that at upload time a user might not know the best distribution for their files: close to themselves, close to one or more other locations, or even varying locations.
My idea of an ideal solution would be that a user always gets the fastest possible upload and can choose at bucket level what distribution they need, more global or more regional. And for the best user experience, this would mean they can change it anytime and the network takes care of redistributing pieces to suit the intended coverage, so a user has to upload only once.
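Since long-tail cancellation comes up both here and in @littleskunk’s answer above, here is a minimal toy sketch of the idea: start more piece uploads than needed and stop waiting once the fastest k have finished. The latencies are simulated; this is not Storj’s actual uplink code:

```python
# Toy long-tail cancellation: launch more piece uploads than required and
# stop waiting once the fastest NEEDED uploads have completed.
# Simulated latencies only; not Storj's actual uplink implementation.

import concurrent.futures
import random
import time

NEEDED, STARTED = 29, 39  # need k pieces; extras absorb slow ("long tail") nodes

def upload_piece(node_id: int) -> int:
    time.sleep(random.uniform(0.01, 0.2))  # simulated per-node latency
    return node_id

with concurrent.futures.ThreadPoolExecutor(max_workers=STARTED) as pool:
    futures = [pool.submit(upload_piece, n) for n in range(STARTED)]
    finished = []
    for fut in concurrent.futures.as_completed(futures):
        finished.append(fut.result())
        if len(finished) == NEEDED:
            # Long tail reached: stop waiting for the stragglers. (In this toy,
            # cancel() only prevents uploads that haven't started yet; a real
            # client would abort in-flight transfers.)
            for f in futures:
                f.cancel()
            break

print(f"kept the fastest {len(finished)} of {STARTED} piece uploads")
```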
@littleskunk: Thanks for the explanation. So what you now call “regional tier” is basically the “classic” Storj system? If this 3-tier structure is indeed the goal, then why start by announcing two tiers and not even mentioning the default “regional” version anymore?
And I’m neither using Object Mount nor am I interested in media production. Still, this is now used as an argument for a 2-3x higher price I’ll soon have to pay?
@Alexey: That’s right, I haven’t considered segment fees. However, I remember when first starting to use Storj and reading through the documentation, it was said that the segment fee was usually negligible and would not affect most use cases. And that’s exactly what I see on my invoices: the segment fee is just ~10% of the total monthly costs, despite sometimes having large numbers of relatively small files. In months with high egress, it may be < 3%. Sure, there are other use cases where segment fees are more significant, but I doubt that would fundamentally change the example calculations above for most users. After all, we’re looking at a typical price increase of 100-150%.
@jammerdan: I like that idea. It’s sort of a “soft regional” approach: instead of the strict boundaries of AWS etc., you adjust how important different regions are for you. A simple “regional to global” slider or similar would already be a great option. However, I think this idea is very difficult to implement, since it could dramatically increase network processing effort and traffic when millions of segments need to be moved around every day.
And a few general remarks:
The naming of “regional tier” (if that’s really being planned, as suggested by @littleskunk) is really poor, in my opinion. If I understood correctly, this is just the current default system, so the “regional” placement happens merely by chance through the distribution of pieces.
This name would cause confusion because in the cloud domain people associate this with AWS regions and that comes with all the caveats Storj is actually trying to avoid (having to manage services across regions, different egress costs depending on location etc.).
The whole topic of this resulting in a drastic price hike for most users has not really been addressed here so far. Same for the communication of this change and the ambiguous wording in the pricing explanation (see points 5 and 7 in my original post). And the “active archive” tier is still much more expensive for the typical cold storage user (see point 3, Case A), presuming you don’t just store millions of tiny files.
If this change is also meant to reduce “abuse”, as suggested by @arrogantrabbit, it simultaneously enables other forms of it. Users may be more cautious about egress or overall storage volume due to higher prices, but with the removal of segment fees you allow exactly the kind of pattern that was previously so heavily advocated against: uploading large amounts of small files. I’m not familiar with the technical implementation, but why is the concern about small segments suddenly gone? Maybe @Alexey can say something about this?
I’ve updated the original post to reflect some of the things discussed here so far.
Changing the download location wouldn’t be a problem. We can upload enough pieces to all locations and don’t need to pick just two.
It will be possible to pick the tier at bucket level. Switching the tier after files have been uploaded is currently not planned; that would cause a lot of repair traffic. It is not impossible, and the system can handle it. It is just a question of how much it would cost us to allow that and how many customers need it. Most media and entertainment companies will pick the global tier from the beginning and never ask for a downgrade. Let’s scope that out for now and look into it in a year or so. If enough customers need to switch pricing tiers, we can implement a solution for that.
That is my understanding, yes. I might be wrong here, so let’s wait for more details about the regional tier.
The regional tier isn’t finalized yet; that’s all. We had internal discussions, especially around the pricing, and the easiest way to resolve them was to scope it out for now. We need a bit more time to work on that offering. I would expect a better understanding once we have the improved node selection for the global tier ready.
It might be a bit early for that. You can continue with the legacy pricing for another year, and the regional tier might be a better fit for you. If not, you can also close your account at any time. This pricing change isn’t going to work for all customers.