This is so funny to me. I used to argue in this forum that Storj can’t compete with AWS (back in the day, that was the goal, to be faster and cheaper and become bigger than AWS). I argued that the main competition of STORJ is Backblaze, just because from a technological standpoint, STORJ probably can’t be as fast as AWS S3 or a CDN.
That is why I always argued for something exactly like “Active Archive”.
Now we finally have it.
But the world moved on.
The $/TB storage pricing is exactly the same, but on Backblaze B2 egress is free up to three times the amount stored, while Storj charges $20 per TB ($0.02 per GB).
There was very little data in the Storj network a few years ago, and that hasn't changed. I doubt it will change with this unattractive offer.
I work with a media-adjacent company. I see that it is super convenient to control AWS storage classes with lifecycle rules: “We expect this dataset to be in active use for 14 days, refreshed after each retrieval, then we want to keep it for the subsequent 2 years in glacier to have it available just in case, then kill it.” Bam, instant savings. Automating reuploads would be a big chore.
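For what it's worth, the lifecycle described above maps fairly directly onto an S3 lifecycle configuration. Here is a minimal sketch using boto3; the bucket name, prefix, and rule ID are placeholders, and the "refreshed after each retrieval" part is not something lifecycle rules handle on their own (it would need a separate restore/copy step):

```python
# Sketch of the lifecycle described above: 14 days hot, then Glacier,
# then deletion after roughly two years. Names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-media-bucket",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "hot-14d-glacier-2y",
                "Filter": {"Prefix": "datasets/"},  # placeholder prefix
                "Status": "Enabled",
                # After 14 days in the standard class, move to Glacier.
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
                # Delete roughly two years after creation.
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```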
If anything, I think a “downgrade” action should be possible and cheapish for Storj, because that’s just removing blobs from some nodes. And storage downgrade is the most common path anyway.
I know. I loaded up enough STORJ tokens for a few years to get around the minimum charge, and now, between the higher rates and the fact that anything I download will cost a lot more than B2, I'm second-guessing my decision.
Has anyone canceled and got a refund of the USD balance in their account from sending/converting STORJ deposits? I really don't see a benefit over Backblaze B2 now; it costs more. This even makes Telnyx's S3 look attractive.
You are only focusing on price.
You are not seeing benefits because price is the only parameter you consider.
There are cheaper storage providers than Backblaze. I feel that even at this pricing Storj is underpriced. If you don't need what it offers, then it's the wrong product for you; you shouldn't pay for features you don't need. If Backblaze works better for you, use Backblaze.
I would not touch Backblaze with a rotten stick; it boggles my mind that you are even considering it. But we clearly have different requirements, and that's OK. It does not mean the pricing is wrong.
I use Storj (for its speed and price; I don't have lots of small files or many segments), with Backblaze and Telnyx as backups.
I get tired of Storj changing prices and terms. I get that it's a small, growing company, and most companies that go through this end up pricing out the initial people who got early deals. The small companies I've used either locked me in at a low price with legacy features or forced a price increase well above their initial prices. I just hope Storj will stabilize its prices and terms, but I understand, given the company's current size and stage.
I understand that, but everyone, including Backblaze and Wasabi, is doing the same exact thing. Backblaze raised prices recently, and Wasabi managed to raise prices from $3.90 to $6 over the span of two years, and cap egress, while they were growing.
Storj gives existing customers a year on the current plan; I think that's quite generous.
I don’t see anything wrong with that. If anything, I’d be happy if the small company I supported early matured and now can charge everyone grown-up prices. I would not want a discount from them just for showing up early in the past.
I’m sure they will keep adjusting everything to maximize growth and profit. And that’s great.
Unfortunately most TrueNAS users were affected, until the TrueCloud Backup task was implemented. But some still use Cloud Sync, because they want to see their objects in the Storj Console as well. With restic backups that's not possible: you need to connect to the repository and use either the TrueNAS UI from the same TrueCloud task, or restic from the CLI or with UI add-ons.
They are not gone; the metadata is still costly, so we have a Minimum Object Size: 50 KB for Global Collaboration (all smaller objects are accounted as 50 KB of storage each) and 100 KB for Active Archive (all smaller objects are accounted as 100 KB of storage each).
Thus you may use objects larger than that, but very small objects will be "rounded" up to the relevant Minimum Object Size.
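As a rough illustration of the rounding, here is a minimal sketch; `billed_bytes()` is a hypothetical helper, not a Storj API, and the exact KB vs. KiB accounting may differ:

```python
# Minimum Object Size rounding: objects smaller than the tier minimum
# are accounted as the minimum; larger objects are billed at actual size.
MIN_OBJECT_SIZE = {"global": 50 * 1024, "archive": 100 * 1024}  # ~50 KB / ~100 KB

def billed_bytes(object_size: int, tier: str) -> int:
    """Return the storage size an object is accounted for in a given tier."""
    return max(object_size, MIN_OBJECT_SIZE[tier])

# A 10 KB thumbnail on Active Archive is accounted as ~100 KB,
# while a 5 MB photo is accounted at its actual size.
print(billed_bytes(10 * 1024, "archive"))        # 102400
print(billed_bytes(5 * 1024 * 1024, "archive"))  # 5242880
```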
As SNOs, many of us wanted a price increase for clients, because Storj has unique features that nobody else has. So now I can only applaud this decision.
For us it means, I hope, no more payout cuts; for Storj it means it becomes profitable and can sustain a long-term business.
For clients… well, your data is protected against Thanos attacks, asteroid impacts and alien invasions. You pay a premium for a premium service.
Soon (relatively), we will spread the cloud in the entire solar system and the data will be protected with 100 nines.
True. But the suggestion was to increase prices back when everybody else was doing it, when every customer would have understood it because of the high inflation rates.
Now, with Active Archive, we see storage at $6/TB vs. $4/TB and egress at $20/TB vs. $7/TB.
But I fail to see the additional features that justify that increase. OK, the segment fee has been eliminated, but that was supposed to be negligible anyway.
Instead we now have a minimum storage period and a minimum object size for basically the same product?
And especially for backups the egress fees are bad, because if you need to restore hundreds of TB then …
In the last Town Hall they said they are already, or are about to become, cash-flow positive. At least it sounded like they were referring to the pricing that existed back then.
I haven’t delved into all the technical details, so I apologize in advance if I don’t understand something. I’m interested in the service exclusively from a customer point of view.
So what happens if the gateway/satellite registry is damaged or destroyed, say due to some major environmental crisis such as a flood, an earthquake, or nuclear war?
As far as I understand, the storage nodes managed by node operators only contain encrypted data segments, but there should still be a central registry with the client's metadata/projects/buckets/keys etc.
As far as I understand, it is this data that represents the main commercial value of the company, and therefore control over it belongs entirely to them. So how is the reliability fundamentally different from B2, where instead of your storage nodes there are data centers with RAID arrays? If all this data is useless without authorization from the company, what difference does it make where it is located and how many hundreds of times it has been copied? What is the point of all this?
In short: here in Storj, every service is distributed; it's not one server, but hundreds across the globe. Even the database. So yes, the satellite is one point of access to your data (you actually connect seamlessly to the instances nearest to your location, so you may register on EU1, yet pieces of your data and metadata can be in the US or Australia, etc., unless you restrict it), but the data itself is decentralized. You cannot take a server from a DC with your data and copy it; you would need to download it from thousands of nodes (and not only storage nodes, but also satellites, if you have small segments).
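To make the "thousands of nodes" point concrete, here is an illustrative sketch of k-of-n erasure coding as used by distributed storage like Storj; the Reed-Solomon parameters below are assumed example values, not necessarily the network's current settings:

```python
# Illustrative k-of-n erasure coding numbers (assumed example values).
k, n = 29, 80           # any k pieces out of n are enough to rebuild a segment
expansion = n / k       # raw storage overhead paid for this redundancy

# Each node holds at most one piece of a segment, so anyone holding fewer
# than k pieces (one datacenter, one flooded region) recovers nothing,
# while the network tolerates losing up to n - k pieces per segment.
print(f"expansion factor: {expansion:.2f}x")
print(f"pieces that can be lost before repair is needed: {n - k}")
```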
Whatever use case you have, and for whatever reason you think Storj Active Archive has a more compelling offer than Backblaze B2, somehow the network still has bad data-growth numbers.
So while I would be interested in your use case if you don't mind sharing it, my guess is that this is a niche.
@Alexey: Sure, the “small file charge” is not gone entirely but we’re now talking only about tiny files under 50-100 KB. Before, you charged “extra” for anything that was smaller than the 64 MB segment size. So users had to be very careful even with small but not tiny files (photos, images, larger documents etc.). That’s not the case anymore under the new pricing scheme. Therefore, I’d expect a significant rise in the number of segments for many (esp. new) users.
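A back-of-the-envelope sketch of that difference, using 1,000,000 files of 1 MB each (small, but well above the new 50-100 KB minimum); the old per-segment fee below is an assumed placeholder from memory of the previous price sheet, so check an actual invoice for the real figure:

```python
# Compare the old per-segment surcharge with the new minimum-object-size
# scheme for 1,000,000 files of 1 MB each (about 1 TB total).
n_files = 1_000_000

old_segment_fee = 0.0000088          # assumed $/segment-month (placeholder)
old_extra = n_files * old_segment_fee  # each <64 MB object was one segment
print(f"old scheme segment surcharge: ${old_extra:.2f}/month")  # ~$8.80

# New scheme: 1 MB objects are above the minimum object size, so they are
# billed purely on stored bytes, with no per-object surcharge.
print("new scheme surcharge for these files: $0.00/month")
```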
And regarding the pricing discussion (@snorkel and @jammerdan): I don't have anything against a price increase, since Storj is currently really very low-cost. I've been wondering for a while how long the low prices would last given the unique solution that Storj offers, the good performance, etc. I expected a price increase, but not a doubling or tripling of monthly fees alongside (as of now) a complete abandonment of the current pricing scheme and tier. This is even stranger when considering that prices were adjusted down (!) from $5/TB to $4/TB a year or so ago. And now we get this massive price hike.
I agree with @jammerdan that the Active Archive tier is just much more expensive with no additional features justifying the price. I was quite happy paying for exactly what I used via two flat fees (storage and egress). Instead, the new Global tier almost quadruples the storage price (from $4/TB to $15/TB) and tries to “fool” you into thinking that egress is free (which it isn’t really when you read the details). All the while, the Archive tier increases the storage price by 50 % and the egress price by almost 200 % while only adding more limitations (minimum retention time).
I’m ok with having multiple tiers but this new pricing scheme seems half-baked to me.
Time is passing faster than you might notice. We implemented a lot of improvements, to the point that we can now talk to customers who are willing to pay more for a better service. One of the big improvements about a year ago was the new node selection. We didn't stop there: over the past year we have improved node selection more and more. The public network didn't notice, because so far it was all done in the Storj Select network. The point is, customers who asked us a year ago just got a "yeah, in theory we can do better, but we don't have a solution for that problem yet." Now this turns into "sure, our node selection toolkit has some options for that. Let's try it; it should solve your problem."
The global tier is for customers who want more performance than the current product provides. If you are happy with the current product, then please wait for the regional tier. The global tier would be overkill for your use case.