Niche use cases nobody needs. That works (I will just trust you on that one) only for native. Compare S3 with S3.
Here is a real use case:
You upload 5GB in the US and download it 500k times worldwide over HTTPS, and then we compare the Storj bill with the Cloudflare R2 bill.
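For what it's worth, the arithmetic of that scenario can be sketched quickly; the per-GB egress prices below are placeholder assumptions for illustration, not quoted rates:

```python
# Back-of-the-envelope bill for the scenario above: a 5 GB object
# downloaded 500k times. Prices are placeholder assumptions; check the
# providers' current price sheets before drawing conclusions.
file_gb = 5
downloads = 500_000
egress_gb = file_gb * downloads            # 2,500,000 GB of egress

storj_egress_per_gb = 0.007                # assumed $/GB for paid egress
r2_egress_per_gb = 0.0                     # R2 advertises free egress

storj_bill = egress_gb * storj_egress_per_gb
r2_bill = egress_gb * r2_egress_per_gb

print(f"egress: {egress_gb:,} GB")
print(f"Storj bill (assumed rate): ${storj_bill:,.0f}")
print(f"R2 bill (assumed rate):    ${r2_bill:,.0f}")
```

Whatever the exact rates, the shape of the result is the same: when egress is 500,000 times the stored volume, any per-GB egress charge dominates the bill.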
That the performance is bad, we already know; just look at TrueNAS, who use Storj as a “CDN”.
How did that work out so far?
They don’t. If you really need fast storage, you get it locally without peering.
You misunderstood: we have 60PB paid. And 60PB is nothing.
Yeah it looks that way, when you start with nothing.
See, we had this discussion in the Tesla forums back in 2016. Tesla doubled sales almost every year, so people there naturally assumed that Tesla would become the biggest brand by 2024 and overtake VW and Toyota.
So even if Storj managed to double storage every year, it would still take a few years of doubling before the number gets impressive.
If they can survive that long. And that is under the assumption that a paying customer is actually a net positive for Storj.
A claim asserted without evidence can be dismissed without evidence.
Yes. To get the promised performance you need to use native.
This demonstrates ignorance of the fundamentals.
It is like buying a Ferrari and then judging it by grocery runs against a Honda Civic. The Civic will win on cost, comfort, maintenance, and ordinary errands. That does not mean the Ferrari is pointless. It means you picked a use case optimized for something else.
S3 is Amazon’s protocol. If you have a pile of legacy systems, Storj offers S3 gateway software that you can host on your own systems to start using the service while migrating clients to native.
If you are so lazy that you don’t even want to host that yourself, or simply don’t care about performance, Storj even hosts that gateway for you for free. Yes, performance will suffer, because this centralizes the network and you get the drawbacks of both worlds, but it’s your choice.
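As a sketch of what “migrating later” looks like in practice: most S3-compatible clients accept a custom endpoint, so legacy code can be repointed at a gateway (self-hosted or hosted) through configuration alone. The endpoint URL and credentials here are placeholders, not real values:

```python
# Settings an S3-compatible client needs to target a gateway instead of
# AWS. Endpoint URL and keys are placeholders for illustration.
gateway_config = {
    "endpoint_url": "https://gateway.example.com",  # self-hosted or hosted gateway
    "aws_access_key_id": "ACCESS_KEY",
    "aws_secret_access_key": "SECRET_KEY",
}

# With boto3, for example, existing S3 code then keeps working unchanged:
#   import boto3
#   s3 = boto3.client("s3", **gateway_config)
#   s3.upload_file("local.bin", "my-bucket", "remote.bin")
print(gateway_config["endpoint_url"])
```

The point is that the application layer does not change; only the endpoint does, which is what makes the gradual migration to native possible.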
Comparing performance of a third-rate S3 adapter with Amazon, whose entire stack is built around S3, is apples and oranges.
Now you are complaining that your Ferrari cannot haul the trailer.
If you need a CDN, use a CDN; there are services built for that. If your use case is tiny storage and massive egress, then obviously a plan with free egress will look amazing and a plan that charges for egress will not. How is that Storj’s fault?
This is pounding screws with a microscope and then declaring the microscope bad.
One of the use cases Storj is genuinely good at is distributed media-like workflows where large files need to become available across geographies almost immediately.
TrueNAS is not an authoritative example of anything anymore.
They use S3. They did not even bother to bolt native integration into their appliances as part of the “partnership”. They abandoned FreeBSD and moved the whole system to Linux. Nothing they do makes much sense anymore, and I would not use them as a benchmark for anything.
This is an example of enshittification in progress. I’m deeply disappointed in the company’s direction and no longer use their services or hardware, let alone as a benchmark of correct design.
This is a marketing issue, not a product issue, so I’m not sure what you are arguing.
Again, not quite true: our gateways are distributed, just not at the same scale as the nodes, so it also works for S3. Depending on the customer’s connection, S3 might be faster or Storj native might be faster.
Not always. That holds if the “local storage” is on your local network, on the same computer, or in your VPC with a direct connection to you, but not if it sits in the cloud. I especially recommend looking at time-of-day deviations (morning/day/evening). Peering is usually faster than downloading from a single cloud server without a CDN or cache proxy. If you add a CDN to the equation, then to ensure comparability you also need to use it for Storj.
I’d be interested to see how this works in your location using Storj Native, Storj S3, and your favorite cloud storage, all three without a CDN. Please also provide the speedtest results for your location.
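A minimal way to run that comparison yourself, assuming you have one test object reachable over HTTPS from each provider (the URLs below are placeholders):

```python
# Crude throughput probe: fetch the same object from different endpoints
# and report Mbit/s. URLs are placeholders; run it from your own location,
# ideally morning/day/evening, since peering quality varies by time of day.
import time
import urllib.request

def measure_mbps(url: str) -> float:
    """Download url once and return observed throughput in Mbit/s."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        size_bytes = len(resp.read())
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against zero elapsed
    return size_bytes * 8 / 1e6 / elapsed

endpoints = {
    "storj-native": "https://example.com/native/test.bin",  # placeholder
    "storj-s3": "https://example.com/s3/test.bin",          # placeholder
    "other-cloud": "https://example.com/cloud/test.bin",    # placeholder
}

# Uncomment to run against real URLs:
# for name, url in endpoints.items():
#     print(f"{name}: {measure_mbps(url):.1f} Mbit/s")
```

A single fetch is noisy; repeating each measurement several times and at different hours gives a fairer picture.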
Obviously I was not sarcastic enough for you.
Of course 60PB is nothing compared to the 3 exabytes that Wasabi is storing, and Wasabi was founded only in 2017 or so, so it is even younger than Storj.
And I am not even talking about the hundreds of exabytes that we can expect at AWS, Google, or Azure. Even latecomers and smaller providers seem to be able to gather more stored data than Storj does. But if you focus mainly on one industry, then this is an expected outcome.
This is only one way to look at it. But for a company that has existed since around 2014, at some point it is not enough. You have to look at how much competitors can gather and also at the business environment. And you can clearly see that the amount of data created worldwide is exploding. Required data storage is exploding. Competitors are building exabyte-scale storage facilities, and Wasabi was able to gather 3 exabytes of customer data since 2017.
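To put a number on that gap, using the 60PB and 3EB figures from this thread and taking both at face value:

```python
# How many yearly doublings would 60 PB need to reach Wasabi's claimed
# 3 EB (~3000 PB)? Both figures are taken from this discussion as-is.
import math

storj_pb = 60
wasabi_pb = 3000

doublings = math.log2(wasabi_pb / storj_pb)
print(f"{doublings:.1f} doublings needed")  # ~5.6, i.e. almost 6 years at 2x/year
```

So even sustained doubling every year, which almost no company manages for long, would take most of a decade just to match where one competitor already is today.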
If you take that into account, the success level is low even if Storj doubled stored data from 30PB to 60PB. Additionally, it is unclear how much of that has to do with recent Storj decisions to distribute data more:
Maybe this was also done for already existing data, as we have seen a lot of repair after this announcement. So it is unclear how much of the increase from 30PB to 60PB is in fact new data or from new customers. If you follow Storj’s LinkedIn you see basically the same customer references over and over again.
But as said, even if it is all new data, a 30PB increase is nothing these days, when we can read that even single small customers have petabytes of data that they are moving to competitors’ clouds.