Obviously I was not sarcastic enough for you.
Of course 60PB is nothing compared to the 3 exabytes that Wasabi is storing, and Wasabi was founded only in 2017 or so. So it is even younger than Storj.
And I am not even talking about the hundreds of exabytes we can expect on AWS, Google, or Azure. Even latecomers and smaller providers seem to be able to gather more stored data than Storj does. But if you focus mainly on one industry, then this is an expected outcome.
This is only one way to look at it. But for a company that has existed since around 2014, at some point that is not enough. You have to look at how much competitors can gather and also at the business environment. And you can clearly see that the amount of data created worldwide is exploding. Required data storage is exploding. Competitors are building exabyte-scale storage facilities, and Wasabi was able to gather 3 exabytes of customer data since 2017.
If you take that into account, the success level is low even if Storj doubled stored data from 30PB to 60PB. Additionally, it is unclear how much of that increase has to do with recent Storj decisions to distribute data more:
One of the ways Storj plans to improve the performance of the Global Collaboration tier is by increasing the expansion factor of the data stored there. As the utilization of that product increases, the space consumed on nodes should accelerate faster than in the past.
Maybe this was also applied to already existing data, as we have seen a lot of repair traffic after this announcement. So it is unclear how much of the increase from 30PB to 60PB is in fact new data or from new customers. If you follow Storj's LinkedIn, you see basically the same customer references over and over again.
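To make the expansion-factor point concrete: raising the expansion factor alone inflates the space consumed on nodes without a single byte of new customer data. A minimal sketch, using purely hypothetical factors of 2.7x before and 4.0x after (Storj has not published these exact numbers):

```python
def space_on_nodes(logical_pb: float, expansion_factor: float) -> float:
    """Space consumed across nodes = logical customer data * expansion factor."""
    return logical_pb * expansion_factor

OLD_FACTOR = 2.7  # assumed pre-change expansion factor (hypothetical)
NEW_FACTOR = 4.0  # assumed post-change expansion factor (hypothetical)

# If 30 PB on nodes corresponded to the old factor, the logical data was:
logical_data = 30.0 / OLD_FACTOR  # ~11.1 PB of actual customer data

# Re-encoding that same data at the higher factor consumes:
same_data_after = space_on_nodes(logical_data, NEW_FACTOR)
print(f"{same_data_after:.1f} PB on nodes")  # ~44.4 PB, with zero new customers
```

Under these assumed numbers, roughly 14 PB of a 30 PB jump could be overhead from re-encoding existing data rather than growth, which is exactly why the 60PB figure alone does not tell you how much new customer data arrived.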
But as said, even if it is all new data, a 30PB increase is nothing these days, when we can read that even single small customers have petabytes of data that they are moving to competitors' clouds.