Cloud pricing ranges widely across different offerings. The cost of storage hardware has steadily decreased and the cost of bulk bandwidth has been trending downward dramatically over time.
Different providers are targeting different use cases, and even the “simple” pricing models tend to have conditions that are attuned to specific use cases. Wasabi, for example, has a minimum object size, minimum term, and limits on egress.
All that being said, we have a cloud object storage platform that is positioned, by both differentiated features and price, between the low-cost providers and the hyperscalers.
Our goal is to deliver pricing that is compelling for the use cases we serve. We have the added challenge of ensuring that the economics for Storage Node Operators (SNOs) are sufficiently rewarding to sustain a long-term, stable supply of capacity and low node churn.
The challenge for us is to make sure the incentive structure works in a complementary way to achieve that goal. We’re continually watching the feedback from the community and looking for ways to tweak the model. Your observation is correct: the balance of price and storage node costs is directly tied to our ability to generate sufficient demand to support a healthy storage node population.
It’s not quite chicken-and-egg level, though. We’ve used higher bandwidth payouts and surge payouts to reward our initial pool of SNOs, and we’re looking at ways to make pricing more attractive to customers, especially in use cases that drive better economics for SNOs. The number of use cases we’re able to address, particularly higher-bandwidth use cases, is expected to increase substantially with our upcoming hosted gateway service. That will be a game-changer for customers and SNOs alike, but it may require some optimization of the incentive model.
Thank you for your responses! Time flies when you’re having fun. This concludes the time allotted for the hosted portion of the Q&A, and we’ll be closing out the threads shortly.