Update to Storj Object Storage pricing – what node operators need to know

Do you plan to have an easy migration path from one model to another, let's say moving data from a global/regional workflow to the archive model? For example, through S3 lifecycle rules?

1 Like

This is a big if. Why would a VPN slow a node down? It adds one hop on a very low-latency network, and depending on the provider it may actually result in better routing and overall better performance.

I don’t buy this whole “people cheat with VPNs” spiel. For one, why would anyone sell their integrity for literally a couple of extra bucks a month? For another, it’s trivial to detect by correlating traffic latencies, let alone downtimes. Sounds like a pretty dumb idea.

The minuscule number of “cheaters” can be ignored. They end up with $5 and no self-respect, and there is no harm to the network. Remember, nodes are already assumed to be Byzantine.

1 Like

With this pattern, perhaps a better tier would be Active Archive: it doesn’t have a segment fee, and it looks like your bill wouldn’t change.

Right now I think it’s tied to the bucket or project level. The migration may be done with a repair if the expansion factors differ; if they are the same, it’s just a tag for billing.
I’m not sure that migration between global and regional can always be performed with a repair. In some cases you would need to re-upload the data.

1 Like

Hopefully I’m wrong here, but it seems that migration from legacy to any new tier will require the bucket/project to fully transfer to a new set of nodes, or something like that. On the date when all legacy-tier users need to transfer, how would that be handled?

Yes, we are looking at supporting movement between tiers, either on a per-object basis, via lifecycle rules, or through some other mechanism.

As Alexey explained, we already have some limited features in repair to handle similar things, but more work is needed to handle tier movement.
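For readers unfamiliar with the lifecycle-rule approach mentioned above, here is a sketch of what an S3-style transition rule might look like. The storage-class name `ACTIVE_ARCHIVE` and the idea that Storj tiers would be exposed as storage classes are purely hypothetical assumptions; Storj has not announced any such identifiers:

```python
# Sketch of an S3-style lifecycle configuration that would transition objects
# under a prefix to an archive tier after 90 days. "ACTIVE_ARCHIVE" is a
# placeholder name, NOT a published Storj storage class.
lifecycle_config = {
    "Rules": [
        {
            "ID": "move-cold-data-to-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 90, "StorageClass": "ACTIVE_ARCHIVE"},  # hypothetical tier name
            ],
        }
    ]
}

# With an S3 client such as boto3, this dict would be applied roughly as:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```

Whether Storj ends up supporting this exact shape is an open question; the point is that the rule itself is just metadata on the bucket, so the satellite could act on it server-side.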

2 Likes

We are hoping not to need to do so. I can illustrate why, based on capabilities we have today. For example, moving from Legacy Global → Global Collaboration means a large increase in the number of nodes storing the data (an increase in the expansion factor), and our existing systems can already handle that bump while keeping all the existing pieces. We are finalizing more details about Global Collaboration, so I hope to be able to share more in the coming weeks. Because it is not finalized yet, there is still a chance we land on a plan that does not keep the original pieces. Again, we are hoping not to need to do so.

3 Likes

I want to ask about something in your document. It says:

  • Global Collaboration – Multi-region.
  • Regional Workflows – Single-region, SOC 2 compliant.
  • Active Archive – Optimized for instant access.

Does that mean the way data is distributed will change? In the past, my data was stored across the globe on a mix of different providers, from home labs to real company-scale providers inside data centers. Now, if I choose “Regional Workflows”, do you have to drop the home-lab providers? I don’t see how home labs could work with SOC 2 compliance, or how you would enforce the region rules.

Would the self-hosted home-lab providers be phased out? I can’t imagine how you would manage to include home-lab providers in the Global Collaboration or Regional Workflows tiers, given the location and site-audit requirements…

I am a little disappointed, since the prices between you and Backblaze are similar (you will be more expensive after the price change). One reason I chose you as one of my backup destinations is that I knew some of my money would go to amateurs around the world like me. But it seems Storj is acting more and more like a traditional S3 storage company.

What I find particularly disappointing is that there seems to be no included egress in Active Archive, which is geared towards backup and restoration tasks. And the price even increased.
I don’t know what kind of distribution that offering will use, but in the past I thought it was great that restoration was fast due to parallelization, resilient due to global distribution, and affordable enough that you wouldn’t go bankrupt restoring your data.

There are several ways of distribution:

  1. Global: pieces are distributed across the globe, not only to the closest nodes. These can be home nodes and/or DCs in any mix.
  2. Regional: pieces are distributed as they are now, only to the nodes closest to your location. These can be home nodes and/or DCs in any mix.
  3. Regional geofenced: pieces are distributed only within the selected region (for example, only US, only EU, or only AP). These can be home nodes and/or DCs in any mix, but strictly geographically limited.
  4. Regional SOC 2: pieces are distributed only to nodes in facilities with SOC 2 certification (usually DCs), and usually also limited to a single geographical region.

I believe it’s also possible to have a SOC 2 Global Cloud (across several geographical regions) if required. But customers who require SOC 2 usually require a specific geo location too.

1 Like

This is starting to make sense. It seems to be an attempt to cater to (some of?) the scenarios I was wondering about here

So users can choose the kind of distribution that suits best. Finally.

Question: are the EU, US, and AP regions the satellite regions? For geofencing, the real geographic and jurisdictional borders are what matter.
For example, “EU” could mean Europe (e.g. including the UK) or the EU (e.g. not including the UK).
And if it is based on the satellite only, you would need to know what’s included to answer questions like: does US include Canada? Is Turkey EU or AP?

More like geofencing. Satellites are already regional, so yes, in most cases they would match.

Sooooo it looks like there will be 4 plans from now on,
aaaaand “Active Archive” at $6/TB storage and $20.48/TB egress is the equivalent of the current plan, which is $4/TB storage and $7/TB egress.

I’d like to ask how that translates to, quote:

Which are currently at $1.50/TB and $2/TB for Storj Node Operators.
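To make the comparison concrete, here is a back-of-envelope sketch linking customer prices to SNO payouts. The payout rates come from the thread; the RS(29, 80) erasure-coding scheme (roughly 2.76x expansion) is my assumption, and actual placement parameters may differ per tier:

```python
# Back-of-envelope margin sketch. Customers pay per TB of logical data;
# SNOs are paid per TB of raw pieces, so the expansion factor links the two.
EXPANSION = 80 / 29      # ASSUMED RS(29, 80) scheme, ~2.76x expansion
SNO_STORAGE = 1.50       # $/TB-month paid to node operators for storage
SNO_EGRESS = 2.00        # $/TB paid to node operators for egress (roughly 1:1
                         # with customer egress, since only ~29 pieces are read)

payout_per_customer_tb = EXPANSION * SNO_STORAGE
print(f"storage payout per customer TB: ${payout_per_customer_tb:.2f}")
# prints: storage payout per customer TB: $4.14
# ...compared with $4/TB on the legacy plan and $6/TB on Active Archive.
```

Under these assumptions the storage payouts alone roughly consume the legacy $4/TB price, which may explain some of the pressure behind the new tier pricing.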

I know you still have plenty of time to crystallize the final conditions. I would just like to kindly point that part out for your consideration, to be addressed in future announcements, in order to stay on good terms with the SNO community. To put it diplomatically.

Please don’t rush to answer.

Let the answers come in the coming weeks and months, as things become clear.
It would be really great if generosity were involved too.
Thank you for your attention.

2 Likes

I wouldn’t expect generosity from a business that has been in the red for years and is running low on treasury tokens… but hopefully Storj is still on track to become profitable in 2026. And any money Valdi is bringing in certainly helps!

The recent pricing changes were a great idea (competing more on functionality, like Object Mount, and less on price)… but I still wouldn’t be surprised if SNO payouts get a haircut at some point. Maybe $1.25 storage and $1.50 egress? If they keep payouts the same… I think that would be generous.

2 Likes

Nope, I don’t want to see this discussion. Just don’t. Last time it started the same way, with some suppositions, degenerated into a huge topic, and in the end the company made it a reality.
So if they want to tweak the payouts in the future, let them start the topic themselves. I don’t want to see that history repeating.
I don’t want any more payout cuts.

4 Likes

No topic is necessary either way. They can use payout rates to control the node (over)population. Payout rates are still high, there is excess capacity and low per-node utilization, so there is room for optimization.

The net result may be better resource utilization and higher payouts. Rates must go further down; there is no question about it in my mind.

3 Likes