Is there a price reduction plan?

What do you mean by “only for the 64MB segment size”? The documentation says the maximum segment size is 64MB (and, if so, the calculation is correct and the minimum you will ever pay is above $0.004/GB).

In addition, I took a look at the average Storj segment size (this can be done using this link: Grafana), and I noticed an interesting pattern: the average size keeps getting lower (which makes sense, as more files are uploaded to Storj we expect it to trend toward the average file size).
Currently it is 8.65MB.

So for the average user, the current storage price is:
116 × $0.0000088 + $0.004 = $0.0050208 per GB-month
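The arithmetic above can be sketched as a small function (the $0.0000088 per-segment and $0.004 per-GB rates are the ones quoted in this thread; the segment count is just 1000MB divided by the average segment size, rounded up):

```python
import math

SEGMENT_FEE = 0.0000088  # $ per segment-month (rate quoted in this thread)
STORAGE_FEE = 0.004      # $ per GB-month (rate quoted in this thread)

def price_per_gb(avg_segment_mb):
    """Effective $/GB-month for a given average segment size."""
    segments_per_gb = math.ceil(1000 / avg_segment_mb)  # 116 at 8.65MB
    return segments_per_gb * SEGMENT_FEE + STORAGE_FEE

print(price_per_gb(8.65))  # ≈ 0.0050208, matching the figure above
print(price_per_gb(64))    # ≈ 0.0041408 at the maximum segment size
```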

And given the trend (and by logic) this will get more and more expensive, since most of the average user's files are smaller than 8.65MB (documents, code, photos and so on average well below that).

About the geo-redundancy: it's a nice feature, but let's agree that as a business I care about whether Storj is durable enough (eleven 9's across 2 DCs or eleven 9's geo-redundant are the same to me).

And I will explain why: as a business, no matter how many 9's you have (above some point), I will never rely only on you; I will always have a backup.

This backup is not meant to be used as the main storage, but it is there for the failures that I am sure will happen (a lost file, the service being down, you name it).

So with Backblaze I run a slightly greater risk that the country hosting their DCs gets bombed or something and I lose my data (this is fine; as a business you keep a backup for such rare events), while with Storj it is more likely that the gateway will be slow, return some error, or have another issue.
This makes no difference, since in both cases I will have a backup and in both cases the issues are fairly rare.

I will mention, in closing, that it is nice to read how the conversation moved from “this is not true, why would you say so” → “yes, this is true, but intentionally, you silly”.

Hello @Jakob,
Welcome to the forum!

The default segment size is 64MiB or less. If more users use an S3 integration with its default chunk size (5MiB), then the average segment size will get lower and lower.
The segment fee was introduced to incentivize customers to use a bigger chunk size - it works faster and costs less.
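As a rough illustration (the fee rate and the 64MiB segment cap are taken from this thread; this is a sketch, not an official cost calculator), the chunk size chosen on the client side directly sets the number of billed segments. With the AWS SDKs this is the multipart chunk size, e.g. boto3's `TransferConfig(multipart_chunksize=64 * 1024 * 1024)`:

```python
import math

SEGMENT_FEE = 0.0000088   # $ per segment-month, as quoted in this thread
MAX_SEGMENT_MIB = 64      # segments are capped at 64MiB

def segments_for(file_mib, chunk_mib):
    """Number of segments a file produces at a given multipart chunk size."""
    return math.ceil(file_mib / min(chunk_mib, MAX_SEGMENT_MIB))

one_tib = 1024 * 1024  # MiB
print(segments_for(one_tib, 5))    # 209,716 segments at the 5MiB S3 default
print(segments_for(one_tib, 64))   # 16,384 segments at 64MiB chunks
print(segments_for(one_tib, 5) * SEGMENT_FEE)   # ≈ $1.85/month in segment fees
print(segments_for(one_tib, 64) * SEGMENT_FEE)  # ≈ $0.14/month
```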

2 Likes

50,000 segments per month are included in a paid account.

With that the price per GB is as advertised.

Also, looking at averages is always misleading, and you don't know which accounts are paid and which are free. Free accounts also have 10,000 segments included. Additionally, small files, and therefore small segments, usually exist in much larger numbers.
Having 10x10KB segments and 1x64MB results in an average segment size of around 6MB.
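That example is easy to verify (sizes in MB; a quick sketch):

```python
# Ten 10KB segments plus one 64MB segment, as in the example above
sizes_mb = [0.01] * 10 + [64]
average = sum(sizes_mb) / len(sizes_mb)
print(average)  # ≈ 5.83MB - "around 6MB", even though 10 of 11 segments are tiny
```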

1 Like

Since there is no case in which the client will actually pay $4/TB (without free segments or some other trick), I think that this price quote is misleading.

At minimum it should say “Starts at:” followed by the minimum price without free segments (it is the same logic as not writing “come and store for free” just because there is a promo code for the first X GB).

Regarding segments, you assume the user has big files and that the only reason they have many segments is the S3 default.
I make a counter-claim: for many use cases (advertised by Storj as legitimate) this is simply not the case.

The average file is considerably smaller for:
photos, code files (.js, .html etc.), documents, songs, video fragments and so on.

This means that for the average user the price is going to be significantly higher than expected (and than advertised), especially compared to the competition.

I get that you want to incentivize big files, but if your network is basically engineered for big files only, you should state this loud and clear (and, just pointing out, your total market is then significantly smaller than that of any normal object storage, since the majority of files are indeed smaller; and no, concatenating multiple files into one segment on the user side is not a normal expectation - if you want to do that magic behind the scenes, that is your business, but the UX must stay a normal “give me file x”, the same as AWS S3).

All prices are publicly available; I do not see a reason to artificially increase the price of storage when it depends on the customer's usage pattern. Take 2 minutes to configure a bigger chunk size and pay (much) less.
For customers who store small files we have a pack feature on our roadmap:

The fact that the Storj network prefers big files is mentioned multiple times across the site, and you may see that we have many sections related to video streaming and distribution use cases, as well as backups and other use cases where the data volume is bigger than a single photo.
Please note - our target is not consumers, but developers, DevOps and enterprises. Consumers may use Storj too, but they will miss a lot of consumer features (which should be implemented in consumer applications by third-party developers), including efficient storage of tiny objects (the usual consumer use case).
Right now you may use restic or duplicacy to store tiny objects more effectively.

7 Likes

Out of curiosity, has Storj Inc. considered not counting per-segment fees for segments of, let's say, at least 60MB? I would guess this kind of arrangement would make it more worthwhile for customers to attempt to pack segments better.

1 Like

It's done in a little bit different manner. Each customer has a free tier coupon ($0.37); it can cover the cost of storing 25GB of data, 25GB/mo of egress traffic and 10,000 segments. So you may have any combination of usage, e.g. 90GB storage, 0GB/mo egress traffic and 1,445 segments of 64MiB for free; or 50GB storage, 10GB/mo egress traffic and 10,360 segments of 5MiB for free; or 0.04TB storage, 0.02TB/mo egress traffic and 7,215 segments of 5MiB for free; or 0.01TB storage, 0.04TB/mo egress traffic and 8,325 segments of 1MiB for free, and so on.
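A sketch of the coupon math (the $0.004/GB-month storage, $0.007/GB egress and $0.0000088/segment rates are Storj's published prices at the time, used here as assumptions):

```python
STORAGE_FEE = 0.004      # $/GB-month (assumed published rate)
EGRESS_FEE = 0.007       # $/GB (assumed published rate)
SEGMENT_FEE = 0.0000088  # $/segment-month (assumed published rate)
COUPON = 0.37            # monthly free-tier coupon

def monthly_cost(storage_gb, egress_gb, segments):
    """Monthly bill before the coupon is applied."""
    return storage_gb * STORAGE_FEE + egress_gb * EGRESS_FEE + segments * SEGMENT_FEE

# The standard free-tier combination fits under the coupon:
print(monthly_cost(25, 25, 10_000))  # ≈ $0.363, covered by the $0.37 coupon
# One of the alternative combinations above:
print(monthly_cost(90, 0, 1_445))    # ≈ $0.37
```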

Free tier users have fewer variations available, though, because they are limited to 25GB storage, 25GB/mo egress traffic and 10,000 segments regardless. So they could store 10,000 segments of 1MiB, have only 10GB of storage used, be unable to upload more, and still have 25GB/mo egress traffic. But if they used segments of at least 5MiB, they would be able to use all of the available storage.

2 Likes

By the way, since we already mentioned geo-redundancy: Storj data is obviously decentralized across a lot of nodes (ignoring for a second the possibility of one big node owner having a serious effect on this decentralization), but what about the satellites that store the metadata?
Are they geo-redundant and decentralized as well? Assuming there is a GCP outage, will Storj continue to operate as usual?

Thanks for the reply. Do you have an ETA on when this implementation for small files will be in prod?
In addition, assuming I store small files today: when the fix is launched, do I have to reupload my files to get the better price, or will it apply automatically to all files?
Last question: do you think that at any point in the next few months Storj will drop the segment fee?

See this comment and the corresponding thread for answers:

2 Likes

You can subscribe to the roadmap to receive updates.
I would guess that if we had more big customers who wanted this feature, it would be implemented sooner.

I do not have implementation details, but according to the linked blueprint (https://review.dev.storj.io/c/storj/storj/+/6543) I would guess that it is expected to be a background satellite process; there is also a proposal to implement the packing in the uplink, so it happens from the start. So, if you would like to reduce your costs immediately, you will likely need to reupload rather than wait for the background job to finish (it could take a long time, 48h at a minimum).

Why would they do so? It is still more expensive to store small pieces, because they are likely stored as inline segments (i.e. in the satellite's quite expensive distributed database). Small segments also have a speed impact on both uploads and downloads; packing will likely improve only the price, not the speed (I could be mistaken though - we can check this after the implementation).

Right now the best option would be to use restic and its restic mount command to get access to your objects on the fly as static content for your site.

1 Like