Hello. I tried many months ago to move my archive (6TB) to Storj. I currently use a Storage Box from Hetzner, but I am not happy with it.
On the front page you say that the price is $4 per TB per month. But I am afraid of the segment fee. I have read the FAQ and the detailed explanation of how segment billing works, but as an amateur user I still find it too difficult to understand.
So, let's say one folder has 36GB of files. I archive the folder with WinRAR and split the archive into 4.0GB parts. If I do this for all my files (6TB), what will the segment fee cost? Will the per-TB price get more expensive?
Also, do you permit cold storage? Sorry for my English, and thanks in advance for your answer.
I have uploaded 110GB and my segment count is already too high.
We enabled a monthly per-segment fee for all users. The fee is $0.0000088 per segment. This fee will not affect most of you. To keep the per-segment fee nominal at scale, we suggest packing small files into archives of 64MB or greater.
Check out our docs to get more information on how objects are broken down into segments.
Files smaller than 64MB count as one segment. A 1GB (1024MB) file is 16 segments.
Here is an example of what the per-segment fee would look like, depending on how much data you store on the network:
100,000 segments = $0.88
1,000,000 segments = $8.80
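The fee arithmetic above can be sketched in a few lines of Python (the rate is the $0.0000088 per segment per month quoted in this thread):

```python
# Sketch of the per-segment fee arithmetic described above.
SEGMENT_FEE_USD = 0.0000088  # monthly fee per segment, from the announcement

def segment_fee(segments: int) -> float:
    """Monthly per-segment fee in USD for a given segment count."""
    return segments * SEGMENT_FEE_USD

print(f"${segment_fee(100_000):.2f}")    # $0.88
print(f"${segment_fee(1_000_000):.2f}")  # $8.80
```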
To ensure you don’t accidentally upload too many segments, we added per-project segment limits. Just like storage and bandwidth limits, you can always request higher limits or upgrade your account.
How you use the storage is up to you, but node operators will be happier if the storage is active, since they get paid for outgoing traffic.
If your segment count is already too high, then something is likely wrong; maybe contact support about that. If your files are larger than 64MB, you should not have issues with the segment limit. Only files smaller than 64MB could be an issue, as each such file is one whole segment.
For example:
10 images of 10MB each are 10 segments.
1 archive of 100MB is 2 segments.
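Putting those examples into a short calculation, and applying the same rule to the original question (6TB archived into 4GB parts), gives a sketch like this — the 64MB segment size and $0.0000088 rate are the figures quoted earlier in the thread:

```python
import math

SEGMENT_MAX_MB = 64          # maximum segment size on the network
SEGMENT_FEE_USD = 0.0000088  # monthly fee per segment

def segments_for_file(size_mb: float) -> int:
    """Each file uses at least one segment; larger files split into 64MB segments."""
    return max(1, math.ceil(size_mb / SEGMENT_MAX_MB))

# The examples above:
assert segments_for_file(10) == 1    # a 10MB image  -> 1 segment
assert segments_for_file(100) == 2   # a 100MB archive -> 2 segments

# The original question: 6TB archived into 4GB (4096MB) parts.
parts = math.ceil(6 * 1024 * 1024 / 4096)         # 1536 parts
total_segments = parts * segments_for_file(4096)  # 64 segments per part
print(total_segments, f"${total_segments * SEGMENT_FEE_USD:.2f}")
# 98304 $0.87
```

So in that scenario the segment fee stays under a dollar per month, since 6TB in 4GB parts is roughly 98,000 segments.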
Also, I would consider using Duplicati for backups. It has a nice Storj integration and creates its own files for the backup, whose size you can define (in Storj's case, 64MB). This way you are completely independent of the number and sizes of your real files.
Are you uploading directly to nodes without using the gateway? If so, you will be uploading your data with the Reed-Solomon expansion factor (2.75).
However, you may reduce the number of chunks by increasing the minimum chunk size to 64MiB, and setting the maximum chunk size to at least 64MiB (greater means more memory used, but uploads can be faster):
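The tradeoff can be sketched numerically. This is only an illustration — the numbers and the assumed buffering model are hypothetical, and the actual option names for minimum/maximum chunk size depend on which backup tool you use:

```python
import math

def chunk_stats(total_mib: int, chunk_mib: int, parallel_uploads: int = 4) -> tuple[int, int]:
    """Return (number of chunks, rough peak buffer memory in MiB).

    Fewer, larger chunks mean fewer billed segments, but each in-flight
    chunk is buffered in memory, so peak memory grows with chunk size.
    (The buffering model here is a simplifying assumption.)
    """
    chunks = math.ceil(total_mib / chunk_mib)
    memory_mib = chunk_mib * parallel_uploads
    return chunks, memory_mib

# 100 GiB backed up with 16MiB vs 64MiB chunks (illustrative only):
print(chunk_stats(100 * 1024, 16))  # (6400, 64)
print(chunk_stats(100 * 1024, 64))  # (1600, 256)
```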