I just realised how expensive object storage is, Tardigrade included.

A free plan of something like 2 GB should also be an option.

1 Like

I think the biggest issue with Tardigrade pricing is that you have a single service with non-adjustable parameters.

Let me explain in more detail:
In this thread we were looking for a good backup solution for personal data. One could argue that storing on Tardigrade is worth the additional cost because of decentralization, privacy and so on… But ultimately, personal users mostly care about cost. They might even store their data on 2 different backup systems just in case one of them fails. So they need it as cheap as possible, but not necessarily fully decentralized (and e.g. Amazon Glacier even stores data with 3x redundancy).
Also, I can easily encrypt my private backups, so privacy is not a concern and nobody is able to data-mine my backups.
So competing in the personal backup market won't really work for STORJ, because you can't beat $10 for 2 TB on Dropbox with no ingress/egress fees, or Backblaze B2 at $0.005/GB/month with 1 GB of egress per day free and $0.01/GB egress beyond that. Additionally, they keep file versions for up to 30 days, which effectively makes the storage even cheaper.
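
To put rough numbers on that comparison, here is a minimal sketch using the list prices quoted above plus the roughly $10/TB/month storage and $45/TB egress Tardigrade figures mentioned elsewhere in this thread; all of these are assumptions and may be outdated:

```python
# Rough monthly cost of a personal backup (prices as quoted in this thread;
# they are assumptions and may be outdated).

TB = 1000  # GB per TB, decimal units as the providers use

def backblaze_b2(storage_gb, egress_gb):
    """B2: $0.005/GB/month storage, ~1 GB/day (~30 GB/month) egress free, then $0.01/GB."""
    return storage_gb * 0.005 + max(egress_gb - 30, 0) * 0.01

def dropbox(storage_gb, egress_gb):
    """Dropbox: flat $10/month for up to 2 TB, no transfer fees."""
    return 10.0 if storage_gb <= 2 * TB else float("nan")

def tardigrade(storage_gb, egress_gb):
    """Figures quoted in this thread: ~$10/TB/month storage, ~$45/TB egress."""
    return storage_gb * (10 / TB) + egress_gb * (45 / TB)

workload = {"storage_gb": 1 * TB, "egress_gb": 50}  # 1 TB stored, 50 GB restored/month
for name, price in [("B2", backblaze_b2), ("Dropbox", dropbox), ("Tardigrade", tardigrade)]:
    print(f"{name:<10} ${price(**workload):6.2f}/month")
```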

If you aim for a different use case that needs higher speed or better geographical availability, the egress fees seem alright. However, since the data is still stored close to the uploader (those nodes win the race for uploads), you currently can't really achieve a CDN-like spread to various (or intended) geographical locations unless you upload the data from that region.

So my conclusion is that Tardigrade certainly has its target audience, but the lack of "customization options" makes it difficult for a broader audience to use and excludes many use cases. That may be totally fine for STORJ and work well, even though it'll probably mean 90% of the SNOs won't use STORJ themselves.

4 Likes

I completely agree with you on this point, thanks for pointing it out. Another point of mine (added after editing, sorry) is the lack of insurance, which might be solved somehow, though.

1 Like

So yeah, long story short: people who don't need super high bandwidth, super low latency and super high capacity data storage over vast geographic areas most likely don't have a use case for Tardigrade, because that's basically what the technology enables by the way it works.
And of course that comes at a cost… which those who have that kind of demand are most likely more than willing to pay…

Alas, I'm sure we will hear much more about what is going on tomorrow at the town hall, since this is the first one after the Tardigrade launch…

(reasoning and ranting commences from here…)
Thinking of Storj as a consumer backup is a fallacy; only a very limited set of consumers, and maybe even prosumers, need to access their data/backups from wide geographic areas at speeds in the 10-100 Gbit range and beyond.
IMO it's not about making it accessible to the consumer market, even if that is where it will be used in the future… For now it's a matter of focusing on making Tardigrade useful for hyperscale enterprise use cases where other solutions might barely even exist… such as that new super radio telescope being built in Australia, the Square Kilometre Array…

It's expected to produce more data per year than the size of the entire current internet, and that data will need to be accessed from across the globe… If one imagines a contract with that project that puts the cost of accessing the data onto the scientists utilizing it, that could make it a possible job for Tardigrade.

It's not easy to imagine exactly what Tardigrade could be used for, but trying to start in the consumer market is like inventing the steam engine and then trying to sell cars to people instead of making trains first…

Advanced technology is very much a trickle-down thing, and the real money is made at the top of the pyramid… Nobody wants to be a bottom feeder, even if most megacorporations basically evolve into that state as they age… That's not because it's a good market, but because it's easy to make a profit from scaling stuff you already understand, and with an inability to adapt and evolve upward, that's where they go to die…

But getting to support projects like the Square Kilometre Array is very much like climbing a mountain: one first needs a base camp (a core test user base), then a couple more on the way up with more and more demanding users, until one moves into the death zone, where nobody really survives for long, but where the true achievements are made.

And then, as the technology and its use cases permeate the world, it will become something every consumer takes for granted and pays next to nothing for, because of the ridiculous scale the hyperscale projects of the world have taken it to…

It's not many decades ago that Yahoo was a static, human-curated list of URLs, before the web crawler and search engine concepts were developed…

(end moved to the top) TL;DR reasons… lol

2 Likes

I am really not trying to argue with you, but IF Tardigrade is not a universal solution and is instead for the big guys, the hyperscale enterprises with lots of money, then maybe I have been a complete moron for wanting to replace Dropbox with Tardigrade. I would beg you guys to correct me if I am wrong, but since when did Storj become a startup hunting big guys with big money? I have been following the project since well before the ICO craze, and now I am a bit shocked that Storj and Tardigrade are not targeted at retail consumers. Maybe I have totally misunderstood the concept from the start. Sure, you can wait for the trickle-down, but you don't have to wait for something that may or may not be available in the future…

1 Like

I don't really think it's a matter of who Storj wants to serve, but a matter of the advantages and disadvantages of the technology, which decide who the user base will be…

As an example, with Tardigrade your data will most likely be stored on spinning hard drives that are running 24/7: hot data storage. This means you can request and start to receive your data in less than 100 ms, half a blink of an eye… as compared to it being stored on hard drives that are offline and only get spun up to start sending the data when a request comes in.
That gives your data a 1-2 second delay from when you request it until you start to receive it.
This is cold storage, which requires little or no electricity while the data is just sitting there.

So in that case the trade-off is: do you want to pay for HDD wear plus electricity while you are not using your data, so that you can access it 0.9-1.9 seconds faster, versus cold storage, where you are basically only paying for the space used and the depreciation of the HDD while the data is stored…

The difference might seem trivial, but that 0.9-1.9 second delay multiplies the cost of storing the data many times over…

Let's do some math xD
Let's say the HDD depreciation and purchase cost are irrelevant because they apply equally to both situations; likewise, let's say the overhead, such as server running costs, is irrelevant.

Let's say a SNO is running a 3 TB HDD that requires something like 5 watts to be active, and your data is expanded so it takes up nearly 3 times the capacity, so a 3 TB HDD holds 1 TB of your data for 5 watts.

Let's call that a 1 to 1 ratio… so 1 TB stored equals 5 watts of continuous electricity usage.

And let's say electricity costs something like $1/3 per kWh (because easy math), so you get 3 kWh for $1,
which equals 3000 Wh / 5 W = 600 hours of run time for $1, and there are about 720 hours in a month,
so it ends up costing about $1.20 per month to save 0.9-1.9 seconds when you first request your data.

If, for example, you wanted to back up that 1 TB for 3 years, which seems reasonable for a backup these days, at least consumer-wise… and you only download it once, then you will have paid roughly $43 (36 x $1.20) to save a second or two… while cold storage would have had no such cost at all… aside from the overhead that is equal for hot and cold storage.
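
The same back-of-the-envelope calculation as a tiny script; the 5 W per stored TB and $1/3 per kWh figures are the assumptions from the post above, not measured values:

```python
# Rough cost of keeping 1 TB "hot" on a spinning drive,
# using the assumptions from the post above (not measured values).

WATTS_PER_STORED_TB = 5      # 3 TB drive at ~5 W, ~3x expansion => ~1 TB of user data
PRICE_PER_KWH = 1 / 3        # $ per kWh ("because easy math")
HOURS_PER_MONTH = 720
MONTHS = 36                  # a 3-year backup

kwh_per_month = WATTS_PER_STORED_TB * HOURS_PER_MONTH / 1000
cost_per_month = kwh_per_month * PRICE_PER_KWH

print(f"~${cost_per_month:.2f}/month, ~${cost_per_month * MONTHS:.2f} over 3 years")
# -> ~$1.20/month, ~$43.20 over 3 years, just to shave ~1-2 s off the first request
```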

I dunno if you have a use case for Tardigrade; I'm sure some people will. But there will be some inherent advantages and disadvantages, which some users will pay a premium for… nobody wants to pay extra for features they don't need…

Don't even get me started on the extra costs of transferring data across the entire world instead of just storing it on your local computer… xD
Alas, there are advantages and disadvantages to both… it's just a matter of which ones you are looking for.

1 Like

I don't even know what the prices of Tardigrade are, but no pro will ever choose Tardigrade over S3, given it has less than 1% of what you can do with S3 + CloudFront, the baseline there being serverless hosting of websites. Not to mention the plethora of options around replication, object retention tiers, etc. I still see the forum being filled with issues around bucket creation and parallel up/downloads of objects.

Furthermore, if it's anywhere close to $10 for 1 TB per month excl. in/egress, then home users won't use it either, because you can get a lifetime subscription for 5 TB with unlimited in/egress at Polarbackup:

So besides the hobbyist aspect of decentralized storage for the moment, plus any future IPFS integration plans and web3 kicking off say 5 years from now, I don't really see any applications for Tardigrade.

Glacier has been almost as hot as S3 for the past 2-3 years, with retrieval time reduced from 4 hours to 5 minutes. Plus it now integrates directly into S3 and is offered as one of the S3 object storage classes.

Backups that are not regularly tested are not backups.

2 Likes

Maybe I'm missing something, but my understanding is that satellites keep a record of which nodes hold which pieces, and you'd need to consult the satellite to know where your data is stored in order to retrieve it… and satellites are centralized. So a satellite outage would have the same effect, no?

1 Like

Currently it is like you describe: the satellites are a single point of failure. But if you take a look at the roadmap, the plan is to change that and make satellites more "decentralised". They will never be as decentralised as the nodes themselves, though.

1 Like

@TopperDEL is correct about our roadmap plans for more decentralization in the future. One of the features we would like to implement later this year is the ability for users to export their metadata from the satellite they are currently using and import it into another satellite. Users would also be able to store their own backups of the metadata in case something catastrophic were to happen to a satellite.

4 Likes

I know you've already discussed volume discounts with customers. I think the current pricing is fair for relatively low-volume customers. But it would be nice to offer volume discounts right on the pricing page, so potential customers know what they would pay if they use much more than that. Be competitive with Backblaze B2 at those higher volumes and you're golden.

It may not be entirely possible to do that without paying SNOs less. But hopefully the scale this would bring would compensate for most of the price difference. And I think paying SNOs less per unit of egress, while they get more egress as a result, could still work out as a positive on both sides.
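
Purely as an illustration of what published volume discounts could look like, here is a sketch with made-up tier boundaries and rates (nothing Storj has announced), billed marginally like tax brackets:

```python
# Illustrative tiered egress pricing; the boundaries and rates below are
# made up for this example, not anything Storj has announced.
TIERS = [  # (tier upper bound in GB, $ per GB)
    (10_000, 0.045),        # first 10 TB at list price
    (100_000, 0.030),       # next 90 TB discounted
    (float("inf"), 0.020),  # everything beyond that
]

def egress_bill(gb: float) -> float:
    """Total egress cost with marginal (bracket-style) tier pricing."""
    total, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        billable = min(gb, cap) - prev_cap
        if billable <= 0:
            break
        total += billable * rate
        prev_cap = cap
    return total

print(egress_bill(50_000))  # 50 TB of egress -> 10 TB * $45 + 40 TB * $30 = $1650.0
```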

5 Likes

You can always buy a 2.5" HDD and back up your 100 GB of data.
Even USB sticks now come in 256 GB.

It seems that today Backblaze has entered the S3 market with some very keen pricing.

1 Like

Interesting. But I find it difficult to compare. E.g. you have to pay for specific API calls like list-buckets, which might get expensive (depending on usage, though). What's not included is the built-in decentralisation and encryption. I don't see where the data gets stored or whether any specific security preparations are in place.

Even though Storj is more expensive, I value what I get for that extra price (which is still lower than AWS S3 and others).

2 Likes

Just to clarify, B2's pricing has not changed. They have only added S3 API compatibility.

1 Like

This refers to their automated backup service, which isn't great (they limit bandwidth, and bulk-restore options are limited). What you probably wanted to link is Cloud Storage Pricing Comparison: Calculate Your Costs (5 USD/TB stored/month, 10 USD/TB of egress).

1 Like

Since @super3 asked for some feedback, here are my thoughts:

If we consider Tardigrade to be hot object storage (rapid accessibility and multiple downloads):

  • the storage should be semi-expensive to account for multiple POPs (say $20-25/TB)
  • the egress should be cheap (preferably at CDN level) to encourage usage (say $5-7/TB).

If we consider Tardigrade to be cold object storage (rare downloads):

  • the storage should be cheap to encourage data uploads
  • the egress should be expensive

As of right now, Tardigrade seems to follow the second option. I would never use a service that charges $45 per TB of egress as a CDN or even for any sort of file-sharing service. My intuition tells me that Tardigrade is positioned as a backup service right now.
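
To make the difference concrete, here is a rough sketch of a customer's monthly bill for a CDN-like workload under the pricing quoted in this thread (roughly $10/TB/month storage and $45/TB egress) versus the hypothetical hot-storage rates suggested above; every number here is an assumption from this discussion, not an official rate:

```python
# Monthly bill for a CDN-like workload under the two pricing models
# discussed above; all rates are assumptions from this thread, not
# official Tardigrade pricing.

def bill(storage_tb, egress_tb, storage_rate, egress_rate):
    return storage_tb * storage_rate + egress_tb * egress_rate

# Workload: 1 TB stored, downloaded ~20 times per month (CDN-ish usage).
storage_tb, egress_tb = 1, 20

current  = bill(storage_tb, egress_tb, storage_rate=10.0, egress_rate=45.0)  # cold-leaning model
proposed = bill(storage_tb, egress_tb, storage_rate=22.5, egress_rate=6.0)   # hot/CDN model

print(f"current pricing : ${current:.2f}/month")    # $910.00/month
print(f"proposed pricing: ${proposed:.2f}/month")   # $142.50/month
```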

As a storage operator, I'd rather see Tardigrade as a CDN and make sweet money from files downloaded hundreds if not thousands of times per week.

1 Like