I agree it would be great to do a project with the Internet Archive. There are also some other data archive projects that would be interesting to work with.
This is a big plus of doing something like this: essentially providing a somewhat production-grade service that people can interact with, which shows that the Storj network can store PBs of data for a single client. That's pretty solid for advertising and demonstrations.
@super3 Thank you for the answer. I didn't think about the value of a proof of concept as a way to convert developers and get Tardigrade/Storj known to the masses. That is definitely a point!
I still hope not too many see it as a way to make too many backups.
I’m still scaling up my nodes, so I’m definitely still on board. But I’m still worried about the usage pattern once customers start using the network. Time will tell whether my concerns are right or wrong, since I guess no one really knows. Even though I use hardware I already own, it’s still some work to monitor and keep everything up, so if the network ends up consisting of too much slow-moving data (too little egress bandwidth) I will exit. That being said, I of course hope Storj succeeds in onboarding lots of customers, since we are in this together and I guess Storj has the same goal as me, since we all want to make some money?
@moonshine I was always taught that if you have a problem and you are capable of fixing it, you should do it. You think high egress is important (I don’t disagree with you), so why not help us find early high-egress customers or help build a tool to support a high-egress use case? We do have a referral and partner program. You can earn much more helping us find early demand.
Has Storj ever reached out to them? According to Wikipedia their data size was around 18 petabytes in 2014. I could imagine that they are desperately looking for cost-efficient, resilient storage options at a large scale.
Yeah it sure isn’t. One has to be careful that PR projects don’t die once the PR is gone… an investigative customer will find those “corpses” and they don’t look good.
But maybe it’s just a bug somewhere and nobody noticed because you don’t check all projects every week?
But really this is no excuse. If you advertise such a service as a showcase and as an alternative in case of a Github outage or even censorship, you always need to be up to date.
I mean, what could be worse than a 4-month-old backup? Ok, a 5-month-old backup. But neither is really helpful.
Of course, it’s no excuse, this shouldn’t happen. I was just hoping it’d be a technical problem rather than abandoning old PR projects. Still doesn’t look good of course.
The main problem with Storj is poor backward compatibility. If a client works on some version, that version should stay supported for several years, but here, as soon as a new version comes out, some old client functionality stops working because it has been reworked and built on other components. Old functions should keep working for a long time; only then can enterprises adopt it. Here we see what this problem looks like. The same problem exists with FileZilla. No developer can keep up with a new SDK quickly and reliably. Each feature, once released, should keep working for a long time.
Note we don’t actually check all Github projects every week. The core goal was to get a full backup of every project, then work on keeping it up to date. We got up to about 2 PB so far.
We haven’t been doing much work on this project lately because we are doing some product-facing work. However, we haven’t forgotten or abandoned it. We plan on working on a GitBackup 2.0 that’s going to work much faster and better once we finish our current work stream. Stay tuned!