Has Storj ever looked into the cctv market?

Yeah, unencrypted FTP is a bad thing for security footage… That’s why I’m a bit surprised that cameras actually support it.

In any case, another Docker container that combines the gateway with an FTP server interface would be great, so one doesn’t have to follow a tutorial to set it up but can simply pull the Docker image and start the container like:
docker run storjlabs/ftp-backend --bucket bucketname --access-token xyz --encryption-key 32oifj [or --config-file config.yaml] --ftp-username user --ftp-password pw
[something like that]

1 Like

Challenge accepted. Would you be open to a docker-compose stack? All the components are available in separate docker containers that I’m sure can be made to work together.
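For what it’s worth, a docker-compose stack along those lines might look roughly like the sketch below. Every image name, environment variable, and port here is a hypothetical placeholder, not a tested configuration:

```yaml
version: "3"
services:
  gateway:
    image: storjlabs/gateway        # assumed image name
    environment:
      - ACCESS_GRANT=xyz            # placeholder credentials
  ftp:
    image: example/ftp-server       # placeholder FTP server image
    environment:
      - FTP_USER=user               # placeholder
      - FTP_PASS=pw                 # placeholder
    ports:
      - "21:21"
    depends_on:
      - gateway
```

Then the whole thing would be started with a single `docker-compose up -d`.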

1 Like

However, it is there for a reason. There is probably no point in making things more complicated than they need to be at the beginning.

Personally I don’t have any security cameras; I was just thinking about easy deployment for end users.
A docker-compose stack should be something any system admin (even a hobbyist) can quickly set up, I guess.

@super3

Yes, of course deployment for end users must be easy. From what I have seen, these people are “normal” folks, not tech experts in most cases, and they need easy, affordable, but reliable solutions. Setting it up must be a one-click experience.

@fmoledina: Any results so far?

I don’t know if these approaches can be of any help, but others have certainly already thought about getting data onto AWS S3 via FTP:

https://github.com/democracyworks/s3-ftp

1 Like

Not much except ducking and hiding. The main thing I did so far is fork the gateway to develop a Dockerfile with a non-interactive setup based on environment parameters. I just committed that change now.

I’ll have a look at those FTP S3 projects and see if I can just wrap this up this week. Thanks!

3 Likes

That is amazing! :grin:

All,

I’ve made some progress toward a single Docker setup for a Tardigrade-backed FTP server. It basically follows the same approach as the non-Docker setup I described earlier in the thread. It’s not complete yet: I’m facing some kind of permissions issue when actually using the FTP server that I haven’t fully diagnosed. I thought I’d put it out in the wild anyway so others could see my thought process for this single-container setup.

The ideal outcome for this container is as follows:

  • Create a Tardigrade S3 gateway using the --non-interactive flag with inputs provided by environment variables
  • Launch s3fs with the Tardigrade S3 bucket as its backend
  • Launch vsFTPd with the s3fs mountpoint as its root folder
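The three steps above could be sketched in the container’s entrypoint roughly like this. The gateway invocation and all environment variable names are assumptions based on this thread; only the s3fs options (`passwd_file`, `url`, `use_path_request_style`) and the vsftpd invocation are standard:

```shell
#!/bin/sh
# Rough entrypoint sketch; env var names and the exact gateway flags
# are hypothetical placeholders, not the actual interface of the fork.

# 1. Configure and run the Tardigrade S3 gateway non-interactively
gateway setup --non-interactive    # inputs taken from environment variables
gateway run &

# 2. Mount the bucket via s3fs, pointing at the local gateway
echo "${GATEWAY_ACCESS_KEY}:${GATEWAY_SECRET_KEY}" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
s3fs "${BUCKET}" /srv/ftp \
  -o passwd_file=/etc/passwd-s3fs \
  -o url="http://127.0.0.1:7777" \
  -o use_path_request_style

# 3. Serve the mountpoint over FTP with vsFTPd
#    (local_root would point at /srv/ftp in vsftpd.conf)
exec vsftpd /etc/vsftpd.conf
```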

At this time, the container does all of the above, but fails when I try to connect via an FTP client.

Standard disclaimer: I’m not really a software developer, I don’t play one on TV, and I didn’t stay at a Holiday Inn Express last night. I fully welcome any and all input to improve this Docker setup.

10 Likes

Great guide!
Have you ever uploaded any large (~100 MB or so) files through s3fs? I have built the same Tardigrade + gateway + s3fs setup but started having timeouts and upload failures on large files.

1 Like

I haven’t tried files that large through the s3fs setup. I’ll check that out this week sometime. Are you able to successfully upload these files using either the native uplink or the S3 gateway directly?
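If the failures turn out to be on the s3fs side, its multipart tuning options are worth experimenting with. The option names below are standard s3fs-fuse flags, but the values are illustrative guesses rather than tested recommendations:

```shell
# Larger multipart chunks (MB), fewer parallel requests, more retries.
# Values are guesses, not tested recommendations.
s3fs bucketname /mnt/tardigrade \
  -o url="http://127.0.0.1:7777" \
  -o use_path_request_style \
  -o multipart_size=64 \
  -o parallel_count=3 \
  -o retries=5
```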

It hadn’t occurred to me to try… :sweat_smile:
I’ve since nuked my test VM but I’ll give it a go again sometime soon™.

Maybe Storj Labs could develop something similar to, or partner with, Couchdrop for that:

To have a cloud FTP server that pushes the data onto Tardigrade, effectively making Tardigrade act like an FTP server. Integrating with them should be possible: https://couchdrop.io/features/cloud-storage

Too late, they are on Amazon S3 now:

They always were. I literally mentioned that in the post you quoted. But at the time they were struggling to pay their AWS bills. I’m pretty sure that’s no longer the case; they’re well into the launch of their paid service now. So it might be too late either way. But if you give up on every potential customer because they are already on S3, there won’t be many left.

2 Likes

I fear the chances of convincing an established Amazon S3 customer to switch to a startup that launched its product a year ago are generally low. Amazon is kind of a lock-in, unless a customer has trouble with Amazon, like you mentioned Wyze had. But you are probably right: the case study suggests those problems are gone.
I am not sure, but it might be more promising to focus on companies/use cases that are not (yet) on Amazon S3.

Dunno… it’s all about the money, I suppose.
An AWS-established company may well be persuaded to move if their storage costs suddenly drop by 90%.

The problem is establishing trust in the Storj platform. Bit of a chicken and egg thing…

1 Like

Well, one of Amazon’s huge advantages is that they don’t offer just cloud storage but also a wide range of additional cloud products that work together more or less seamlessly. So it is not just about price but about the completeness of the solution.
And even though I don’t know much about Amazon’s pricing strategy, I would not be surprised if they do, or at least could, heavily discount their S3 storage when required, or when a customer also purchases additional services from them.

And of course trust is important for cloud providers and customers. Amazon is well known and established, while Storj Labs is not only a start-up, it offers an innovative product that sounds more like a threat to customer data than a protection: your data gets encrypted, chopped to pieces, and distributed worldwide, without redundancy, to consumer-grade hardware owned by average people who are more or less unknown to Storj Labs.
I could imagine that this doesn’t exactly sound like the place where you want your valuable data to reside.

3 Likes

It has redundancy due to erasure coding, just not replication, if that’s what you mean.
It also has an SLA, so there are other metrics to consider when judging the durability of the service. Replication would be the last thing I’d be interested in. If you offered a good price and speed with an acceptable SLA, and all my data were encrypted by default, I would think about it, especially if I had a lot of data to store and retrieve.
The only problem is if I also wanted compute: there aren’t many reliable and proven decentralized compute solutions yet, unlike decentralized storage. And if I used AWS EC2, then of course I’d think twice, because AWS has very expensive egress traffic out of its cloud.

2 Likes

Yes, you are right, replication is what I meant.

I would think about it too, because I like the general idea. But neither of us is the target group. And for the target group, big companies with hundreds of TB to store, I am not so sure they are willing to experiment much. For example, just recently Star Alliance chose Amazon:
Star Alliance plans to create a data lake on Amazon Simple Storage Service (Amazon S3) that will centralize data access for member airlines to accelerate the development of enterprise applications and customer features.

But of course I don’t have any inside knowledge. Maybe Storj Labs is already being overwhelmed by large customers and we just haven’t noticed. I have asked whether some insights on this topic could be shared, but got no response.

Wasabi is doing it:

Demo video:
https://info.wasabi.com/wasabi-surveillance-cloud-product-demo-video