Yeah, unencrypted FTP is a bad thing for security footage… That’s why I’m a bit surprised that cameras actually support it.
In any case, another docker container that combines the gateway with an FTP server interface would be great, so one doesn’t have to follow a tutorial to set it up but can simply pull the docker image and start the container like:
docker run storjlabs/ftp-backend --bucket bucketname --access-token xyz --encryption-key 32oifj [or --config-file config.yaml] --ftp-username user --ftp-password pw
[something like that]
Challenge accepted. Would you be open to a docker-compose stack? All the components are available in separate docker containers that I’m sure can be made to work together.
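Something along these lines is what I’m picturing - a completely untested sketch, where the storjlabs/gateway image name, the locally built ftp-backend image, the environment variable names, and the gateway port are all assumptions or placeholders rather than things I’ve verified:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  gateway:
    image: storjlabs/gateway                  # assumed image name
    environment:
      - STORJ_ACCESS=${STORJ_ACCESS}          # placeholder names for whatever the
      - MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY}  # non-interactive setup actually expects
      - MINIO_SECRET_KEY=${MINIO_SECRET_KEY}
  ftp:
    build: ./ftp-backend                      # the s3fs + vsftpd container discussed in this thread
    depends_on:
      - gateway
    cap_add:
      - SYS_ADMIN                             # s3fs needs FUSE inside the container
    devices:
      - /dev/fuse
    environment:
      - S3_ENDPOINT=http://gateway:7777       # assuming the gateway's default port
      - S3_BUCKET=${BUCKET}
      - FTP_USER=${FTP_USER}
      - FTP_PASS=${FTP_PASS}
    ports:
      - "21:21"                               # a passive port range would be needed too
EOF
docker-compose up -d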
Personally I don’t have any security cameras, I was just thinking about easy deployment for end users.
A docker-compose stack should be something a system admin (even a hobbyist) could set up quickly, I guess.
Yes, of course deployment for end users must be easy. From what I have seen, these are “normal” people, not tech experts in most cases, and they need easy, affordable, but reliable solutions. Setting it up must be like a one-click experience.
Not much except ducking and hiding. The main thing I’ve done so far is fork the gateway to develop a Dockerfile with a non-interactive setup based on environment variables. I just committed that change now.
I’ll have a look at those FTP S3 projects and see if I can just wrap this up this week. Thanks!
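The gist of the change is an entrypoint that replaces the interactive wizard with something like this (flag names written from memory, so double-check them against gateway setup --help; --non-interactive is the only one I’m sure about, and the env var names are just my own):

# run once at container start, using values passed in via docker run -e / compose
gateway setup --non-interactive \
  --access "$STORJ_ACCESS" \
  --minio.access-key-id "$MINIO_ACCESS_KEY" \
  --minio.secret-key "$MINIO_SECRET_KEY"

# then serve S3 inside the container
gateway run --server.address 0.0.0.0:7777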
I’ve made some progress toward a single Docker setup for a Tardigrade-backed FTP server. It basically uses the same approach as the non-Docker setup I described earlier in the thread. It’s not complete yet, as I’m facing some kind of permissions issue when actually using the FTP server that I haven’t fully diagnosed. I thought I’d put it out in the wild anyway so others could see my thought process for this single-container setup.
The ideal outcome for this container is as follows:
Create a Tardigrade S3 gateway using the --non-interactive flag, with inputs provided by environment variables
Launch s3fs with the Tardigrade S3 bucket as its backend
Launch vsFTPd with the s3fs mountpoint as its root folder
At this time, the container does all of the above, but fails when I try to connect via an FTP client.
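For reference, here’s roughly what the entrypoint does after the gateway part from my earlier post; it’s simplified, and the allow_other / allow_writeable_chroot bits are my current guesses at the permissions problem rather than a confirmed fix:

#!/bin/sh
# 1. Non-interactive gateway setup as sketched earlier, then run it in the background
gateway run &
sleep 5    # crude wait for the gateway to start listening

# 2. Mount the bucket with s3fs, pointed at the local gateway
echo "$MINIO_ACCESS_KEY:$MINIO_SECRET_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
mkdir -p /mnt/storj
s3fs "$BUCKET" /mnt/storj \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=http://127.0.0.1:7777 \
  -o use_path_request_style \
  -o allow_other    # so vsftpd's unprivileged user can see the FUSE mount

# 3. Run vsftpd with the mount as the FTP user's root
#    (allow_writeable_chroot=YES in vsftpd.conf avoids the "refusing to run with
#    writable root inside chroot()" error, another suspect for the failed logins)
vsftpd /etc/vsftpd.conf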
Standard disclaimer: I’m not really a software developer, I don’t play one on TV, and I didn’t stay at a Holiday Inn Express last night. I fully welcome any and all input to improve this Docker setup.
Great guide!
Have you ever uploaded any large (~100 MB or so) files through s3fs? I built the same Tardigrade + gateway + s3fs setup but start getting timeouts and upload failures on large files.
I haven’t tried files that large through the s3fs setup. I’ll check that out this week sometime. Are you able to successfully upload these files either using the native uplink or S3 gateway directly?
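If you want to narrow it down in the meantime, here’s roughly what I’d try (assuming the gateway is on its default 127.0.0.1:7777 and your bucket is called bucketname; the s3fs option values are just guesses to experiment with):

# 1. Take s3fs out of the picture: push the same file straight through the local gateway
aws --endpoint-url http://127.0.0.1:7777 s3 cp ./bigfile s3://bucketname/bigfile

# 2. If that works, remount s3fs with larger multipart chunks (size in MB) and more retries
s3fs bucketname /mnt/storj \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=http://127.0.0.1:7777 \
  -o use_path_request_style \
  -o multipart_size=64 \
  -o parallel_count=2 \
  -o retries=5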
Maybe Storj Labs could develop something similar to, or partner with, Couchdrop for that:
a cloud FTP server that pushes the data onto Tardigrade, which would effectively make Tardigrade act like an FTP server. Integrating with them should be possible: https://couchdrop.io/features/cloud-storage
They always were. I literally mentioned that in the post you quoted. But at the time they were struggling to pay their AWS bills. I’m pretty sure that’s no longer the case. They’re well into the launch of their paid service now. So it might be too late either way. But if you give up on every potential customer who is already on S3… there won’t be many left.
I fear the chances of convincing an established Amazon S3 customer to switch over to a startup that launched its product only a year ago are generally low. Amazon is kind of a lock-in, unless a customer has troubles with Amazon like you mentioned Wyze had. But you are probably right that the case study shows those problems seem to be gone.
I am not sure but it might be more promising to focus on companies/use cases that are not (yet) on Amazon S3.
Well, one of Amazon’s huge advantages is that they do not offer just cloud storage but also a wide range of additional cloud products that work together more or less seamlessly. So it is not just about the price but about the completeness of the solution.
And even though I don’t know much about Amazon’s pricing strategy, I would not be surprised if they do, or at least could, heavily discount their S3 storage when required, especially if a customer also purchases additional services from them.
And of course trust is important for cloud providers and customers. Amazon is well known and established, while Storj Labs is not only a start-up, it offers an innovative product that sounds more like a threat to customer data than a protection: your data gets encrypted, hacked to pieces, and distributed worldwide, without redundancy, to consumer-grade hardware owned by average people who are more or less unknown to Storj Labs.
I could imagine that this doesn’t exactly sound like the place where you want your valuable data to reside.
It has redundancy due to erasure encoding, just not replication, if that’s what you mean.
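For example (going from memory of the whitepaper numbers, so treat them as approximate): each segment is Reed-Solomon encoded so that any 29 out of the 80 uploaded pieces are enough to rebuild it. That is an expansion factor of roughly 80/29 ≈ 2.76x, versus the 3x you would pay for plain triple replication, while tolerating the loss of far more nodes.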
It also has an SLA, so there are other metrics to consider for the durability of the service. Replication would be the last thing I’d be interested in. If you offered a good price and speed with an acceptable SLA, and all my data were encrypted by default, I would think about it, especially if I had a lot of data to store and retrieve.
The only problem is if I wanted compute too - there are not many reliable and proven decentralized compute solutions yet, unlike decentralized storage. And if I used AWS EC2, then of course I’d think twice, because AWS has very expensive egress traffic from their cloud.
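To put a rough number on it (from the published price lists as I remember them, so verify before quoting): pulling 10 TB out of AWS at about $0.09/GB is around $900 in egress alone, while Tardigrade’s listed $45/TB works out to about $450 for the same download.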