A couple of friends and I are currently building a project that allows a kind of permissionless uploading of a specific type of file, kept under a certain size threshold (1 or 2 MB).
We plan to do this uploading through the S3 gateway, supplying a public access grant that can only download and upload files. For the purposes of this project it is absolutely necessary not to run a server; everything should happen on the client side.
Each user's files will be placed in a folder or bucket named with a unique identifier for that user, which we obtain through other means. A user can easily access their own files because they already know their identifier, but it should be difficult for them to access other folders because of the uniqueness (i.e., unguessability) of these identifiers.
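To give a concrete sketch of what we mean by "unique identifier" (Python just for illustration; the function name is ours and nothing here is decided yet), we were thinking of something with enough entropy that guessing another user's folder by enumeration is infeasible:

```python
import secrets

def new_user_id() -> str:
    # 32 random bytes ~= 256 bits of entropy, URL-safe characters only,
    # so the value can double as an object-key prefix, e.g. <bucket>/<id>/file.bin
    return secrets.token_urlsafe(32)

print(new_user_id())  # a 43-character unguessable string
```

With that much entropy, brute-forcing a valid prefix should be hopeless, though of course it does nothing against the other abuse vectors we are asking about.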
I hope I explained the gist of the system. My question is: can you suggest ideas or mechanisms that could help us work around some of the issues with this approach? For example, limiting the size of files that can be uploaded through caveats, avoiding brute-force attacks, and so on…
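To show where we currently stand on the size question: the only control we can apply without a server is a purely client-side pre-check, sketched below (our own assumption, Python just for illustration). A hostile client can trivially bypass it, which is exactly why we are asking whether the limit can be enforced via caveats instead:

```python
MAX_BYTES = 2 * 1024 * 1024  # the 2 MB threshold mentioned above

def within_limit(size_bytes: int) -> bool:
    """Client-side pre-check before uploading; NOT an enforcement mechanism."""
    return 0 < size_bytes <= MAX_BYTES
```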
If accomplishing this requires running some sort of server (absolutely a last resort if nothing else can be done), the most we could do is run a small program that abstracts the access grant and does something with it, such as handing out temporary access or blacklisting repeated accesses. But the aforementioned unique user identifier and the user's file should under no circumstances ever reach that server; they should always remain on the client side only (and the file should eventually reach Storj directly from the client).
I hope all of this makes sense,
Thanks in advance!