Limit file upload

Hi everyone!
A couple of friends and I are currently creating a project with a kind of permissionless upload of a specific type of file, say under a certain size threshold (1 or 2 MB).

We plan to do this uploading through the S3 gateway, where we will supply a public access grant that can only download and upload files. For the purposes of this project it is absolutely necessary not to have a server; everything should happen on the client side.

These user files will be placed in a folder or bucket that has a unique identifier for each user, which we obtain through other means. A user can easily access their own files because they already know their identifier, but it would be difficult for them to access other folders because of the uniqueness of this identifier.
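For concreteness, here is a minimal sketch of what such a client-side upload could look like, assuming the AWS SDK for JavaScript v3 and S3 credentials already derived from the shared access grant; the bucket name and credential placeholders are assumptions, not anything confirmed in this thread:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// All names below are hypothetical placeholders.
const s3 = new S3Client({
  endpoint: "https://gateway.storjshare.io", // Storj's S3-compatible gateway
  region: "us-east-1",                       // placeholder; required by the SDK
  credentials: {
    accessKeyId: "PUBLIC_ACCESS_KEY",        // derived from the shared grant
    secretAccessKey: "PUBLIC_SECRET_KEY",    // derived from the shared grant
  },
  forcePathStyle: true,
});

// Upload a browser File under the user's unique identifier as the prefix.
async function uploadForUser(userId: string, file: File): Promise<void> {
  await s3.send(new PutObjectCommand({
    Bucket: "user-files",           // hypothetical bucket
    Key: `${userId}/${file.name}`,  // per-user "folder" via key prefix
    Body: new Uint8Array(await file.arrayBuffer()),
    ContentType: file.type || "application/octet-stream",
  }));
}
```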

I hope I explained the gist of the system. My question is: can you suggest ideas or mechanisms that could help us work around some of the issues with this approach? For example, limiting the size of files that can be uploaded through caveats, a way to prevent brute-force attacks, and so on…

If we need to run some sort of server to accomplish this (absolutely the last option, if nothing else can be done), the most we can do is have a small program running that abstracts the access grant and does something with it, like giving out temporary access or blacklisting repeated accesses. But the aforementioned unique user identifier and the user's files should under no circumstances ever reach the server; they should always remain on the client side only (and the files should eventually reach Storj directly from the client).

I hope all of this makes sense,
Thanks in advance!

It doesn't seem like Uplink or the S3 Gateway supports this functionality.

Thinking outside the box, you could run a periodic job to remove all files that are in violation.
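A minimal sketch of such a cleanup job, assuming the AWS SDK for JavaScript v3 and a hypothetical bucket name (it would have to run somewhere holding full credentials, which is itself a small server-side component):

```typescript
import {
  S3Client,
  ListObjectsV2Command,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";

const MAX_BYTES = 2 * 1024 * 1024; // the 2 MB threshold from the original post

// Walk the bucket page by page and delete anything over the size limit.
async function removeOversizedObjects(s3: S3Client, bucket: string): Promise<void> {
  let token: string | undefined;
  do {
    const page = await s3.send(new ListObjectsV2Command({
      Bucket: bucket,
      ContinuationToken: token,
    }));
    for (const obj of page.Contents ?? []) {
      if (obj.Key && (obj.Size ?? 0) > MAX_BYTES) {
        await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: obj.Key }));
      }
    }
    token = page.NextContinuationToken;
  } while (token);
}
```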

Reading about the multitenant gateway, it seems authentication can be delegated to an auth service that you implement yourself, and maybe you could implement such checks there. But I'm not sure whether the file size is supplied with the auth request.

Use the File API to get the size of the file before sending it.
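For example, a simple browser-side check (the input element ID is hypothetical):

```typescript
const MAX_BYTES = 2 * 1024 * 1024; // 2 MB

// Reject oversized files before starting the upload. This is only a
// convenience check: it runs in the client, so anyone holding the shared
// access grant can bypass it, as the next reply points out.
const input = document.querySelector<HTMLInputElement>("#file-input");
input?.addEventListener("change", () => {
  const file = input.files?.[0];
  if (file && file.size > MAX_BYTES) {
    alert(`File exceeds the ${MAX_BYTES / (1024 * 1024)} MB limit.`);
    input.value = ""; // clear the selection
  }
});
```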

I would maybe use AWS Lambda or similar as a small server-side component so the access grant is not exposed in the client. Once someone has the grant, they can upload anything anyway, circumventing all of your checks.
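One way such a function could enforce the size limit without exposing credentials is a presigned POST policy with a content-length-range condition. A minimal sketch, assuming the AWS SDK for JavaScript v3 and that Storj's S3-compatible gateway honours POST policy conditions (worth verifying), with a hypothetical bucket name:

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

// Runs server-side (e.g. in a Lambda); the S3 secret never reaches the browser.
async function issueUploadUrl(s3: S3Client, userPrefix: string, fileName: string) {
  return createPresignedPost(s3, {
    Bucket: "user-files",                 // hypothetical bucket
    Key: `${userPrefix}/${fileName}`,
    Conditions: [
      ["content-length-range", 0, 2 * 1024 * 1024], // reject uploads over 2 MB
    ],
    Expires: 300, // URL valid for 5 minutes
  });
}
```

Note that in this form the object key, and therefore the user identifier, does reach the server, which conflicts with the constraint in the original post; POST policies also allow a starts-with condition on the key, which might let the client choose the key itself.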

There is no such functionality as limiting the file size or its type on the backend. You can use a different access grant for each user and allow them to use only their own bucket/prefixes.
However, you can configure rclone and use rclone sync with filtering on the client side: Rclone Filtering
You would need to restrict access to changing the rclone configuration and/or the sync script. Users would work with the local filesystem, but only proper files would be synced to the cloud.
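For example, a sync command with a size filter (the remote name and bucket are hypothetical):

```
# Sync the local upload folder, skipping anything over 2 MB.
rclone sync /local/uploads storj:user-files --max-size 2M
```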

Thank you, I'll look into this 🙂
