I have a client who is an academic publisher, and as a consequence has thousands of fairly large PDFs representing articles in the various journals that they publish.
Currently, they’ve been hosting everything on the same server, but are now hitting disk space limits and so have asked me to look at alternatives. They’re on AWS and so one option is S3 storage - however, I’d like the chance to push them towards using Tardigrade for this instead.
I’ve got the basic mechanics working in terms of file upload, and I can see that we can use access rules to get a publicly accessible URL.
However, the path between the domain and the bucket name seems to change from file to file, which means there’s no easy way to update the site so that /uploads/test-journal-pdf.pdf maps to something based on Tardigrade.
Is there any easy way of knowing in advance what the access URL will be, without having to individually create an access request for every single one of thousands of PDFs and store the results in a DB somewhere? Ideally, what I’m looking for is a consistent URL scheme that points at a specific bucket.
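To make that concrete, here’s a sketch of the kind of predictable scheme I’m hoping exists. Everything in it is an assumption on my part: the linksharing host, the idea that a single shared access string could cover the whole bucket, and the bucket name are all placeholders, not working values.

```python
# Hypothetical sketch: assumes ONE shared access grant covers the whole
# bucket, so every file's public URL is just a fixed base plus the filename.
LINKSHARE_BASE = "https://link.tardigradeshare.io/s"  # assumed linksharing host
ACCESS = "placeholder-access-grant"                   # assumed single grant
BUCKET = "journal-pdfs"                               # hypothetical bucket name

def tardigrade_url(site_path: str) -> str:
    """Map an existing site path like /uploads/foo.pdf to a stable share URL."""
    filename = site_path.rsplit("/", 1)[-1]  # keep only the final path segment
    return f"{LINKSHARE_BASE}/{ACCESS}/{BUCKET}/{filename}"

print(tardigrade_url("/uploads/test-journal-pdf.pdf"))
```

If something like this were possible, updating the site would be a one-line URL rewrite rather than a per-file lookup table.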
Apologies if this has been covered elsewhere, but I have searched the forum and can’t find anything.
PS: I know another potential option is to mount the bucket on the server (I’ve seen mention of this, but again no guidelines), but this is not ideal because we’d then be paying for bandwidth twice - once for Tardigrade, and once for downloading from the AWS server. I’d rather go direct if possible.