Need help setting up Mastodon with Storj on a custom domain

Here’s my mastodon .env config:


and here’s my nginx config:

server {
	include snippets/fu.conf;
	root /home/mastodon-fu/app/public/system;

	keepalive_timeout 30;

	location / {
		try_files $uri @s3;
	}

	# should point at the Storj linksharing base URL (value omitted here)
	set $s3_backend '';

	location @s3 {
		limit_except GET {
			deny all;
		}

		proxy_set_header Host '';
		proxy_set_header Connection '';
		proxy_set_header Authorization '';
		proxy_hide_header Set-Cookie;
		proxy_hide_header Access-Control-Allow-Origin;
		proxy_hide_header Access-Control-Allow-Methods;
		proxy_hide_header Access-Control-Allow-Headers;
		proxy_hide_header x-amz-id-2;
		proxy_hide_header x-amz-request-id;
		proxy_hide_header x-amz-meta-server-side-encryption;
		proxy_hide_header x-amz-server-side-encryption;
		proxy_hide_header x-amz-bucket-region;
		proxy_hide_header x-amzn-requestid;
		proxy_ignore_headers Set-Cookie;
		proxy_hide_header Content-Disposition;

		proxy_pass $s3_backend$uri;
		proxy_intercept_errors off;

		proxy_cache CACHE;
		proxy_cache_valid 200 48h;
		proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
		proxy_cache_lock on;

		expires 1y;
		add_header Cache-Control public;
		add_header Access-Control-Allow-Origin '*';
		add_header X-Cache-Status $upstream_cache_status;
	}
}

I generated the share URL using uplink share --url --register --public --readonly=true --disallow-lists --not-after=none sj://freakuniversity, and I generated the access key and secret key on the Storj website.

Here’s what’s going on now: I can successfully upload files, and the object count in the Storj UI increases, but the images always 404. And when I run uplink ls sj://freakuniversity, only a test file that I uploaded manually shows up. What’s going on here?


One thing to check first is whether it works with link sharing but without a custom domain. That wouldn’t require adding anything to your nginx config.

So you would change your S3_ALIAS_HOST to point directly at the linksharing URL.

Curious if you had this working before you moved to hosting the media from your custom domain?
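For reference, with plain linksharing the alias host in Mastodon’s .env.production would look something like the fragment below. EXAMPLEACCESSKEY is a hypothetical placeholder; use the real linksharing access key that uplink share --register returns for your bucket.

```shell
# .env.production fragment - EXAMPLEACCESSKEY is a placeholder for the
# linksharing access key printed by: uplink share --url --register ...
S3_ALIAS_HOST=link.storjshare.io/raw/EXAMPLEACCESSKEY/freakuniversity
```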

And when I use uplink ls sj://freakuniversity only a test file that I uploaded manually shows up.

uplink ls can only decrypt file paths that were encrypted with the matching encryption passphrase. So what is likely happening is that the passphrase you typed when generating the S3 credentials doesn’t match the one you’re running uplink ls with. You could try generating a new access grant and setting up uplink again.


“Oops! Object not found.”

Hmm, can you double check that the linksharing key (the part between raw and the bucket name in the URL), e.g. from the output of

uplink share --url --register --public --readonly=true --disallow-lists --not-after=none sj://freakuniversity

matches the one that was set for the S3_ALIAS_HOST?
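A quick way to eyeball that comparison (all values below are hypothetical placeholders): the share URL embeds the key as /s/&lt;key&gt;/&lt;bucket&gt; while the raw linksharing host uses /raw/&lt;key&gt;/&lt;bucket&gt;, and the key segment must be identical in both.

```shell
# Hypothetical placeholder values - substitute your real uplink share output
SHARE_URL="https://link.storjshare.io/s/EXAMPLEACCESSKEY/freakuniversity"
S3_ALIAS_HOST="link.storjshare.io/raw/EXAMPLEACCESSKEY/freakuniversity"

# Pull the access-key segment out of each URL and compare them
share_key=$(printf '%s' "$SHARE_URL" | cut -d/ -f5)
alias_key=$(printf '%s' "$S3_ALIAS_HOST" | cut -d/ -f3)

if [ "$share_key" = "$alias_key" ]; then
	echo "linksharing keys match"
else
	echo "linksharing keys differ: $share_key vs $alias_key"
fi
```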

Looking at this a little closer, I think the problem is likely related to uplink ls not showing the files, which suggests the encryption passphrases don’t match.

So for example, if I did the following when I set up my environment:

s3-credentials passphrase: secret2
uplink-access-grant passphrase: secret

I wouldn’t see any files uploaded with the S3 creds when running uplink ls, and the linkshare would say “object not found”, because uplink can’t decrypt the file paths.

So what you’ll need to do is make sure the passphrases are the same, which may mean you need to generate a new access grant for uplink.

uplink-access-grant passphrase: secret
s3-credentials passphrase: secret

Then the share generated by uplink can decrypt the files uploaded with the s3 credentials.
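A toy sketch of why this matters (this is not Storj’s actual key-derivation scheme, just an illustration): the encryption key is derived from the passphrase, so differing passphrases yield differing keys, and nothing encrypted under one can be decrypted under the other.

```shell
# Toy illustration only - NOT Storj's real key derivation
key_from() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

if [ "$(key_from secret)" = "$(key_from secret2)" ]; then
	echo "secret/secret2: same key"
else
	echo "secret/secret2: different keys, so paths cannot be decrypted"
fi

if [ "$(key_from secret)" = "$(key_from secret)" ]; then
	echo "secret/secret: same key, so uplink ls can decrypt the paths"
fi
```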

Side note: uplink supports multiple access grants, so you could do something like uplink share --access <new access> ... when making a new access grant.