Object storage provider for Mastodon instance

I’d like to start this off with a question: is there anyone who has actually managed to make this connection work?

Progress report on the current situation: no matter what I’ve done, I still haven’t managed to get it working. So I waited a day for a friend of mine, who has more experience with servers than I do, to make sure the problem wasn’t on my end. :smiley: Here are our conclusions.

I have the whole server installed fine; the only thing that still doesn’t work is the connection to S3.
After countless attempts to get Gateway MT working, none of them successful…


We decided to give the self-hosted Gateway ST a try. However, the result was mostly the same. :slightly_smiling_face:


When using aws-cli, the Gateway ST works and lists buckets without any issues, even through the NGINX reverse proxy (from localhost:7777).
We set up the Gateway ST using an access grant.
However, regardless of which config option we changed (S3_ENABLED, S3_HOSTNAME, S3_PROTOCOL) or which gateway we used (hosted Gateway MT / self-hosted Gateway ST), the response from Mastodon was always the same → Aws::S3::Errors::InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records
The only difference was with the self-hosted Gateway ST and an omitted S3_REGION parameter, when Mastodon added an extra region warning on top of the ever-same “Access Key does not exist” error:
Nov 27 01:48:40 ns102719 bundle[1176]: S3 client configured for "us-east-1" but the bucket "mastodon" is in "us-east-2"; Please configure the proper region to avoid multiple unnecessary redirects and signing attempts

However, there is no option to configure a region when using the access-grant setup. (I thought that using a custom gateway shouldn’t involve regions at all, but apparently it still requires one somehow.)

If we played along and supplied the region, all that happened was that the message disappeared; the transfer still didn’t work.

We’re currently stuck here and can’t go any further, which is why I asked at the beginning whether anyone had managed to do this with the self-hosted gateway.

I have to admit that if I weren’t a Storj supporter, selecting one of the preset services in the install wizard would be much faster. My friend and I spent almost a whole day on it and still didn’t get a workable result. :grin: I hope we will figure it out…


This means that this plugin ignores the provided endpoint and tries to use AWS S3 instead. You need to provide a hostname with http://

Let’s experiment further.
Try to use GatewayMT again with one additional option, S3_PERMISSION = "private":

S3_ENABLED = "true"
S3_BUCKET = "mastodon"
S3_ENDPOINT = "https://gateway.storjshare.io"
S3_HOSTNAME = "gateway.storjshare.io"
S3_PERMISSION = "private"

If it still tries to use object URLs like http://gateway.storjshare.io/mastodon/cache/media_attachments/files/109/394/283/839/423/575/original/6f8d30f905e6ff79.jpeg, then you can also register your bucket as a static website. In that case:

S3_HOSTNAME = "www.domain-for-bucket.tld"


S3_ALIAS_HOST = "www.domain-for-bucket.tld"

For working with GatewayST it would be nice if the plugin could accept an option to switch from virtual-host style to path style, something like AWS_USE_PATH_STYLE_ENDPOINT=true; otherwise you will need to add a rewrite rule to your NGINX to convert http://my-bucket.domain.tld/folder1/folder2/picture.jpg (which is expected by this plugin) to http://domain.tld/my-bucket/folder1/folder2/picture.jpg (which is expected by GatewayST).
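Such a rewrite rule could be sketched roughly like this (the domain, the capture pattern, and the local gateway port 7777 are all illustrative; adjust them to your setup):

```nginx
# Sketch only: turn virtual-host-style requests (bucket name in the
# hostname) into path-style requests for a local GatewayST.
server {
    listen 80;
    # Capture the bucket name from the subdomain, e.g. my-bucket.domain.tld
    server_name ~^(?<bucket>[^.]+)\.domain\.tld$;

    location / {
        # http://my-bucket.domain.tld/a/b.jpg -> http://127.0.0.1:7777/my-bucket/a/b.jpg
        proxy_pass http://127.0.0.1:7777/$bucket$request_uri;
    }
}
```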

As it turned out, they have!


When you set up the environment, you should be very careful to keep the number of files uploaded as low as possible; otherwise the instance will upload everything, including a bunch of tiny files used for caching, and the usage charge will skyrocket. For a different project I once stupidly used rclone to sync local directories whose subdirectories contained multiple node_modules/ folders for npm, and the fee was going to be really massive (thanks to a contact from sales I successfully removed them before I was actually charged, and replaced rclone with duplicati).

Before proceeding, I would recommend having a look at https://www.storj.io/blog/february-2022-product-update and “Per Segment Fee Calculation” on https://docs.storj.io/dcs/billing-payment-and-accounts-1/pricing/usage-limit-increases/, checking how many and which files you need to upload, and calculating how much you would be charged within your tier.
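For a rough object count before switching on S3, something like this works (the default path is only illustrative; Mastodon typically keeps media under public/system):

```shell
# Rough check: count the objects Mastodon would upload to the bucket.
# MEDIA_DIR is a placeholder; point it at your instance's media directory.
MEDIA_DIR="${MEDIA_DIR:-/home/mastodon/live/public/system}"
find "$MEDIA_DIR" -type f | wc -l
```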


So, I’m reporting status again. Unfortunately there wasn’t much time today, so we’ve only managed to check the first option so far.

We tried connecting to GatewayMT again, but still no success, same problem. We tried putting in other parameters, nothing. The official Mastodon documentation only covers these possible settings.


There is no issue with the key or permissions. We set up our own uplink yesterday, outside of Mastodon, and successfully sent a file to Storj through it. But combined together, it doesn’t work out of the box.

We still want to try the other option, but I’ll have to wait until he gets back home from work for that. I just wanted to report on the quickest thing we could do right now, in our spare time.

Thank you for letting me know.
I don’t think it’s related to access rights or something.
The problem is that GatewayMT doesn’t support globally public buckets; they are private by nature.
I suppose you can try configuring a static website for this bucket and providing the site’s domain name in the S3_HOSTNAME option, but you also need to change the protocol in the S3_PROTOCOL option to http.
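In .env.production terms that would look something like this (the domain name is a placeholder for whatever you mapped the shared bucket to):

```shell
# Sketch: point Mastodon at the static-website share over plain HTTP.
# gateway.your-domain.tld stands in for the shared bucket's domain.
S3_HOSTNAME=gateway.your-domain.tld
S3_PROTOCOL=http
```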

I have tested this for my own Mastodon instance too.

Although I didn’t get to the point of actually configuring it (I just ran some tests of DCS acting as a public S3 bucket), I found the load times through the share service quite slow for end users compared to other solutions.

That makes sense given the decentralized nature of the service, and it can be solved by putting a nice CDN with caching in front of it, but you need to take care of that yourself.

Edit: quite slow == 200 ms of load time without cache for a picture of ~100 KB. Not too bad, but in a scenario where a timeline loads lots of media, it can be noticed.

We have ways to optimize performance. Please have a look at How to Hot Rod File Transfer Performance on Storj DCS - Storj DCS Docs


You may also configure nginx as a caching proxy, something like:

proxy_cache_path /tmp/nginx_mastodon_media levels=1:2 keys_zone=mastodon_media:100m max_size=1g inactive=24h;

server {
    listen 80;
    listen [::]:80;
    server_name media.your-site.tld;

    access_log /dev/null;
    error_log /dev/null;

    # Redirect all plain-HTTP requests to HTTPS
    return 301 https://media.your-site.tld$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name media.your-site.tld;

    access_log /var/log/nginx/mastodon-media-access.log;
    error_log /var/log/nginx/mastodon-media-error.log;

    # Add your certificate and HTTPS stuff here

    location /media/ {
        proxy_cache mastodon_media;
        proxy_cache_revalidate on;
        proxy_buffering on;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_cache_valid 1d;
        proxy_cache_valid 404 1h;
        proxy_ignore_headers Cache-Control;
        add_header X-Cached $upstream_cache_status;
        proxy_pass http://bucket-domain.tld/media/;
    }
}
You likely will need to configure CORS too.

We’ll be happy if we can get it working at all. Then we’ll look at optimization options. We haven’t gotten close to linking the two yet, only running each separately. He’s off work again today, so we’ll try again and see if we get anywhere. :thinking:

That mostly looks at optimizing larger transfers. I don’t think there is a lot you can do about time to first byte other than caching.

Hi Alexey, thanks for your reply, which also pushed life back into this thread. It seems that interest in running Mastodon instances is generally high. I was not able to perform any further testing yet due to high load at work, but I will check the gateway and/or proxy solutions mentioned in the other replies.


We sat on it all afternoon yesterday and got nothing. I have to admit that if I had to set up something like this without a manual, I’d have no chance of doing it myself at all. I’m just an average server user; my specialty is 3D graphics and game development, but I’m trying to help him with the experience I have.

So, I’m glad he made time for me again yesterday, and we tried something out.

Last time we said that the first option, direct linking, doesn’t work natively. So we tried the second part. We set up the uplink and managed to make a bridge, and it worked properly, although we didn’t quite understand why the access log complained about permissions when I had granted all of them while generating the credentials. But that’s probably not that important, because the uplink worked anyway.

The problem was combining the two in any way. Whatever credentials we fed Mastodon, nothing. With http, https, private, public, everything we could think of to point it at the local gateway and send it over the uplink, but just no. The uplink from our server to Storj works, but that’s it; we can’t get any further. We tried maybe 4 different ways to wire it up, and absolutely every time we hit an issue. In all cases it was the error message I mentioned here last time.

We also played around with the domain: I set the bucket up as a static website and we connected to the bucket again, but that was the end of it. We couldn’t manage to combine the two together either. I’m posting below the setup we worked with… We are really running out of options. I’m starting to slightly doubt that we’ll manage this at all until someone writes a tutorial, because he himself told me he doesn’t know what else he could do differently.

root@ns102719:~# uplink share --dns gateway.darksheep.social sj://mastodon --not-after=none
Sharing access to satellite 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@eu1.storj.io:7777
=========== ACCESS RESTRICTIONS ==========================================================
Download  : Allowed
Upload    : Disallowed
Lists     : Allowed
Deletes   : Disallowed
NotBefore : No restriction
NotAfter  : No restriction
Paths     : sj://mastodon/ (entire bucket)
Access    : code here
========== CREDENTIALS ===================================================================
Access Key ID: code here
Secret Key   : code here
Endpoint     : https://gateway.storjshare.io/
Public Access:  true
=========== DNS INFO =====================================================================
Remember to update the $ORIGIN with your domain name. You may also change the $TTL.
$ORIGIN example.com.
$TTL    3600
gateway.darksheep.social        IN      CNAME   link.storjshare.io.
txt-gateway.darksheep.social    IN      TXT     storj-root:mastodon
txt-gateway.darksheep.social    IN      TXT     storj-access:juqca3y2swq5scxd2ns4a4ydwdwa

Our test transfer was working. But only without Mastodon.



Did you specify the S3_ENDPOINT?


Perhaps S3_HOSTNAME should be


See also the lab with Pixelfed (for mastodon): How to set up Pixelfed on Ubuntu 22.04 connected to Storj DCS


I think we tried that option as well, but when he has some time again, we’ll try again.

I watched the video, nice. Too bad they didn’t show it from Mastodon’s perspective as well, because it’s very similar. Whether it can be replicated on Mastodon, we’ll see. :pray:t2:

Anyway, if anyone manages to get it going in the meantime, I’d be really incredibly grateful if you’d share your experiences here on how you sorted it out. I believe there are many programmers and administrators far more experienced than me. Thank you.

but isn’t ./uplink share --url --disallow-lists --not-after=none sj://bucketname creating a publicly accessible bucket?

Hey friends! I finally sat down today to figure out how to set up Mastodon with Storj. I tested the following instructions with Mastodon 4.0.2.

The summary is it works with the following config! It appears the missing trick from prior attempts is the S3_ALIAS_HOST field.


Okay, so the things you need here are:

  • A Storj bucket
  • Gateway credentials (access key and secret key)
  • Linksharing credentials for public access

Because of the linksharing credentials bit, the easiest way to generate all of the things you need is through our uplink CLI.

Assuming you have an uplink CLI set up (so that ls, mb, cp, etc work), the following should work for you:

To make a bucket, you can choose the bucketname, like mastodon, and do

uplink mb sj://mastodon

I’m going to keep calling it BUCKET for subsequent steps though.

To generate the GATEWAYKEY and GATEWAYSECRET, run

uplink share --readonly=false --register sj://BUCKET

This will make an access key and an access secret.

Finally, Storj doesn’t have the same sort of concept of public buckets that S3 has. We support public access, but it’s able to be more fine-grained than at the bucket level. So, we’re going to tell Mastodon about it with the S3_ALIAS_HOST setting, which seems to support path prefixes crammed in there as well.

To generate LINKSHARINGKEY you can do

uplink share --url --readonly --disallow-lists --not-after=none sj://BUCKET

You’ll get a URL, but the URL is not quite right. It will be of the form https://link.storjshare.io/s/LINKSHARINGKEY/BUCKET/, but the Mastodon S3_ALIAS_HOST should be link.storjshare.io/raw/LINKSHARINGKEY/BUCKET. Note the lack of https://, the swap of /s/ for /raw/, and the lack of trailing slash.
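Since the transformation is mechanical, a quick shell sketch makes it explicit (LINKSHARINGKEY and BUCKET are placeholders standing in for your real values):

```shell
# Turn the URL printed by `uplink share --url` into the S3_ALIAS_HOST value:
# strip https://, swap /s/ for /raw/, drop the trailing slash.
SHARE_URL="https://link.storjshare.io/s/LINKSHARINGKEY/BUCKET/"
ALIAS_HOST=$(echo "$SHARE_URL" | sed -e 's|^https://||' -e 's|/s/|/raw/|' -e 's|/$||')
echo "$ALIAS_HOST"   # prints: link.storjshare.io/raw/LINKSHARINGKEY/BUCKET
```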

Once you have these things, you should be able to plop this configuration into your Mastodon’s .env.production configuration, and you should be all set to “boost” some media “toots”.
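Assembled from the pieces above, the configuration likely looks something like this (BUCKET, GATEWAYKEY, GATEWAYSECRET, and LINKSHARINGKEY are the placeholders from the previous steps; the variable names are Mastodon’s standard S3 settings):

```shell
# Sketch of .env.production S3 settings for Storj (placeholders in caps).
S3_ENABLED=true
S3_BUCKET=BUCKET
S3_ENDPOINT=https://gateway.storjshare.io
AWS_ACCESS_KEY_ID=GATEWAYKEY
AWS_SECRET_ACCESS_KEY=GATEWAYSECRET
S3_ALIAS_HOST=link.storjshare.io/raw/LINKSHARINGKEY/BUCKET
```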


If you’re doing the rake mastodon:setup wizard, choosing Minio as your object storage provider and telling it you do want to access the uploaded files from your own domain should allow you to set the same settings in the setup wizard.

Of course, these instructions mean all your media will be served from link.storjshare.io, and maybe you don’t like that. You can always follow our instructions for sharing a bucket via DNS settings for your own domain name. If you do that, you’d replace S3_ALIAS_HOST with your domain name backed by Storj.

If you want a video walkthrough, while it’s not for Mastodon, the same problems are solved for Pixelfed in this Webinar: How to set up Pixelfed on Ubuntu 22.04 connected to Storj DCS - YouTube. Note that in the video, I stumbled over the gateway registration command, which should have been uplink share --readonly=false --register sj://pixelfed. We’re working to make that more intuitive.


As many people are currently fleeing Twitter in search of alternatives, this might make a nice blog post.


Thanks man, you’re a legend! I’ll have to check it out after work. :clap:t2: