Gateway MT beta: looking for testers

As you might have seen in a couple of our product/development updates, we’ve been working on a new S3 gateway. The biggest difference between this new gateway and our existing gateway is that the new gateway is multi-tenant, and Storj Labs will be hosting a few of them. This means you’ll be able to integrate with the Storj network via HTTP, and you won’t have to run anything extra on your end.

One thing to keep in mind is that the Gateway MT is still in BETA, so DO NOT run anything mission-critical on it.

How Can You Help?

We need your help to test the Gateway MT. We’ve done a good deal of internal testing, but we could use some fresh eyes and different use cases to help us uncover bugs and figure out where we need to improve. Take a look at the Gateway MT GitHub repo if you want to learn more!

We’re also looking for beta testers on a new Satellite in a closed beta called US2. This Satellite has many improvements, including multipart upload support—we’ll share more about these improvements at a later date. If you’re interested in helping us test the new functionality, please sign up at

Where do you report issues?

The best place to report issues while testing the Gateway MT is this forum topic. Our Gateway MT engineering team will be checking it, asking questions, and trying to resolve any uncovered problems. Depending on what we have going on internally, we may not be able to respond immediately.

Known Issues with the Gateway MT:

  • It doesn’t support multipart upload yet
  • It doesn’t support end-to-end encryption yet
  • There are some performance issues

Signup link:


This new Beta feature was only released on Europe-West-1 today!


This is great news!

Will this gateway support virtual-hosted-style urls?


Yes! It supports both virtual-hosted-style and path-style URLs.
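For anyone unfamiliar with the difference, here is a minimal sketch of the two addressing styles (the hostname below is a placeholder, not an official gateway endpoint):

```python
# Illustrative sketch of the two S3 URL addressing styles.
# "gateway.example.com" is a placeholder hostname for illustration.

def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    """Path-style: the bucket name appears in the URL path."""
    return f"https://{endpoint}/{bucket}/{key}"

def virtual_hosted_url(endpoint: str, bucket: str, key: str) -> str:
    """Virtual-hosted-style: the bucket name appears as a subdomain."""
    return f"https://{bucket}.{endpoint}/{key}"

print(path_style_url("gateway.example.com", "my-bucket", "photos/cat.jpg"))
# https://gateway.example.com/my-bucket/photos/cat.jpg
print(virtual_hosted_url("gateway.example.com", "my-bucket", "photos/cat.jpg"))
# https://my-bucket.gateway.example.com/photos/cat.jpg
```

Virtual-hosted-style requires the gateway to resolve bucket subdomains via DNS, which is why not every S3-compatible service supports it.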


Yes it does already.


We added a quick guide on how to start using the Gateway MT with the AWS CLI on our documentation site:
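As a rough idea of what that looks like, a session might run along these lines (this is a hypothetical sketch, not the guide itself; the endpoint URL and credential placeholders must be replaced with the values from your own setup):

```shell
# Configure the AWS CLI with the credentials generated for the gateway.
# The values in angle brackets are placeholders.
aws configure set aws_access_key_id     <your-gateway-access-key>
aws configure set aws_secret_access_key <your-gateway-secret-key>

# Point each command at the Gateway MT endpoint instead of AWS.
# "https://gateway.example.io" is a placeholder endpoint.
aws s3 mb s3://my-test-bucket --endpoint-url https://gateway.example.io
aws s3 cp ./backup.tar s3://my-test-bucket/ --endpoint-url https://gateway.example.io
aws s3 ls s3://my-test-bucket --endpoint-url https://gateway.example.io
```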


Hey Brandon,

I’ve just tried creating a new access grant as per the instructions, and it doesn’t seem to give me access to create any buckets. Any ideas?


Hi @will.topping! Thanks for testing it out. Does MSP360 have any logs? You should be able to create buckets if you didn’t restrict the access grant during creation.


This is certainly awesome! It opens up a LOT more options for software that supports S3.

I was able to set up both Synology Hyper Backup as well as Synology Cloud Sync using this gateway!
I will test this for a while as an additional backup first until this is out of beta at least. But it looks like Tardigrade is going to be replacing my traditional backup setup.

I did notice that Synology doesn’t let you set part size for multipart manually but only offers several options in a drop down list. For cloud sync the maximum part size is 128MB, which right now means that any file sizes over that will simply stop uploading and hang the sync process. For hyper backup you seem to be able to set it up to 512MB, but I think it actually creates small file chunks itself anyway, so I think it won’t be an issue there.
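To put some numbers on that: assuming the standard S3 limit of 10,000 parts per multipart upload, the part size a client lets you pick directly caps the largest object it can store, and without multipart support at all (the current Gateway MT beta), a single part is the whole object:

```python
# Back-of-the-envelope object size limits, assuming the standard
# S3 cap of 10,000 parts per multipart upload.
MAX_PARTS = 10_000

def max_object_size_mb(part_size_mb: int, multipart_supported: bool) -> int:
    """Largest object (in MB) that fits, given a fixed part size."""
    if not multipart_supported:
        # Single PUT: the one "part" is the entire object.
        return part_size_mb
    return part_size_mb * MAX_PARTS

# Synology Cloud Sync's 128 MB part cap, against a gateway without
# multipart support:
print(max_object_size_mb(128, multipart_supported=False))  # 128
# The same cap once multipart lands:
print(max_object_size_mb(128, multipart_supported=True))   # 1280000
```

That would explain why anything over the part size hangs today: the client tries to fall back to multipart and the gateway can’t complete it.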


It seems to let me create a folder within one of my existing buckets, but not a new bucket? There was a similar issue a couple of weeks ago that someone said was a bug and had been fixed. Just checking this isn’t related to that? That one showed up if you tried to create a new access grant while you already had buckets and access grants in place.

This bug was fixed, but we still have a problem with the browser cache. Please reload the page with a hard refresh and generate a new access grant.

Thanks @littleskunk, I couldn’t remember who mentioned it! I’ve been using a brand-new Mac since last Friday, so a stale cache shouldn’t be the issue? I’ll try creating a new access grant though…

Can I get some clarification on exactly what the server side encryption means in simple security terms? Does this mean that Storj will have access to my keys and could potentially view my files?

I appreciate you wouldn’t do that, but I just wanted to understand what the differences are in using the MT Gateway.

Looks like one of my longer backup tasks failed. Unfortunately in the Synology UI, they seem to think this is sufficient logging…

Since that obviously wasn’t helpful, I searched for more detailed logging and found the following errors in /var/log/messages:

2021-01-26T22:40:22+01:00 DISKSTATION img_worker: (23667) [err] error_mapping.cpp:33 send_file:424: failed, {"aws_error_code":"ClientDisconnected","aws_error_type":"client","error_class":"Aws\\S3\\Exception\\S3Exception","error_message":"Client disconnected before response was ready","http_status_code":499,"success":false}

2021-01-26T22:40:25+01:00 DISKSTATION img_worker: (23667) [err] error_mapping.cpp:33 removeObject:989: failed, {"aws_error_code":"PhpInternalXmlParseError","aws_error_type":"client","error_class":"Aws\\S3\\Exception\\S3Exception","error_message":"A non-XML response was received","http_status_code":403,"success":false}
I cut out a lot of stuff that was Synology-specific.

Just trying to rerun now, as this feels like it’s not something that would necessarily pop up again. And since I’m using incremental backups, it should just add whatever is missing from the previous run (Edit: apparently it does not do this when the initial run fails; it’s starting over). I’ll keep you posted.


Storj will have access to your keys, but only while they are being used to process active requests. At rest, the key is encrypted with your access key id, and we don’t store the access key id (only a hash of it). During request processing we receive your access key id, look up the right encrypted access grant, decrypt it with the access key id, and then use the access grant. Additional improvements to the in-memory caching of the keys are also planned to further limit the exposure. Currently the keys remain cached in memory after they are first used.
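A toy sketch of that flow, to make it concrete. The XOR "cipher" here is a throwaway stand-in purely for illustration; the real gateway uses proper encryption, and the field names are made up:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream derived from SHA-256 (illustration only, not real crypto)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR against the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

# Registration: the gateway stores only a hash of the access key id,
# plus the access grant encrypted under the access key id itself.
access_key_id = b"client-held-access-key-id"
access_grant  = b"serialized access grant with encryption keys"

stored_record = {
    "key_hash": hashlib.sha256(access_key_id).hexdigest(),
    "encrypted_grant": toy_encrypt(access_grant, access_key_id),
}

# Request time: the client presents the access key id; the gateway finds
# the record by its hash and can decrypt the grant only for that request.
presented = b"client-held-access-key-id"
assert hashlib.sha256(presented).hexdigest() == stored_record["key_hash"]
grant = toy_encrypt(stored_record["encrypted_grant"], presented)
assert grant == access_grant
```

The point of the scheme: without the client-presented access key id, the stored record alone (hash plus ciphertext) is not enough to recover the access grant.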


Can you try creating a bucket using the AWS CLI with that access grant? I’m curious whether this is an MSP360-specific issue or not. We have implemented a limited set of features, so it may be that something MSP360 expects isn’t implemented yet.

In terms of security, the gateway-mt could copy your access grant, which would give it full access to all of your files. We are working on a client-side solution so that the gateway-mt could still delete your files but wouldn’t be able to decrypt them.

We have implemented it in a way that we don’t hold your keys, but from a security perspective you have no good way to verify that, so you should always assume we are able to read them. Either way, the current solution stores your access grant encrypted inside the gateway-mt. It only decrypts and uses your access grant when you interact with the gateway-mt; the S3 access key is used for that. So let’s say the gateway-mt gets compromised. All you have to do is throw away your S3 access key. Without it, even a compromised gateway-mt wouldn’t be able to decrypt the stored access grant.

Sorry, it is getting late over here and I don’t feel like I’m explaining it well enough. In short, the gateway-mt only decrypts your access grant while you need it and otherwise always stores it encrypted.


Quick update, as expected the second backup attempt completed successfully! I’m gonna chalk that first error up to a fluke. Just testing an incremental run now, but I’m sure that will be fine too.

Nice job on this gateway!

Quick question, is multipart upload planned to be implemented on all satellites?


Yes. We started with a new satellite first in order to test multipart upload without having to worry about migration. We are currently working on migrating all the other satellites. In the meantime, you can test multipart upload on US2 if you like.