SignatureDoesNotMatch with S3 JS API calling Storj S3

I generated S3 credentials from the CLI. Access via the CLI works fine, but with the JS libraries (minio S3, AWS SDK for JS v2, and v3) the clients fail v4 signature validation against the endpoint. I get the error below.

Based on various GitHub issues, it looks like this is a minio config setting on the gateway side.

The error output is the following:

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.

import {S3} from '@aws-sdk/client-s3';

const s3 = new S3({
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  region: 'us-west-2',
  endpoint: {
    hostname: 'gateway.us1.storjshare.io',
    protocol: 'https',
    path: '/',
  },
  // s3ForcePathStyle: true,
  // signatureVersion: 'v4',
});
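For reference, a commonly suggested alternative with the v3 SDK is a plain string endpoint plus path-style addressing (the v3 option name for `s3ForcePathStyle` is `forcePathStyle`). A hedged config sketch, reusing the same credential variables as above:

```javascript
import { S3 } from '@aws-sdk/client-s3';

// Sketch only: string endpoint instead of an endpoint object, and
// path-style addressing so the bucket is not prepended to the hostname.
const s3 = new S3({
  credentials: { accessKeyId, secretAccessKey },
  region: 'us-west-2',
  endpoint: 'https://gateway.us1.storjshare.io',
  forcePathStyle: true,
});
```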

I can’t move off S3 because of use-case-specific requirements.

We are sorry you are having these issues with the minio endpoint. We have escalated the issue to our dev team who are currently in the process of trying to replicate it and will post a reply as soon as possible.


I’ve also now tried the latest S3Client, with the same error.

import {S3Client, PutObjectCommand} from '@aws-sdk/client-s3';
import * as Base64 from 'base64-arraybuffer';
import RNFS from 'react-native-fs';

const s3Client = new S3Client({
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  region: 'us-west-2',
  endpoint: {
    hostname: 'gateway.us1.storjshare.io',
    protocol: 'https',
    path: '/',
  },
});

const base64 = await RNFS.readFile(fileuri, 'base64');
const arrayBuffer = Base64.decode(base64);

const config = {
  Bucket: bucketName,
  Key: `${keyFolder}/${Key}`,
  Body: arrayBuffer,
  ContentType,
};

await s3Client.send(new PutObjectCommand(config));

The primary reason I can think of that this might not work is invalid credentials. Here’s an example with AWS SDK v2 for JavaScript with test credentials set up: Edit fiddle - JSFiddle - Code Playground. One way to troubleshoot might be to double-check the credentials passed to the constructor and/or try them with that example (you can create a test bucket and a test file or use existing ones).

The AWS CLI works as expected with my credentials.

This was the command I used to generate the credentials:
uplink share --readonly=false --not-after +1h --register sj://${process.env.STORJ_BUCKET}/${prefix}/ --auth-service=https://auth.us1.storjshare.io


I also tried using a string endpoint like in your example, but that didn’t work either.

endpoint: 'https://gateway.us1.storjshare.io/',

Hi @awcchungster, could you try changing the signature version of the s3 client to v2 and see if that works?

const s3Client = new S3Client({
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  region: 'us-west-2',
  signatureVersion: 'v2',
  endpoint: {
    hostname: 'gateway.us1.storjshare.io',
    protocol: 'https',
    path: '/',
  },
});

The JavaScript SDK v3 uses signature version 4 by default, but the AWS CLI uses signature v2.


I added this to both the S3 and S3Client variations, but it didn’t work. It looks like the JavaScript APIs only support v4, because v2 is deprecated at AWS.

I also downgraded to aws-sdk, the v2 API, which supports signatureVersion: 'v2'. I added that field in and it still did not work. This is super frustrating.

import S3 from 'aws-sdk/clients/s3';

const s3 = new S3({
  credentials: {
    accessKeyId,
    secretAccessKey,
  },
  region: 'us-west-2',
  // endpoint: {
  //   hostname: 'gateway.us1.storjshare.io',
  //   protocol: 'https',
  //   path: '/',
  // },
  endpoint: 'https://gateway.us1.storjshare.io/',
  // s3ForcePathStyle: true,
  signatureVersion: 'v2',
});

Hi @awcchungster,

Perhaps you could try adding some logging to see if requests are being made, and it may shed some light on the problem. I used the example of adding a middleware from this post and was able to get a formatted request object logged to the console: AWS JavaScript SDK v3 - usage, problems, testing - Better Dev

Here’s an example of it running with NodeJS which uploads a test file to a bucket and logs the request: s3_put_object_storj.js · GitHub. I installed the dependencies using npm install @aws-sdk/client-s3.

Hope this helps!

Sean

Thanks for the code. I really appreciate the thoroughness. I didn’t see anything obvious as to what was wrong. Any thoughts?

Sending request from AWS SDK {"request": {"body": [], "headers": {"amz-sdk-invocation-id": "35e28063-6780-413f-ac18-313332e541de", "amz-sdk-request": "attempt=2; max=3", "authorization": "AWS4-HMAC-SHA256 Credential=ju5kumcdqsj3ljmp4g3mp4t2ffja/20211221/us-west-2/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-disposition;content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-user-agent, Signature=59a5ba6f457dd3d151ede278f816b377cd7d4a435e5468e0421efe3590a363f9", "content-disposition": "inline;filename=\"stitched.png\"", "content-length": "5319132", "content-type": "image/png", "host": "magellantest.gateway.us1.storjshare.io", "user-agent": "aws-sdk-js/3.44.0 os/other lang/js md/rn api/s3/3.44.0", "x-amz-content-sha256": "81e190c99ff552cc9ffcf360b32df453e48af6f64933c79eb1d5a37550da7049", "x-amz-date": "20211221T222228Z", "x-amz-user-agent": "aws-sdk-js/3.44.0"}, "hostname": "magellantest.gateway.us1.storjshare.io", "method": "PUT", "path": "/7702ec60-62ac-11ec-be04-7d227eabec45/stitched.png", "port": undefined, "protocol": "https:", "query": {"x-id": "PutObject"}}}

The request appears correct, glancing at the auth headers. I noticed that the body in the request log shows as []. Perhaps something about the body is causing requests to have an invalid signature. Could you try setting the Body field in the PutObjectCommand params to something like “test” to see if that works?

I wonder if other requests like ListObjectsCommand work for you, or do you still get invalid signature errors for those too?

I appreciate your follow-up going later into the day. This error seems Storj S3 related.

The body is always empty for my requests. While I’m not sure why, I think it has something to do with my file types (images, blobs, etc.).
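One thing worth checking on the React Native side: some HTTP stacks serialize an ArrayBuffer body poorly, which could explain the body showing as [] in the request log. A hedged sketch of passing a Uint8Array instead, assuming the same base64 string RNFS.readFile returns (in React Native itself, Buffer would come from the buffer polyfill package rather than being a global):

```javascript
// Convert a base64 string (as returned by RNFS.readFile) to raw bytes.
// A Uint8Array is a valid S3 Body and tends to serialize more
// predictably than a bare ArrayBuffer in some environments.
function base64ToBytes(base64) {
  const buffer = Buffer.from(base64, 'base64');
  return new Uint8Array(buffer.buffer, buffer.byteOffset, buffer.byteLength);
}

const bytes = base64ToBytes('aGVsbG8='); // base64 for "hello"
console.log(bytes.length); // 5
```

The resulting bytes could then be passed as Body in the PutObjectCommand params in place of the decoded ArrayBuffer.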

Going back to the Storj uplink CLI, when I removed the expiration criterion (--not-after +1h), the upload error went away:

uplink share --readonly=false --register sj://${process.env.STORJ_BUCKET}/${prefix}/ --auth-service=https://auth.us1.storjshare.io

I took the keys generated here, passed them to my same S3 upload code, and voila, all my uploads worked as expected. However, I need the time bound because I need keys to expire.

Could you try replicating my scenario with code to help me debug this, please? The error is likely Storj related, or it will be a very specific S3 config change that an average developer like myself probably won’t find.

Thanks for the quick follow up! Let me see if I can replicate this using the same way you’ve configured credentials.

I tried to replicate your code as closely as possible, although I’m not sure how to replicate the RNFS.readFile() part. Perhaps you could share some more code so that I could?

I tried restricted credentials as well and they seemed to work. I had a random thought that perhaps the clock on your computer or server may have drifted, which could cause these signature problems. Could you try generating restricted credentials with a longer expiry, such as +24h or +48h, so we can rule that out as an issue?

Updated code is here to try and replicate, including uplink share command used: s3_put_object_storj.js · GitHub

After a lot of bug investigation, this turned out to be an issue with my own code. The expiration wasn’t related to this.

Long story short, the Storj NodeJS binding would have been my ideal way to issue credentials. However, it didn’t support S3 at the time, and installing it through yarn/npm was flaky. I had to write my own wrapper around the CLI, and a tiny bug there was causing these issues.
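For anyone writing a similar wrapper, the fragile part is usually parsing the CLI output. A hedged sketch of the parsing half, assuming uplink share --register prints labeled Access Key ID, Secret Key, and Endpoint lines (the exact labels may differ between uplink versions, so verify against your own CLI output):

```javascript
// Parse credentials out of assumed `uplink share --register` output.
// The label names below are an assumption about the CLI's print format;
// check them against your uplink version before relying on this.
function parseShareOutput(stdout) {
  const pick = (label) => {
    const match = stdout.match(new RegExp(`${label}\\s*:\\s*(\\S+)`));
    if (!match) throw new Error(`missing "${label}" in uplink output`);
    return match[1];
  };
  return {
    accessKeyId: pick('Access Key ID'),
    secretAccessKey: pick('Secret Key'),
    endpoint: pick('Endpoint'),
  };
}

// Example with a mocked output string (not real CLI output):
const creds = parseShareOutput(
  'Access Key ID: abc123\nSecret Key: def456\nEndpoint: https://gateway.us1.storjshare.io\n'
);
console.log(creds.accessKeyId); // abc123
```

Failing loudly on a missing label, as pick() does, would have surfaced the kind of silent parsing bug described above much earlier.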

Thank you to the Storj team for helping out. Their responses were quick and helpful.
