Errors in Storj services

Hi, I’m experiencing multiple errors today in Storj.

  1. I couldn’t reach an existing file.
  2. When I logged into my Storj account, I couldn’t see any “folders” under my bucket.

Please check whether this is a global issue (or at least something that multiple people are experiencing today), and if so, please fix it fast; I use you in my production…

Are you sure you are using the correct encryption phrase? To see your objects, you must use exactly the same encryption phrase that was used during upload. If you use a different encryption phrase, your objects cannot be decrypted and your buckets will look empty.


Yes, 100%. It’s all in the code, so it never changes (this isn’t my first rodeo :)). This is not an encryption-phrase issue.

Just checked; no visual bugs have been discovered. I can see “folders” and their content if the encryption phrase is correct; otherwise it shows “objects locked” instead.
Could you please try to remove cookies for the satellite UI and try again?

I would also recommend using tools like uplink, rclone, FileZilla or Cyberduck to access your objects instead of the browser.
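For example, rclone can talk to Storj natively via an access grant. A minimal config sketch; the remote name and grant value are placeholders, and the option names follow rclone’s storj backend:

```
# ~/.config/rclone/rclone.conf
[storj-remote]
type = storj
access_grant = YOUR_ACCESS_GRANT
```

After that, something like `rclone ls storj-remote:my-bucket` should list the objects that decrypt with the grant’s passphrase.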

I use S3 to access the files, and since the first message, I haven’t experienced another issue.

Did you happen to figure out what the issue was?

Unfortunately no :frowning:
This happened in production, so we re-uploaded the file (we had to make it work within a maximum of 5 minutes).

I believe you still have that file there, but encrypted with a different encryption phrase.
It’s easy to check:

  1. Create an access grant using your current encryption phrase.
  2. Set up the uplink CLI.
  3. Count the objects encrypted with the current encryption phrase:
uplink ls --recursive sj://my-bucket | wc -l
  4. Count the total number of objects:
uplink ls --recursive --encrypted sj://my-bucket | wc -l

If the numbers differ, there are files encrypted with a different encryption phrase.
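The check boils down to comparing the line counts of the two listings. A minimal runnable sketch, with `printf` and made-up object names standing in for the real uplink output:

```shell
# Illustrates the count comparison with mock listings; in real use, replace
# the printf pipelines with:
#   uplink ls --recursive sj://my-bucket
#   uplink ls --recursive --encrypted sj://my-bucket
decrypted=$(printf 'a.txt\nb.txt\n' | wc -l)   # objects your passphrase decrypts
total=$(printf 'k1\nk2\nk3\n' | wc -l)         # every object, decryptable or not
if [ "$total" -ne "$decrypted" ]; then
  echo "$((total - decrypted)) object(s) use a different encryption phrase"
fi
```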

Hi, yesterday it happened again, this time on 2 different files. This is starting to be alarming…
Is there any rate limiting that isn’t based on the full file name? (Any rate limiting on your side must be based only on the file name, since it’s a perfectly normal situation for a server to send you multiple requests for different files at the same time.)

The rate limiting happens on the server side.
The filename is not known to the backend; it’s encrypted too, including all prefixes. It only becomes available on the client side, once you provide an encryption phrase to decrypt it, so filenames cannot be used for rate limiting.

But there could still be this bug:


Well, since you are an S3 storage provider, and there are services that use you as their S3 solution, it makes no sense to rate-limit reads of different files. This is not abusing the service; this is using the service.
We access your S3 backend via the AWS S3 SDK, so this means that on your server you can easily see the content of the request (since it’s no longer under HTTPS encryption).

If you don’t want to be used in production environments and only want to serve personal use cases (like personal backup) with a lower request rate, that’s a different story, but you need to be clear about it.

So, my question is: which one are you? We included you in our production as the S3 backend, and as our usage grows (we have more customers), we see more and more issues getting the fragments from you. Two weeks ago it was 1 read error; 2 days ago it was already 2 errors. Each error means higher latency for our customers and degraded quality of OUR service.
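While the root cause is investigated, a bounded retry around the S3 read can keep a transient error from turning directly into customer-facing latency. A hedged sketch: `fetch_fragment` is a placeholder for the real S3 GET (for example via the AWS SDK against Storj’s S3-compatible gateway), stubbed here to always fail so the retry path is visible:

```shell
# Placeholder for the real S3 GET; always fails in this illustration.
fetch_fragment() { false; }

max_attempts=3
attempt=0
until fetch_fragment; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "giving up after $attempt attempts" >&2
    break
  fi
  sleep "$attempt"   # linear backoff; tune for your latency budget
done
```

In real use you would also log the error message and request ID on each failed attempt, which is exactly the information support asks for below.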

What errors are you seeing? The limits are documented here: Understanding Storj Usage Limits - Storj Docs. If you are hitting rate limits, you can have them increased: Requesting and Understanding Usage Limit Increases - Storj Docs.


If you see errors again, please include the error message, request ID, etc., or any other information you have. You can always check to see if we have any service disruptions as well.