FYI, “slow down” error codes are different on Storj (429) vs AWS S3 (503).
Sounds like a bug in AWS S3.
I agree, this seems like a strange status for rate limiting. Even the MDN page 503 Service Unavailable - HTTP | MDN mentions that a 429 is more appropriate when a client is being rate limited.
Have you encountered any clients having problems with this? We do aim to be as S3-compatible as possible, but I’m not convinced we should change this.
HashBackup uses Amazon’s Boto 2 library, which has its own retry logic; since HashBackup does exponential backoff on top of that, HB limits Boto’s retries to 2.
Boto 2 retries 503 errors itself. From what I can tell, 429 errors aren’t handled at all, and while HashBackup will handle them with backoff and retry, they will be quite visible to users, because this is the kind of error Boto normally handles silently.
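To illustrate the layering, here’s a minimal sketch (not HashBackup’s actual code; the function name, retry cap, and backoff schedule are assumptions):

```python
import time
import boto
from boto.exception import S3ResponseError

# Cap Boto 2's internal retries; it silently retries 5xx responses (503
# included) but, as far as I can tell, never retries a 429.
if not boto.config.has_section('Boto'):
    boto.config.add_section('Boto')
boto.config.set('Boto', 'num_retries', '2')

def put_with_backoff(key, data, max_tries=5):
    """Exponential backoff for 429s, layered on top of Boto's own retries."""
    for attempt in range(max_tries):
        try:
            key.set_contents_from_string(data)
            return
        except S3ResponseError as e:
            if e.status != 429:
                raise          # 5xx and other errors surface after Boto's retries
            time.sleep(2 ** attempt)   # 1, 2, 4, 8... seconds, visible to users
    raise RuntimeError('still rate limited after %d tries' % max_tries)
```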
I’ll admit Boto 2 is no longer supported by Amazon and has been replaced by Boto 3. The issue for HashBackup is that Boto 3 generates its Python interface at run time from a bunch of JSON service definitions it reads out of the Boto 3 package data. That makes it difficult to produce a static build, so I haven’t switched to Boto 3.
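You can see the run-time model loading with a short sketch like this (the data-file path in the comment reflects how botocore currently lays out its package data and may differ by version):

```python
import botocore.session

# Boto 3's client methods (s3.put_object, etc.) aren't written out as Python
# source; botocore synthesizes them at run time from JSON service definitions
# shipped in its package data (e.g. botocore/data/s3/<version>/service-2.json).
session = botocore.session.get_session()
model = session.get_service_model('s3')
print(sorted(model.operation_names)[:5])  # operation names parsed from the JSON

# A static build has to bundle those JSON data files alongside the Python
# code; if they're missing, get_service_model() fails at run time.
```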
Here’s the link to AWS S3 error codes, showing that it does return 503:
This article explains that AWS returns 503s when it is re-partitioning a bucket to provide faster access.
Storj seems to be returning 429 because it doesn’t “like” the client’s request rate. When it failed with a 429, HashBackup had made about 10 small sequential requests: 3 uploads, 3 downloads, 3 deletes, all 1K, then a 4K upload. Seagate Lyve worked fine.
10 small sequential requests doesn’t seem like a reasonable point to start throttling, IMO. It’s not like I was blasting it with 100 concurrent requests.
The rate limits are listed here: Understanding Storj Usage Limits - Storj Docs. Maybe you hit the limit of 1 write per second to the same object name? 10 requests wouldn’t be enough to hit the 100-requests-per-second rate limit.
Thanks, I didn’t realize that. In actual use it’s not an issue; it’s only a problem for the performance test, which uploads and downloads the same filename. I changed the test to add an incrementing suffix and it works now, though Storj is going to see the same load, so I’m not sure how a per-object-name limit is supposed to be an effective load control.
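For reference, the change was roughly this (a sketch, not the actual test code; the bucket and key names are made up, and the host shown assumes Storj’s hosted S3 gateway):

```python
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Hypothetical setup: credentials come from the boto config or environment,
# and the bucket name is invented for this example.
conn = S3Connection(host='gateway.storjshare.io',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket('hb-test-bucket')

for i in range(10):
    # An incrementing suffix sends every write to a unique object name,
    # sidestepping the 1-write-per-second-per-object-name limit even though
    # the gateway still sees the same overall request rate.
    key = bucket.new_key('hbperf-test-%d' % i)
    key.set_contents_from_string('x' * 1024)  # 1K test object
```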