Feedback regarding "Container Registry - Docker" page

I just received a mail shot that points to the "Container Registry - Docker" page.

What's missing from this write-up is any real justification for doing this rather than just using Docker Hub. I'm just starting to use Docker Hub as a public and private repository, so at the moment I'm not aware of any limitations that would justify using Storj; all I can see are the listed limitations of using Storj. In the future I may run into Docker Hub's limitations, but by then it will be tightly integrated into our processes, so the likelihood of us moving will be low.


Docker Hub recently introduced rate limits and raised prices. See e.g. this blog post.

Though, frankly, for this kind of use case I'd personally use a registry hosted close to the CI/CD or production servers. Storj's egress might be expensive here.


As I've just started with Docker, the new pricing is my baseline. The issue is that you can have a Docker account for $5 per month that offers 5,000 image pulls per day.

For any internal use, you can extend this number by using a pull-through cache, as the blog post you link details. For publishing any private content, you are likely to want access control, which Docker seems to handle by providing "unlimited scoped tokens" on their $7 per month plan.
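For reference, standing up such a cache can be a one-liner with the official `registry` image; a sketch, assuming the standard `REGISTRY_PROXY_REMOTEURL` environment override for the registry's `proxy.remoteurl` config option:

```shell
# Run a local pull-through cache that mirrors Docker Hub.
# REGISTRY_PROXY_REMOTEURL is the env-var form of the registry's
# proxy.remoteurl configuration setting.
docker run -d --name registry-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Then point the Docker daemon at it via /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://localhost:5000"] }
```

Pulls that hit the cache don't count against the Docker Hub rate limit once the layers are mirrored locally.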

Storj must have had a reason to publish this write-up; the problem is that the reason just isn't clear. As it stands, it reads more as: this is possible, but just not worth doing.

As for egress costs, very much yes. The first image I'm having to deal with is about 800MB; I would not consider any solution with egress costs if I was expecting 5,000+ downloads a day. The write-up itself hints at a way around this by using a CDN service to get past the SSL cert issues, but if I ever got stuck with such a problem/opportunity I would first look at deploying a hosted, internet-facing, secured pull-through cache.
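To put a rough number on that: a back-of-the-envelope sketch, assuming roughly $0.007/GB egress (the price is an assumption, not a quote; check current Storj pricing) and that every pull downloads the full 800 MB image:

```shell
# Rough egress cost for 5,000 pulls/day of an ~800 MB image.
# egress_usd_per_gb is an assumed price, not a quoted one.
awk 'BEGIN {
  image_gb = 0.8
  pulls_per_day = 5000
  egress_usd_per_gb = 0.007
  daily = image_gb * pulls_per_day * egress_usd_per_gb
  printf "%.2f USD/day, %.2f USD per 30 days\n", daily, daily * 30
}'
```

Even at that assumed rate, heavy public pull traffic dwarfs a flat $5-7/month registry plan.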


Personally, I think the SSL limitation (needing an external CDN) is the bigger one, and even more so the complexity of the push: with Docker Hub you can just push images with docker push, whereas with the described approach you need some complex tricks to upload (layer sharing is also more limited).

Therefore, I don't think the egress cost is the biggest problem, because it's independent of the format: whether you publish tar.gz files or container images, if the content is public and/or heavily downloaded, it can get expensive.

One possible use case: archiving old Docker images from a CI system which are not frequently used but may be required during a support request to reproduce an older environment.
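For that archival case you don't even need the registry protocol; a sketch using docker save plus Storj's uplink CLI (the image name and the sj://ci-archive bucket are made up, and this assumes a configured uplink access grant):

```shell
# Archive a rarely-needed CI image as a plain object on Storj.
# registry.example.com/app:1.2.3 and sj://ci-archive are hypothetical.
docker save registry.example.com/app:1.2.3 | gzip > app-1.2.3.tar.gz
uplink cp app-1.2.3.tar.gz sj://ci-archive/app-1.2.3.tar.gz

# Restore it later to reproduce the old environment:
uplink cp sj://ci-archive/app-1.2.3.tar.gz ./app-1.2.3.tar.gz
gunzip -c app-1.2.3.tar.gz | docker load
```

Infrequent restores keep the egress bill negligible, which is exactly where Storj's pricing works in your favour.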


BTW, as far as I remember, Docker Hub counts pulls based on requests for the manifest JSON, not the blobs. That means that if you use Kubernetes with "imagePullPolicy: Always" but your image hasn't changed (and is cached on the nodes), you may exhaust your Docker Hub limit, whereas with Storj you would pay only minimal egress for the manifest downloads.
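That scenario is easy to hit; a minimal pod spec sketch (the names are made up) where the kubelet re-checks the manifest on every pod start even though the layer blobs stay cached on the node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.2.3   # hypothetical image
      # "Always" makes the kubelet re-resolve the image manifest on
      # each pod start; unchanged layers come from the node cache, so
      # only the small manifest JSON is fetched from the registry.
      imagePullPolicy: Always
```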


Just one more clarification: despite all of the above, I totally agree with the high-level conclusion of the OP: the published approach is very specific, and its usefulness depends on the actual use case. This is not an out-of-the-box replacement for Docker Hub (or any other Docker registry).