You may specify your Gateway-ST credentials in its config (or just copy the config from the first instance); these S3 credentials would be used to access your Gateway-ST. You may also generate your own, effectively random, credentials and specify them as the S3 credentials in the config of the next instance.
However, I would assume that you also want to use the same internal hostname to access any of them to distribute the load; in that case using the same credentials would be more convenient.
Parameters are:
# Minio Access Key to use
minio.access-key: jrfjfitigt
# Minio Secret Key to use
minio.secret-key: jffjfjfjf
Since you mention “Deploy”, I would assume that you would like to use a containerized gateway. The image is storjlabs/gateway and you may deploy it in a scalable manner with docker-compose, Docker Swarm, Kubernetes, or Nomad.
The only thing you need to do is bind-mount the same config file into the container. The exact method depends on the underlying scheduler.
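For example, a minimal docker-compose sketch (the in-container config path and the run subcommand are assumptions based on a typical Gateway-ST layout, so verify them against the image documentation):

services:
  gateway:
    image: storjlabs/gateway
    command: run
    ports:
      - "7777:7777"
    volumes:
      # bind-mount the shared config file (read-only); the target path is an assumption
      - ./config.yaml:/root/.local/share/storj/gateway/config.yaml:ro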
Yes, add STORJ_ as a prefix, convert everything to uppercase, and replace all dots and dashes with underscores, i.e. STORJ_MINIO_ACCESS_KEY for minio.access-key, etc.
You may also pass these parameters as command-line arguments after the image name, like --minio.access-key=.
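As a purely illustrative sketch reusing the placeholder values from above, both styles would look like this in docker-compose (pick one, you don’t need both):

services:
  gateway:
    image: storjlabs/gateway
    environment:
      # minio.access-key -> STORJ_MINIO_ACCESS_KEY, minio.secret-key -> STORJ_MINIO_SECRET_KEY
      STORJ_MINIO_ACCESS_KEY: jrfjfitigt
      STORJ_MINIO_SECRET_KEY: jffjfjfjf
    # or the same parameters as flags after the image name:
    command: run --minio.access-key=jrfjfitigt --minio.secret-key=jffjfjfjf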
Because by default it supports only HTTP.
To support HTTPS you need to set up a reverse proxy with SSL support in front of it.
Also, Gateway-ST listens on 127.0.0.1 by default, so if you want to reach it via the network IP, you need to change the config (or provide the command-line option --server.address) to listen on the network interface. If you do not know the local IP of this instance, you may use 0.0.0.0:7777 as the server address; it will bind to all IPs available to the instance.
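In the config that would be something like the following (the environment-variable and flag forms follow the mapping rule described above):

# listen on all available interfaces instead of the loopback only
server.address: 0.0.0.0:7777
# equivalent: STORJ_SERVER_ADDRESS=0.0.0.0:7777 or --server.address=0.0.0.0:7777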
I understand that, but that is not the question. Let me clarify.
To ensure availability, Fly provides mechanisms to perform healthchecks on the services. For an HTTP service like the gateway, you can provide a healthcheck path which must return 200 OK, or the service is assumed to be broken.
The question is, does the gateway have a healthcheck path?
We can’t use / because it returns 403:
> curl -v localhost:7777
* Host localhost:7777 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:7777...
* Connected to localhost (::1) port 7777
* using HTTP/1.x
> GET / HTTP/1.1
> Host: localhost:7777
> User-Agent: curl/8.11.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 403 Forbidden
< Accept-Ranges: bytes
< Content-Length: 226
< Content-Security-Policy: block-all-mixed-content
< Content-Type: application/xml
< Server: Storj
< Vary: Origin
< X-Amz-Request-Id: 18169659BD048134
< X-Xss-Protection: 1; mode=block
< Date: Wed, 01 Jan 2025 14:00:18 GMT
<
<?xml version="1.0" encoding="UTF-8"?>
* Connection #0 to host localhost left intact
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>18169659BD048134</RequestId><HostId>88309807-75f2-4eb1-83ff-9b5fe248aecf</HostId></Error>
Yes, I understand. You need a healthcheck for schedulers like k8s or Nomad.
I would suggest using an HTTP healthcheck against the root /. It should return 200 (or maybe 401, if you didn’t resolve the permissions issue).
But not 404/403 or 502/503.
However, since your response is not 404/502/503, you may consider it a success?
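If the scheduler only treats 2xx responses as healthy (Kubernetes httpGet probes do, for example), one workaround is an exec probe that accepts 403 as well. A rough sketch, assuming curl and a shell are available inside the container (otherwise run the check from a sidecar):

livenessProbe:
  exec:
    command:
      - sh
      - -c
      # treat 200 and 403 as healthy, since an unauthenticated request to / returns 403
      - "curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:7777/ | grep -qE '^(200|403)$'"
  initialDelaySeconds: 10
  periodSeconds: 30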
Or you may run it with the --website option; then you will have public access without any authentication.
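In docker-compose that could be as simple as the following (double-check the flag name against gateway --help):

services:
  gateway:
    image: storjlabs/gateway
    # serve content publicly, without S3 authentication
    command: run --website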
The alternative is to provide an access token for your monitoring system (which is not OK, I would assume…).