With v0.19.0 you can now query the storage node dashboard API. By default it runs on port 14002. You have to add -p 127.0.0.1:14002:14002 to the docker run command. From outside the container you should then be able to call the following API endpoints:
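A minimal sketch of the relevant part of the docker run command (the image tag and the node's other options and mounts are omitted here, so this is illustrative rather than a complete command):

```shell
# Publish the dashboard API only on localhost, so it is not reachable
# from other machines (sketch; other storagenode options omitted):
docker run -d --name storagenode \
  -p 127.0.0.1:14002:14002 \
  storjlabs/storagenode:latest

# The API can then be queried from the host, e.g. (endpoint taken from
# the script later in this thread):
#   curl -s http://127.0.0.1:14002/api/dashboard | jq .
```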
A few words on how to read the output:
The audit and uptime reputation is alpha / (alpha + beta). Alpha should be as high as possible and beta should be close to 0; in that case the result is 1, i.e. 100%. I believe that is the score, but since I haven't seen a storage node with a bad score yet I am not sure about that assumption.
If you are failing audits, alpha will decrease and beta will increase. If you hit 0.6 you will get paused. We have a few bugs in place that we need to fix one by one, so if you got paused for no reason please file a support ticket and we will double-check it.
There are no penalties for failing uptime checks at the moment. We are working on a new service to track uptime. You can read more about that here: Design draft: New way to measure SN uptimes
The highest alpha values you can get are 100 for audits and 20 for uptime.
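The arithmetic above can be checked with a small shell helper (a sketch; the score function name is mine, and the values are just the examples from this thread):

```shell
# Reputation score = alpha / (alpha + beta); 1.0 means a perfect score.
score() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.2f", a / (a + b) }'; }

score 100 0; echo    # alpha at its audit maximum, beta 0 -> 1.00 (100%)
score 60 40; echo    # 0.60 -- the threshold at which a node gets paused
```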
@littleskunk I found the root cause of this issue: port forwarding is not working because this service is listening on 127.0.0.1 inside the container. We should change it to listen on any interface or the local IP.
I don't like that config change. We should solve that via the Dockerfile only.
This problem is caused by docker, and one day we will support running the storage node without docker. In that case the port will be open to everyone. If we solve the issue with the Dockerfile instead of the config file, we can make sure nobody opens the port to the outside world.
I see what you mean. Still, there may be cases where one would want to expose this port on an external interface to pull metrics remotely; access could then be restricted to a specific IP/range via a firewall. Does this port expose any sensitive data/controls?
While we're on it, are there any sensitive items with Storj at all other than the wallet key? I can see people posting their node IDs and wallets here, so I guess those are safe? What's the risk of leaking identity files?
Since I run multiple nodes and you have to query multiple satellites, I wrote a script (quick and dirty, I don't know much about bash programming) to query all of that. You have to provide the port it should query:
@kevink I have written a similar script. However, it's written to run without needing to expose the dashboard port outside of the container, by using docker exec to run wget inside of the container (curl doesn't appear to be available in the container).
```shell
#!/bin/bash
api() {
    docker exec storagenode wget -qO - "http://localhost:14002/api/$1"
}

for sat in $(api dashboard | jq -r '.data.satellites[]'); do
    api "satellite/$sat" | jq '{id: .data.id, audit: .data.audit, uptime: .data.uptime}'
done
```
I just tried it and have to confirm: there is no wget available in the container.
Apart from that, the port is exposed by default anyway (at least if you specify the -p option in the docker run command), and it will always be exposed once the SNOboard is completely implemented, so I wouldn't worry about that.
The other reason why my script requires you to specify a port is that I run multiple nodes, and then each node needs a different port for the API.
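Sketching that multi-node setup: each node publishes its dashboard on a distinct localhost port (e.g. -p 127.0.0.1:14002:14002 for one node and -p 127.0.0.1:14003:14002 for another), and the API URL is built from the port you pass in. The api_url helper name is mine, not from the original script:

```shell
#!/bin/bash
# Hypothetical helper for multiple nodes: build the dashboard API URL for a
# given host port and endpoint (endpoints taken from the script above).
api_url() {
    local port="$1" endpoint="$2"
    echo "http://127.0.0.1:${port}/api/${endpoint}"
}

api_url 14002 dashboard    # http://127.0.0.1:14002/api/dashboard
api_url 14003 dashboard    # second node, published on port 14003
```

Querying a node would then be e.g. curl -s "$(api_url 14002 dashboard)" | jq . on the host, assuming curl is installed there.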
You just said "it's exposed when you expose it".
It's only exposed to all IPs BECAUSE it's running in a docker container for now, which limits that exposure. The advice is also to add the port mapping as -p 127.0.0.1:14002:14002 to limit exposure to localhost. I'm almost certain exposure will be limited to localhost by default when they switch to binaries.
What's your point? The script is designed to run on localhost, not across the network; it connects to localhost.
And exposing the port using the -p parameter in the run command is the way specified by STORJ.