Storage node dashboard API (v0.19.0)

With v0.19.0 you can now query the storage node dashboard API. By default it is running on port 14002. You have to add a -p port mapping to the docker run command. From outside the container you should then be able to call the following API endpoints:
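A minimal sketch of what that can look like (the image name, container name and missing flags are placeholders on my side; only the -p mapping and the curl call are the point here):

```shell
# Publish the dashboard port, then query it from the host.
# "storjlabs/storagenode:latest" and --name are assumptions; a real
# run command has more flags (identity, mounts, ...), omitted here.
docker run -d --name storagenode \
  -p 14002:14002 \
  storjlabs/storagenode:latest

# From outside the container:
curl -s | jq .data.satellites
```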

curl -s | jq .data.satellites

curl -s<satellite-id> | jq .data.audit
{
  "totalCount": 24515,
  "successCount": 24271,
  "alpha": 19.99999999999995,
  "beta": 3.326213972280436e-56,
  "score": 1
}

curl -s<satellite-id> | jq .data.uptime
{
  "totalCount": 158225,
  "successCount": 152157,
  "alpha": 99.9999999999992,
  "beta": 2.148811239339574e-18,
  "score": 1
}
  1. You might want to run the second and the third call for each satellite. In this example I did it only for one satellite.
  2. Instead of curl and jq you can also use a web browser and search for audit and uptime.
  3. Keep in mind this is only the dashboard API and not the storage node dashboard itself.
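As a quick side check (my own arithmetic, not an official metric of the API), the raw success rate can be derived from the counters in the sample audit output above:

```shell
# Raw audit success rate = successCount / totalCount,
# using the sample values above (24271 of 24515 audits passed).
awk 'BEGIN { printf "%.4f\n", 24271 / 24515 }'
# prints 0.9900
```

Note this raw ratio is not the same thing as the alpha/beta score discussed below.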

A few words on how to read the output:
The audit and uptime reputation is alpha / (alpha + beta). Alpha should be as high as possible and beta should be close to 0. In that case the result is 1 aka 100%. I believe that is the score but as long as I haven’t seen a storage node with a bad score I am not sure about that assumption.
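To make the formula concrete, here is the same calculation with the sample audit values from above plugged in (a side calculation of mine, not part of the API):

```shell
# score = alpha / (alpha + beta). With alpha ~ 20 and beta ~ 3.3e-56,
# beta vanishes in double precision, so the ratio is exactly 1.
awk 'BEGIN { alpha = 19.99999999999995; beta = 3.326213972280436e-56;
             printf "%.4f\n", alpha / (alpha + beta) }'
# prints 1.0000
```

That matches the "score": 1 field in the sample output.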

If you are failing audits alpha will decrease and beta will increase. If you hit 0.6 you will get paused. We have a few bugs in place that we need to fix one by one. If you got paused for no reason please file a support ticket and we will double check that.

There are no penalties for failing uptime checks at the moment. We are working on a new service to track uptime. You can read more about that here: Design draft: New way to measure SN uptimes

Highest alpha values you can get are 20 for audits and 100 for uptime.

I tried to request some information but did not get any data:

Inside container

Outside container

output is empty


I am seeing the same thing as @Odmin: no data returned when querying the API, but netstat shows that the container is listening on that port.


I am having the same issue here, no data is returned

guerrerh@storj:~$ curl -s -v | jq .data.satellites
* Connected to ( port 14002 (#0)
> GET /api/dashboard HTTP/1.1
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host left intact


@littleskunk I found the root cause of this issue: port forwarding is not working because this service is listening on inside the container. We should change it to listen on any address or on the local IP.

inside container:

I installed curl and jq inside the container and tested it again:

Could you please tell me which parameter I can put into config.yaml to change the address this service listens on?

I set this in config.yaml

# server address of the api gateway and frontend app
console.address: :14002

restarted container and it works now.

It doesn’t look like it’s directly compatible with Prometheus as is. It would be really nice if it were.


I don’t like that config change. We should solve that via the docker file only.

This problem is caused by docker and one day we will support running the storage node without docker. In that case the port will be open for everyone. If we solve the issue with the docker file instead of the config file we can make sure nobody is opening the port to the outside world.
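One docker-level way to do that (my own sketch based on the standard -p host-ip:host-port:container-port syntax; the image name and missing flags are placeholders) is to bind the published port to the host's loopback interface, so it stays reachable on the host but closed to other machines:

```shell
# Bind the dashboard port to the host's loopback interface only.
# Reachable as on the host, not from outside.
# "storjlabs/storagenode:latest" and --name are assumptions here.
docker run -d --name storagenode \
  -p \
  storjlabs/storagenode:latest
```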

Now I almost forgot to say thank you for the feedback :smiley:


I see what you mean. Still, there may be cases where one would want to expose this port on an external interface to pull metrics remotely. Access could then be restricted to a specific IP/range via the firewall. Does this port expose any sensitive data/controls?

While we’re on it, are there any sensitive items with Storj at all other than the wallet key? I can see people posting their node IDs and wallets here, so I guess those are safe? What’s the risk of leaking identity files?

Since I run multiple nodes and you have to query multiple satellites, I wrote a script (quick and dirty, I don’t know much about bash programming) to query all that. You have to provide the port it should query:

./ 14002

The script:

#!/usr/bin/env bash
# Query audit and uptime stats for every satellite known to the node.
# Usage: pass the dashboard port as the first argument.
readarray -t sats < <( curl -s "$1/api/dashboard" | jq .data.satellites | jq -r '.[]' )
declare -p sats   # debug: show the parsed satellite IDs
for n in "${sats[@]}"; do
        echo "$n"
        echo "  audit:"
        curl -s "$1/api/satellite/$n" | jq .data.audit
        echo "  uptime:"
        curl -s "$1/api/satellite/$n" | jq .data.uptime
        echo "----------------------"
done

Port should be open with v0.19.5. Please try again.

Confirmed: after updating to v0.19.5 the port is opened by default and there is no need to add specific settings to config.yaml.


@kevink I have written a similar script. However, it’s written to run without needing to expose the dashboard port outside of the container, by using docker exec to run wget inside of the container (curl doesn’t appear to be available in the container).


api() {
  docker exec storagenode wget -qO - "http://localhost:14002/api/$1"
}

for sat in $(api dashboard | jq -r '.data.satellites[]'); do
  api "satellite/$sat" | jq '{id:, audit: .data.audit, uptime: .data.uptime}'
done

@cdhowie the images are created from Alpine Linux so they are very minimal.

They don’t have wget, curl nor bash, for example.

I just tried it and have to confirm: there is no wget available in the container.
Apart from that, the port is exposed by default anyway (at least if you specify the -p option in the docker run command) and it will always be exposed once the SNOboard is completely implemented, so I wouldn’t worry about that.

The other reason why my script requires specifying a port is that I run multiple nodes, and then you have to use different ports for the API.

You just said “it’s exposed when you expose it” :slight_smile:

It’s only exposed to all IPs BECAUSE it’s running in a docker container for now, which limits that exposure. The advice is also to add the port mapping as -p to limit exposure to localhost. I’m almost certain exposure will be limited to localhost by default when they switch to binaries.

What’s your point? The script is designed to run on localhost, not across the network; it connects to localhost.
And exposing the port via the -p parameter in the run command is the way specified by Storj.

I thought you were talking about it being exposed beyond local host, but I now realize you never were. Please ignore my previous statements.

That’s very odd. docker exec storagenode wget definitely works for me, and I haven’t customized the docker image at all…
