Prometheus Storj-Exporter

Well, on my PC I'm using the short startup configuration with standard ports. On the RPi, I have non-standard ports.

docker run -d --restart=unless-stopped --link=storagenode2 --name=storj-exporter_rpi -p 9652:9651 -e STORJ_HOST_ADDRESS="<RPI_IP_ADDRESS>" -e STORJ_API_PORT="<YOUR_NON_STANDARD_RPI_PORT>" anclrii/storj-exporter:latest

If you have a container named "storagenode2" on your PC, this works as-is. If you don't, change "storagenode2" to the name of your storage node container on the PC, or try removing "--link=storagenode2" from the command.

And you need to open the firewall port for your non-standard port on the >>>RPi<<< itself (important).

And then, in prometheus, you must set port 9652 in the corresponding target.

I have this config in prometheus:
scrape_configs:
  - job_name: storj_kledvina
    honor_timestamps: true
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
    - targets:
      - 10.0.0.10:9651
  - job_name: storj_kledvina_wd1
    honor_timestamps: true
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
    - targets:
      - 10.0.0.10:9652
  - job_name: storj_kledvina_ntb1
    honor_timestamps: true
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
    - targets:
      - 10.0.0.10:9653
  - job_name: storj_kledvina_ntb2
    honor_timestamps: true
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
    - targets:
      - 10.0.0.10:9654
  - job_name: storj_kledvina_azure1
    honor_timestamps: true
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
    - targets:
      - 10.0.0.10:9655

What do you mean by that?
I'm running zabbix, a VPN, storj, and ssh on the same RPi, and all of them use different ports. But I never made any changes regarding these ports on the RPi itself. I only made changes on the router (port forwarding).

OK, if you don't have a firewall on the RPi, then it's fine and you don't need to open the port.
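If you're not sure whether a host firewall is running on the RPi, a quick way to check is below (run on the RPi itself; which tool applies depends on your setup, so this is just a sketch):

```shell
# If ufw is installed, show whether it's active and which ports are open
sudo ufw status

# Otherwise inspect iptables directly; an empty INPUT chain with a default
# policy of ACCEPT means incoming traffic is not filtered on the host
sudo iptables -L INPUT -n
```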

I run this:

docker run -d --restart=unless-stopped --link=storagenode2 --name=storj-exporter2 -p 9652:9651 -e STORJ_HOST_ADDRESS="192.168.0.14" -e STORJ_API_PORT="14003" anclrii/storj-exporter:latest

And I still get the same error :expressionless:

Just run it without -d and --restart to better see errors in the console.

What command did you use to start the node?

Thank you, same story:

"standard_init_linux.go:211: exec user process caused "exec format error""

This is likely due to the ARM architecture on the RPi. I never built an ARM docker image, but I might look into it. For now you could try running the standalone exporter script/service instead of the docker container; see the readme on the github repo for details.
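One quick way to confirm the architecture mismatch is to compare the host's architecture with the one the image was built for. A sketch (the inspect line is guarded so it's skipped on machines without docker):

```shell
# Host architecture: armv7l/aarch64 on an RPi, x86_64 on a typical PC
uname -m

# Architecture the published image was built for
if command -v docker >/dev/null 2>&1; then
  docker image inspect anclrii/storj-exporter:latest --format '{{.Architecture}}'
fi
```

If `uname -m` reports an arm variant but the image reports amd64, you'll get exactly this "exec format error".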

Hmm, now that we have uptime monitoring on the dashboard, could it be used here too? I don't know which variable this dashboard needs.

For those of you who are using this, does the workflow go something like the following:
Run Storj-Exporter in docker
Run Prometheus in docker
Run Grafana in docker with Storj-Exporter-dashboard

Do you need to do any special configuration of Prometheus to link into the exporters?

I’m plodding along through this, but am fairly new to containers and such.

Looks good.
My exporter run command:
docker run -d --restart=unless-stopped --link=storagenode --name=storj-exporter -p 9651:9651 -e STORJ_HOST_ADDRESS="storagenode" anclrii/storj-exporter:latest
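Once the exporter container is up, you can sanity-check that it serves metrics before wiring it into Prometheus (the port here matches the run command above; adjust it for non-standard mappings):

```shell
# Fetch the first few lines of the metrics endpoint; you should see
# storj_* metric names if the exporter can reach the node API
curl -s --max-time 5 http://localhost:9651/metrics | head -n 20
```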

Thanks for the assistance, y'all:


For those who come later, here is how I did my (currently a bit janky, without docker-compose) setup:
storj_datanode01.sh:

#!/usr/bin/env bash
docker run -d --restart always --stop-timeout 300 \
    -p 28967:28967 \
    -p 14001:14002 \
    -p 127.0.0.1:6001:5999 \
    -e WALLET="<address_removed>" \
    -e EMAIL="<email_removed>" \
    -e ADDRESS="<url_removed>:28967" \
    -e STORAGE="1.6TB" \
    --mount type=bind,source="/srv/dev-disk-by-label-DataDrive01/StorjNode01/Identity",destination=/app/identity \
    --mount type=bind,source="/srv/dev-disk-by-label-DataDrive01/StorjNode01/Data",destination=/app/config \
    --name StorjNode-01 storjlabs/storagenode:latest \
    --debug.addr=":5999"
    # --log.level error

storj_datanode02.sh:

#!/usr/bin/env bash
docker run -d --restart always --stop-timeout 300 \
    -p 28968:28967 \
    -p 14002:14002 \
    -p 127.0.0.1:6002:5999 \
    -e WALLET="<address_removed>" \
    -e EMAIL="<email_removed>" \
    -e ADDRESS="<url_removed>:28968" \
    -e STORAGE="1.6TB" \
    --mount type=bind,source="/srv/dev-disk-by-label-DataDrive02/StorjNode02/Identity",destination=/app/identity \
    --mount type=bind,source="/srv/dev-disk-by-label-DataDrive02/StorjNode02/Data",destination=/app/config \
    --name StorjNode-02 storjlabs/storagenode:latest \
    --debug.addr=":5999"
    # --log.level error

storj-exporter.sh:

#!/usr/bin/env bash
# Requires https://github.com/anclrii/Storj-Exporter to be built locally for my Raspberry Pi
docker run -d --restart=unless-stopped --link=StorjNode-01 --name=storj-exporter01 -p 9651:9651 -e STORJ_HOST_ADDRESS=pi-nas.lan -e STORJ_API_PORT=14001 storj-exporter:latest
docker run -d --restart=unless-stopped --link=StorjNode-02 --name=storj-exporter02 -p 9652:9651 -e STORJ_HOST_ADDRESS=pi-nas.lan -e STORJ_API_PORT=14002 storj-exporter:latest

prometheus.yml (Shamelessly taken from Cross91):

# Shamelessly taken from Cross91
# Global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15s. Default is 1 minute.
  evaluation_interval: 15s # Evaluate the rules every 15s. Default is 1 minute.
  #scrape_timeout: is set to global default (10s)

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
    # - alertmanager: 9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - first_rules.yml
  # - second_rules.yml

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    #
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'storjnode01'
    static_configs:
    - targets: ['pi-nas:9651']

  - job_name: 'storjnode02'
    static_configs:
    - targets: ['pi-nas:9652']

prometheus.sh:

#!/usr/bin/env bash
docker run \
	-p 9100:9090 \
	--name=prometheus \
	-v /home/pi/StorJ/prometheus.yml:/etc/prometheus/prometheus.yml \
	prom/prometheus

grafana.sh:

#!/usr/bin/env bash
docker run -d \
	-p 3100:3000 \
	--name=grafana \
	-e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource,yesoreyeram-boomtable-panel" \
	grafana/grafana

I'm not sure of a nice way to script adding the Storj-Exporter-dashboard *.json files from the grafana.sh script. So, for now, I just copy-pasted each .json file into the dashboard import section and connected grafana to prometheus by adding a data source in grafana. The data source is of type prometheus, set as the default, and points to: http://pi-nas.lan:9100.
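One way to avoid the manual data-source step is Grafana's provisioning mechanism: mount a data-source definition into the container and it's created at startup. A minimal sketch (the file name is illustrative; the URL matches the setup above):

```yaml
# datasource.yml, to be mounted into the grafana container at:
# /etc/grafana/provisioning/datasources/datasource.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://pi-nas.lan:9100
    isDefault: true
```

With that file in place, adding `-v /home/pi/StorJ/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml` to the grafana.sh run command should create the data source automatically.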


Not bad.
Your prometheus.sh doesn't mount any volume or path for the database, so I guess once you recreate/update that container, all data will be lost.
Same thing for grafana.

I have a directory for prometheus database and all grafana files.

prometheus:

sudo docker run -d -p 9090:9090 --restart unless-stopped --user 1000:1000 --name prometheus \
	-v /sharedfolders/config/prometheus.yml:/etc/prometheus/prometheus.yml \
	-v /sharedfolders/prometheus:/prometheus \
	prom/prometheus --storage.tsdb.retention.time=360d --storage.tsdb.retention.size=30GB \
	--config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus

grafana:

docker run -d \
-p 3000:3000 \
--name=grafana \
--restart=unless-stopped \
--user=1000 \
-v /sharedfolders/grafana:/var/lib/grafana \
-e "GF_INSTALL_PLUGINS=grafana-clock-panel,grafana-simple-json-datasource" \
grafana/grafana

This is my adapted dashboard, which also fixes the payouts to distinguish between repair traffic and normal egress. I'm not sure this json imports cleanly; I think it somehow loses all variables when I export it, but maybe someone wants to give it a try: https://pastebin.com/QS51u9Sd
If I/we can work out how to get my dashboard into the format needed to easily import it (like the original dashboard), then I will make a PR or post it on my fork.


Glad you got it working. I recently found that grafana now offers a free cloud account with up to 1 user and 5 dashboards, and it was pretty easy to set it up with this exporter/dashboards: https://grafana.com/signup/starter/connect-account

I'm just working on adding the new metrics for what's been added recently in the storagenode api. I also decided to completely rewrite some bits of the code to hopefully simplify it, so it will take some time to release a new version, but it's coming.


Excellence takes time :slight_smile: Thanks.

Good point on the databases; especially with grafana, I'm fine with losing the history, but I'd have to re-add the graph settings/json files. Definitely want to fix that.

I've incorporated the uptime monitoring (i.e. the onlineScore metric) in my forks of Storj-Exporter and Storj-Exporter-dashboard. I'm just calling it "Online" since, strictly speaking, it's not exactly uptime but the online score as calculated by the satellites. A screenshot of the boom table in my Storj-Exporter-Boom-Table-Alt dashboard is below. I've also changed the axis titles and made some slight modifications to different series in the same commit. Also, for better or worse, I'm using Grafana 7.2.2 on my system, so the required plugin versions are a bit newer than in the baseline anclrii/Storj-Exporter-dashboard repo.

https://github.com/fmoledina/Storj-Exporter
https://github.com/fmoledina/Storj-Exporter-dashboard

Minimal docker-compose and prometheus.yml snippets to get started:

storj-exporter:
    container_name: storj-exporter
    build:
      context: https://github.com/fmoledina/Storj-Exporter.git
    environment:
      - STORJ_HOST_ADDRESS=storagenode # replace with your Docker container hostname or FQDN
      - STORJ_API_PORT=14002 # replace with your API port
    restart: always
scrape_configs:
...
  - job_name: 'storj-exporter'
    static_configs:
      - targets: ['storj1-exporter:9651']
        labels:
          instance: 'storj01'
      - targets: ['storj2-exporter:9651']
        labels:
          instance: 'storj02'
      ...

Let me know if there are any issues getting this up and running on your system.


I released v1.0.1 today. You can use watchtower to update ad hoc:

docker run --rm --name watchtower-run-once -v /var/run/docker.sock:/var/run/docker.sock storjlabs/watchtower storj-exporter --stop-timeout 60s  --run-once

This update includes the new metrics recently added in the storagenode api: payout data, node info, etc.
I also added a new environment variable that allows choosing which collectors to run; it defaults to collecting all:

-e STORJ_COLLECTORS="payout sat"

This mainly allows disabling collection of the detailed satellite metrics (sat), which are the most expensive. It could be useful on smaller systems, but sat data will then be missing from the current grafana dashboards. I might create a new dashboard without sat details.
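For example, a hedged variant of the earlier run command for a small system that keeps payout data but skips the detailed satellite metrics (container and node names as in the earlier examples):

```shell
# Run the exporter with only the payout collector enabled (sat disabled)
docker run -d --restart=unless-stopped --link=storagenode --name=storj-exporter \
    -p 9651:9651 \
    -e STORJ_HOST_ADDRESS="storagenode" \
    -e STORJ_COLLECTORS="payout" \
    anclrii/storj-exporter:latest
```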
