How to monitor all nodes in your LAN using Prometheus + Grafana [Linux using Docker]

That's weird, can you check the logs for Prometheus?

docker logs prometheus

thank you. strange:

sudo docker run -p 9090:9090 --restart unless-stopped --name prometheus -v /mnt/my5tb/prometheus.yml:/etc/prometheus/prometheus.yml -v /mnt/my5tb/prometheus:/prometheus prom/prometheus --storage.tsdb.retention.time=360d --storage.tsdb.retention.size=100GB --config.file=/mnt/my5tb/prometheus.yml --storage.tsdb.path=/mnt/my5tb/prometheus

level=error ts=2021-09-22T20:44:18.713Z caller=main.go:360 msg="Error loading config (--config.file=/mnt/my5tb/prometheus.yml)" err="open /mnt/my5tb/prometheus.yml: no such file or directory"

ls -al /mnt/my5tb/
total 16
drwxr-xr-x 3 root root 4096 Sep 22 22:03 .
drwxr-xr-x 5 root root 4096 Sep 17 10:37 ..
drwxrwxrwx 2 root root 4096 Sep 22 21:35 prometheus
-rw-r--r-- 1 root root 1252 Sep 22 22:03 prometheus.yml

the file exists and should be readable. where is the mistake?

Did you fill in the config with the example?

thank you! it was the last part of the run command, which was not correct:
–config.file=/mnt/my5tb/prometheus.yml --storage.tsdb.path=/mnt/my5tb/prometheus

should be --config.file
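
For reference, a full run command along those lines might look like this (a sketch that points --config.file and --storage.tsdb.path at the paths where the volumes are mounted inside the container; the /mnt/my5tb host paths from the -v options are not visible inside it):

sudo docker run -p 9090:9090 --restart unless-stopped --name prometheus \
  -v /mnt/my5tb/prometheus.yml:/etc/prometheus/prometheus.yml \
  -v /mnt/my5tb/prometheus:/prometheus \
  prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/prometheus \
  --storage.tsdb.retention.time=360d \
  --storage.tsdb.retention.size=100GB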

Thx, seems to have been a copy/paste issue when posting into the forum.
Meanwhile, Grafana works very well. Love it.


Is that relevant for us?

How are Prometheus and Grafana updated, when used with Docker?

You really shouldn't expose it to the public anyway. Locally it should be fine the way it is. If you need to update it, you will need to run docker pull to update, then rm and start the container again, just as you would for a storagenode.
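
A rough sketch of that update flow for the Prometheus container (the same idea applies to Grafana and the exporter; the container name and image here are the ones used earlier in this thread):

docker pull prom/prometheus
docker stop prometheus
docker rm prometheus
# then re-run the same docker run command as before, with the same -v mounts and flags;
# the config and the collected data live in the mounted volumes, so nothing is lost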


@kevink I also have my first storage node Grafana dashboard, including email alerts, that works without having to run a third-party docker container. However, I am not as skilled as you are when it comes to designing a beautiful Grafana dashboard. How about we join forces and merge your dashboard with mine? I would also like to get the dashboard into our official docs.


Hi, I am trying to configure my panel to show the information for the 3 nodes I have on my server, and I think this is exactly what is failing:

docker run -d --name=storj-exporter-1 -p 9651:9651 -e STORJ_HOST_ADDRESS=192.168.1.100 -e STORJ_API_PORT:14002  anclrii/storj-exporter:latest 
docker run -d --name=storj-exporter-2 -p 9652:9651 -e STORJ_HOST_ADDRESS=192.168.1.100 -e STORJ_API_PORT:14003  anclrii/storj-exporter:latest 
docker run -d --name=storj-exporter-3 -p 9653:9651 -e STORJ_HOST_ADDRESS=192.168.1.100 -e STORJ_API_PORT:14004  anclrii/storj-exporter:latest 

I see that the STORJ_API_PORT variable is defined, but it seems that no matter what value is passed when creating the container, it always falls back to 14002.

Could it be updated to take this value into consideration?


EDIT: Nah, forget it, I don’t know how but I just got it to work without any problem.

I think it should be -e STORJ_API_PORT=14002
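
For example, the second exporter from the commands above would then look like this (a sketch, reusing the same host IP, image, and port mapping):

docker run -d --name=storj-exporter-2 -p 9652:9651 -e STORJ_HOST_ADDRESS=192.168.1.100 -e STORJ_API_PORT=14003 anclrii/storj-exporter:latest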


Hello everyone,

I am also trying to set up this Grafana dashboard. When I try to run the storj-exporters I get the following log screen with this docker command:

docker run -d --name=STORJ-1-Exporter -p 9651:9651 -e STORJ_HOST_ADDRESS=99.222.222.44 -e STORJ_API_PORT:14002 anclrii/storj-exporter:latest

Does anyone know how to proceed?

If the exporter is running you can check its web page at http://<ip/name>:9651.
This should show a lot of text lines; scroll down and you should see lines with information about your node.

Then you have to configure Prometheus to scrape this page.
The config file is 'prometheus.yml'.

Insert something like this:

  - job_name: 'Storj'
    scrape_interval: 30s
    scrape_timeout: 20s
    metrics_path: /
    static_configs:
      - targets: ['<ip/name>:9651']
        labels:
          instance: 'Nodename'
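
If you run several exporters (like the three-node setup earlier in the thread), you can extend the same job with one static_configs entry per exporter, each with its own instance label. A sketch using the ports from those commands:

  - job_name: 'Storj'
    scrape_interval: 30s
    scrape_timeout: 20s
    metrics_path: /
    static_configs:
      - targets: ['<ip/name>:9651']
        labels:
          instance: 'Node1'
      - targets: ['<ip/name>:9652']
        labels:
          instance: 'Node2'
      - targets: ['<ip/name>:9653']
        labels:
          instance: 'Node3'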

Restart the Prometheus service.
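
If Prometheus runs as a Docker container like earlier in this thread (assuming the container is named prometheus), restarting the container is enough to pick up the new config:

docker restart prometheus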

Log in to your Grafana server, go to Configuration → Data sources, and add your Prometheus server there.

Then go to Dashboards → Import; here you have several options: upload a JSON file, or load a dashboard from the repository by ID or URL.
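
If you prefer a file over clicking through the UI, Grafana can also pick up the Prometheus data source from a provisioning file dropped into /etc/grafana/provisioning/datasources/ (a sketch, with host and port adjusted to wherever your Prometheus runs):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://<ip/name>:9090
    isDefault: true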

Thanks for the explanation, but what I actually asked is what I am doing wrong with:

docker run -d --name=STORJ-1-Exporter -p 9651:9651 -e STORJ_HOST_ADDRESS=99.222.222.44 -e STORJ_API_PORT:14002 anclrii/storj-exporter:latest

The page 99.222.222.44:9651 (example IP) is not opening and I am getting this log screen:

The log screen doesn’t show any errors.

Perhaps a firewall issue?

Did you try to access the exporter website from the same computer running the docker container?

If the OS is Linux without a GUI you can use lynx, a text-based browser that works from the CLI.
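
For example, run on the machine that hosts the exporter container (assuming the 9651:9651 port mapping from above):

lynx http://localhost:9651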

When you run (on Linux)
netstat -tulpn
do you see any process listening on port 9651?

It's great and working now:

Now I'm trying to run the Prometheus Docker container, but unfortunately it doesn't run. The mount parameters don't look right, and I'm using unRAID:

docker run -d -p 9090:9090 --restart unless-stopped --name Prometheus -v /mnt/disk1/appdata/prometheus/etc/prometheus.yml:/prometheus.yml -v /mnt/disk1/appdata/prometheus/data:/prometheus prom/prometheus --config.file=/mnt/disk1/appdata/prometheus/etc/prometheus.yml --storage.tsdb.path=/prometheus

Also, I don't know why the config file is bound in twice.

In the config.yml I’m using:

# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label job=<job_name> to any timeseries scraped from this config.
  - job_name: 'STORJ-11'

    # Override the global default and scrape targets from this job every 20 seconds.
    scrape_interval: 20s
    scrape_timeout: 20s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9651']
        labels:
          instance: "STORJ-11"

Sorry, I have no knowledge about unRAID.

When I search Google for 'prometheus docker unraid', the first result is this:
https://unraid.net/blog/prometheus

There are steps there that don't need any docker run command; simply set the IP address and run it.
Maybe try it this way, or look for help on the forum over there?

I progressed further and am now getting the following error message:

Edit:
--user root:root in the Extra Parameters field helped:

Now I'm having the issue that it shows the job as offline, which is not true:

For some reason localhost is not working… (presumably because, inside the Prometheus container, localhost refers to the container itself and not to the host where the exporter is published).

Then I wrote out the local IP and now it's finding data.

I got it to run and also installed the additional plugin, but when the node goes offline it is not shown as offline?

Hey again ;-),

I’m trying to add new columns to the summary table:

Does anyone know how to write the pattern so that it grabs the value from the query?

I have also customized the header values.

Does anyone know how to get the estimated earnings into field 3? Currently it shows the held amount, and that is not that useful:

Thanks and kind regards,