Prometheus Storj-Exporter

Ah - then I don't think I can access the Storj API? How do I confirm that, or make sure it's working?

What am I doing wrong? The exporter is using the node's internal IP, which seems to be correct?

You have to link the container or use the local IP and port of your storagenode…

I believe I am using the local IP with the correct port?

How do I link the container? :smiley:

This is the way to use the IP and port:
docker run -d --restart=unless-stopped --name=storj-exporter -p 9651:9651 -e STORJ_HOST_ADDRESS=192.168.1.101 -e STORJ_API_PORT=14002 anclrii/storj-exporter:latest

And this is the way to link the container:
docker run -d --link=storagenode --name=storj-exporter -p 9651:9651 -e STORJ_HOST_ADDRESS=storagenode anclrii/storj-exporter:latest
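
If you are not sure whether the exporter can actually reach the node's API, you can test that from the host first. A quick check of my own, assuming the dashboard is published on 192.168.1.101:14002 as in the first example (the /api/sno path is what current storagenode versions serve the dashboard data from; if in doubt, just open the address in a browser):

# Should return the node's dashboard data as JSON:
curl http://192.168.1.101:14002/api/sno
# Once the exporter is running, its metrics page should answer here:
curl http://192.168.1.101:9651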

I'm sorry if I seem stupid: I just don't understand. I can see all the data at http://127.0.0.1:14004/, which is my 4TB Storj node.

Running this command:
docker run -d --link=storagenode4TB --name=storj-exporter4TB -p 9651:9651 -e STORJ_HOST_ADDRESS=127.0.0.1:14004 anclrii/storj-exporter:latest

This does yield a page:

But the data is just not there. What is wrong?

It should be:
docker run -d --link=storagenode4TB --name=storj-exporter4TB -p 9651:9651 -e STORJ_HOST_ADDRESS=storagenode4TB anclrii/storj-exporter:latest
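
If it still does not come up, the exporter's own output usually tells you why. Two quick checks (my suggestion, using the container names from above):

# Look for connection errors or timeouts from the exporter:
docker logs storj-exporter4TB
# And check whether it serves anything at all on its own port:
curl http://localhost:9651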

Thanks for the effort! I have no idea why, but it does not work:

It states the Storj port is 14002, but that 4TB node is running on port 14004.

When I run this exact command:
docker run -d --link=storagenode4TB --name=storj-exporter4TB -p 9651:9651 -e STORJ_HOST_ADDRESS=storagenode4TB anclrii/storj-exporter:latest

I get a page that can't load, so it's 100% not working.

I tried adding the port:
docker run -d --link=storagenode4TB --name=storj-exporter4TB -p 9651:9651 -e STORJ_HOST_ADDRESS=storagenode4TB:14002 anclrii/storj-exporter:latest

That does return a page, but the data is not correct, and Docker still seems to believe my node's port is 14002.

It's weird. How do I know for sure I have opened up access to the Storj API?

Then try it as written here, with the IP and port, but for your storagenode :slight_smile:

I am not sure, and I don't want to experiment with my node, but I think the storagenode must run with its dashboard port open to all networks. You don't open that port in your router anyway, so this way it will only be available in your home network. This line does that:

-p 14002:14002

The other thing is to make sure you don't change the second port in that command, only the first.
After that you need to use the internal IP of that machine, with the host port of your choice.
Then try the examples from the post above. Debug it, look into it, change the port here and there, and you will have success.
I hope you understand the main problem: you need to expose that 1400x port to all networks, and then try again with the examples, adapted for your setup.
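
To make that concrete for the 4TB node, here is a sketch of both variants. The LAN IP 192.168.1.101 is a placeholder, and the node is assumed to be published with -p 14004:14002, i.e. host port 14004 mapped to the node's internal dashboard port 14002:

# Variant 1: exporter talks to the node over the machine's LAN IP and the HOST port.
docker run -d --restart=unless-stopped --name=storj-exporter4TB -p 9651:9651 -e STORJ_HOST_ADDRESS=192.168.1.101 -e STORJ_API_PORT=14004 anclrii/storj-exporter:latest
# Variant 2: exporter is linked to the node container and talks to the INTERNAL port 14002.
docker run -d --restart=unless-stopped --name=storj-exporter4TB --link=storagenode4TB -p 9651:9651 -e STORJ_HOST_ADDRESS=storagenode4TB -e STORJ_API_PORT=14002 anclrii/storj-exporter:latest

If another exporter already occupies host port 9651, map a different one (for example -p 9654:9651) and use that port in Prometheus.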

Thanks - it's up and running!

Quick question - can I rename the nodes in Grafana? So that instead of xxx.xxx.xx.x:14002 (the IP of the node), I can show storagenode4TB?


In your scrape config you can configure a label:

scrape_configs:
  - job_name: 'storagenode'
    static_configs:
      - targets: ['storj-exporter-2.1:9651']
        labels:
          instance: 'storagenode 2.1'
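
If you run more than one exporter, the same pattern scales; just give each target its own instance label. A sketch, with the second target's address and label as placeholders for your own setup:

scrape_configs:
  - job_name: 'storagenode'
    static_configs:
      - targets: ['storj-exporter-2.1:9651']
        labels:
          instance: 'storagenode 2.1'
      - targets: ['192.168.1.101:9651']
        labels:
          instance: 'storagenode4TB'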

Can I use the exporter on a multi-node machine with about 100 nodes? Would that be too much load for the server?

I love this dashboard, but I'm missing one thing and haven't figured out how to do it…

How can I show the suspension score as a percentage, not just the checkmark?
Maybe @greener or @kevink or someone else can help me out here?
I don't understand Grafana's JSON. I've got the right query, I think, but I can't add the %-value instead of the checkmark.

Okay, I just figured it out.

I'm having problems with the exporter. All seems OK, but the logs say "Read timed out".

On the second node everything is working fine.

Type this in your browser: http://YOUR_IP:9651. What is the result?
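
You can run the same check from a shell and see how long the exporter takes to answer (my addition; replace YOUR_IP with the machine's address):

# Time one full scrape of the exporter; Prometheus gives up if this regularly
# exceeds its scrape timeout (10s by default).
curl -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' http://YOUR_IP:9651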

Hello. I have the same problem. When I go to this address, I need to wait approx. 1-2 minutes to get results.

Are the DBs on SSD? SMR or CMR?

It's "normal" when the DBs are NOT on an SSD. And SMR drives are also really slow…

The DBs are on a CMR HDD.

Then think about moving the DBs to an SSD. That's what solved it for me.
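
For reference, the usual way to do that with a Docker node is to mount an SSD path into the container and point the node's database directory at it. This is only a sketch: the paths are placeholders, and you should check the current Storj documentation for the exact steps (stop the node and copy the existing *.db files to the SSD path first):

# /mnt/ssd/storj-dbs is a placeholder path on the SSD. Recreate the node with an
# extra mount for it, keeping the rest of your usual run command unchanged:
docker run -d --restart=unless-stopped --name=storagenode4TB ...your usual ports, identity/storage mounts and -e settings... --mount type=bind,source=/mnt/ssd/storj-dbs,destination=/app/dbs storjlabs/storagenode:latest
# then point the node at the new location by setting this in its config.yaml:
#   storage2.database-dir: /app/dbs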