Unable to add nodes to the multinode dashboard

I am trying to add my nodes to the multinode dashboard, but there seems to be some kind of configuration problem, and I don’t know exactly where it is.

If I run the command given here (changing the container name to mine)
docker exec -it N001 /app/bin/storagenode info --identity-dir identity --config-dir config
it seems to read the information from the config.yaml file, which is not the configuration the node is actually running with, since I pass the parameters through Docker Compose.

Maybe this is why some other containers give an error like the one in the attached screenshot.

Even though the query returns the API key and the correct data (matching what is in the Docker Compose file), when I try to add the nodes to the dashboard they are not added, and no error message is shown.

I have other nodes already added which are also running in Docker under Unraid, also via Docker Compose, with a very similar configuration.

Can the problem be in the config.yaml? If I have all the basic configuration in my docker-compose file, is it safe to delete the config.yaml? The databases were regenerated not long ago in case that would fix the problem, but it did not.

Hello @asturking,
Welcome back!

This command should not throw the database-related error unless you changed something in the config.yaml file. By default the storage.path parameter should not exist in config.yaml for docker nodes; in that case it points to the default location, config.

This usually means that the address doesn’t match the API key (e.g. you provided a wrong port, or the address of another node). Please make sure that you are adding the node’s address and port (e.g. 28967), not the dashboard port (e.g. 14002).


storage.path is not modified in the config.yaml, only in the docker-compose file.
This is the configuration I already have, the same for all the Docker services. Not all containers have the same content in their config.yaml, though.

x-default-node-env: &default_node_env
  WALLET: "0x00000000000000000000"
  EMAIL: "mail@mail.com"
  STORAGE: "7.9TB"

x-default-command: &default_command
  - --storage2.database-dir=/app/dbs
  - --pieces.write-prealloc-size=2.4MiB
  - --filestore.write-buffer-size=2.4MiB
  - --storage2.piece-scan-on-startup=false
  - --pieces.enable-lazy-filewalker=false
  - --storage2.monitor.minimum-disk-space=1Gb
  - --storage2.trust.exclusions=1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE@saltlake.tardigrade.io:7777

x-default-logging: &default_logging
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"

services:
  N001:
    image: storjlabs/storagenode:latest
    container_name: N001
    restart: unless-stopped
    ports:
      - 28967:28967/tcp
      - 28967:28967/udp
      - 14002:14002/tcp
    volumes:
      - /mnt/disk001/storj/001/identity:/app/identity
      - /mnt/disk001/storj/001/config:/app/config
      - /dbs/001:/app/dbs
    sysctls:
      - net.ipv4.tcp_fastopen=3
    logging: *default_logging
    command: *default_command
    environment:
      <<: *default_node_env
      ADDRESS: "xxxx:28967"

I checked and double-checked that the URL is the one where the daemon connects to the network, not the dashboard one, and I still get nothing.

I think the problem is that the command only takes into account the values in the config.yaml file, not the options the node is actually running with.

I think I’m going to have to take some time to edit all the config.yaml files and write the database path setting in them.

Probably some options, like storage2.piece-scan-on-startup, are no longer useful at this point; I have been keeping them for a long time.

Then you need to provide all these options to the /app/bin/storagenode info --identity-dir identity --config-dir config command as well.

You are correct, so you need to provide all these options too. Maybe adding only --storage2.database-dir=/app/dbs would be enough.
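Concretely, that would mean appending the same override to the info command (a sketch, reusing the container name and database path from the compose file above), so it looks at the databases the node actually uses:

```shell
# Sketch: the same info command, but with the database-dir override
# that the node itself receives via docker-compose appended, so it
# reads the databases from /app/dbs instead of the config directory.
docker exec -it N001 /app/bin/storagenode info \
  --identity-dir identity \
  --config-dir config \
  --storage2.database-dir=/app/dbs
```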

Solved

I finally had to edit all the config.yaml files to add the line

storage2.database-dir: /app/dbs

I find it a bit odd, because it’s the value the system is already using; the only thing I do is move the databases to another SSD disk to avoid unnecessary IOPS.

By the way, I have updated the

contact.external-address

value on all nodes so that the information can be exported easily.

I tried to export the data in JSON format to import it directly, as indicated here, but it didn’t work correctly, so I will have to add the nodes by hand one by one, because the multinode dashboard and the storagenodes are on different computers.
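For reference, the import file is a JSON array of node entries, one per storagenode. The field names below are an assumption based on what storagenode info prints, so verify them against your own output before importing:

```json
[
  {
    "name": "N001",
    "id": "<node ID>",
    "publicAddress": "xxxx:28967",
    "apiSecret": "<API key from storagenode info>"
  }
]
```

The file would then be fed to the multinode CLI with something along the lines of multinode add nodes.json (again, check the exact subcommand against your multinode binary).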

By the way, the multinode panel becomes very, very slow as you add more storagenodes, even though the dashboard files are stored on an NVMe drive.

And I know it’s not a priority product, but if Storj gave it a facelift and made storagenode integration easier, more people would use this panel.

For example: a button to export the API key from the nodes’ web dashboards, error handling in the multinode dashboard, and the ability to move/search/filter the configured nodes.

And improve the performance.

I don’t know, these are ideas that I can think of.

Nobody uses the dashboards or the multinode dashboard for anything. It’s not a priority. It’s useless. Investing any amount of effort into it will bring a negative return.

If you want to query node data you can use their API. But why would you? Storj manages the nodes. You just need to feed them internet and disk space. There is nothing to watch unless things break. And when they do, you would go check the logs.

I don’t see any usecase where the dashboard is useful.

A combined total of current used space, and graphs showing it change over time, are useful. But as for maintenance: yeah, you can pretty much wait for the “Your Node has Gone Offline” emails to tell you when something went wrong.

How are you using this information?

I can see how much space is used with my OS disk monitor (e.g. zpool list) and adjust quotas when I receive an email that I’m approaching the currently allocated space. What else is actionable in the graphs?

What do you mean? Your ‘zpool list’ number is actionable, but the same number in a graph isn’t? If anything, the graph would show the rate of change, while an instant value would not.

Rates are a quick and easy way to see whether something will become a problem soon (and thus needs action) or can be ignored for a while longer :wink: . Grafana makes that stuff easy.

Edit: Plus you can watch the soothing waves of garbage collection wash over your nodes: :ocean:
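As a toy illustration of the rate argument (all numbers are invented, not from this thread): two used-space samples taken a day apart are enough to estimate when the allocation fills.

```shell
# Toy sketch: estimate days until the allocation fills from two
# used-space samples taken 24 hours apart (all numbers invented).
prev_gb=7100   # used space yesterday
curr_gb=7150   # used space today
alloc_gb=7900  # total allocated space
rate_gb=$((curr_gb - prev_gb))                  # growth per day
days_left=$(( (alloc_gb - curr_gb) / rate_gb )) # days until full
echo "growing ${rate_gb} GB/day, ~${days_left} days until full"
# → growing 50 GB/day, ~15 days until full
```

An instant reading gives you only curr_gb; it is the rate that tells you whether you have two weeks or two years.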

Right, but I already use zpool list for all other purposes too, whereas the graph would only be relevant for the node.

But this information is noise, not actionable.

What rate, or what combination of circumstances, would indicate an impending problem there? I’m seriously asking, not trolling.

I see the following actionable events:

  • Disk is going to get full soon.
  • Node lost connection.

… That’s it.

Those two are the only events that prompt some action from me. What else do you watch?

Lol. I prefer to look out of the window at the treetops to give my eyes a bit of a break :slight_smile:
