Sorry, unfortunately I can’t create a new topic, but I would like to draw your attention to SNO dashboard performance (docker exec -it storagenode /app/dashboard.sh)
Yes, I agree. I only brought it up because our topic is “Storage node performance improvement ideas”, and it’s not an issue.
I believe the new SNO dashboard that will be released soon will be much better.
I use netdata; it’s a simple and powerful monitoring and troubleshooting tool.
You can read more about it here.
Installation is very simple: bash <(curl -Ss https://my-netdata.io/kickstart.sh)
After installation is complete, you will have everything out of the box; no additional configuration is needed.
The only parameter I changed in /etc/netdata/netdata.conf is history = 86400.
Keeping one day of history costs about 522 MB of memory. If you are on an RPi or another system with 1 GB of RAM, please do not change this parameter, and after your investigation please stop the netdata service to leave more RAM for the storagenode.
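For reference, this setting lives in the `[global]` section of netdata.conf (section name per netdata’s default config layout; the exact memory cost depends on how many metrics your system collects):

```
# /etc/netdata/netdata.conf
[global]
    # keep 86400 seconds (one day) of per-second history in RAM
    history = 86400
```

Restart the netdata service after changing it for the new history size to take effect.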
I’m seeing the same impact. In light of these learnings, I would love to have an option to update the dashboard only once a minute or so. --update-interval 60 Something like that?
Thanks for the heads up; I know that our issue tracker isn’t open, but having the link here helps to easily identify that the problem reported in this thread is already tracked.
I’ve noticed my nodes seem to run OK for a while, but after a few hours or days memory consumption grows, and swap usage and disk wait times increase (seemingly due to the increased swap usage). Originally I put this down to an overloaded system, but I now have a strong suspicion this is due to the dashboard script: when I exit it, it continues to run and consume resources.

I noticed this when I started to spot multiple dashboard processes running in the docker instances even though I had terminated them and was no longer running the dashboard. Perhaps it is because I am Ctrl-C’ing out of the dashboard to quit, but I’m not aware of any other way to quit. Everything seems to run much better when I do not run the dashboard (or at least I have to restart the node after running the dashboard). Any thoughts?
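As a quick way to confirm the leftover processes described above, something like this should work (a sketch — the container name `storagenode` and the `dashboard.sh` script name are taken from the earlier post, and `pgrep`/`pkill` are assumed to be available inside the container image):

```shell
# list any dashboard processes still running inside the container
docker exec storagenode pgrep -af dashboard.sh

# if leftovers show up, terminate them without restarting the node
docker exec storagenode pkill -f dashboard.sh
```

This avoids restarting the whole node just to reclaim the memory the stray dashboard processes are holding.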
Thanks for getting back to me @Alexey. The problem I am having is that the dashboard continues to run when I Ctrl-C out of it and continues to consume resources. When I want to look at the dashboard again, it starts a new instance and consumes even more resources. How do I exit it properly so it quits and stops consuming resources?