We already have a script any SNO can run, whenever they want, that shows whether they’re winning most upload/download races… or where there’s room to improve. And it’s based on the success of client requests: the actions that directly influence payouts.
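For anyone who can’t get the PowerShell version going, here’s a minimal sketch of the same idea in Python. It assumes the default storagenode log format, where successful transfers log lines containing “uploaded”/“downloaded” and failures log “upload failed”/“download failed”; the log path is just a placeholder, so point it at wherever your node logs:

```python
# Minimal success-rate sketch for storagenode logs (not the official script).
# Assumes default log lines containing "uploaded", "upload failed",
# "downloaded", "download failed"; adjust the patterns if your version differs.
import sys
from collections import Counter

log_path = sys.argv[1] if len(sys.argv) > 1 else "storagenode.log"  # placeholder path
counts = Counter()

with open(log_path, errors="replace") as log:
    for line in log:
        if "upload failed" in line:
            counts["upload_fail"] += 1
        elif "uploaded" in line:
            counts["upload_ok"] += 1
        elif "download failed" in line:
            counts["download_fail"] += 1
        elif "downloaded" in line:
            counts["download_ok"] += 1

for op in ("upload", "download"):
    ok, fail = counts[f"{op}_ok"], counts[f"{op}_fail"]
    total = ok + fail
    rate = 100.0 * ok / total if total else 0.0
    print(f"{op}: {ok} ok / {fail} failed ({rate:.2f}% success)")
```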
I can understand Storj needing internal rankings though: that could help their dev and sales teams.
Yeah, but no one wants to run it. I tried, got some PowerShell errors, and gave up. I remember it worked back in the day, but then it stopped, and I’m currently unable to run any test, so I’d welcome with open arms any statistic Storj can run for me.
We don’t have access to your node and are unable to run it for you. Running a storage node is easy; optimizing it requires additional knowledge, and there is no shortcut. We can provide help with setting up Grafana or the new benchmark.
You mean the lazy file walker is faster now? How long did it take before? This is just a side effect: we didn’t touch the lazy file walker itself. We made uploads cheaper so that the lazy file walker has more IOPS available and can run a little faster.
Yeah, it’s a lot faster since it doesn’t have to pause all the time waiting for IO. I don’t remember exact figures, but it was over 3 hours for sure on that node (the node runs on an array that is used for other things as well).
And this is with both garbage collection and trash emptying taking place, and garbage collection was also occurring before the upgrade. The read operations are getting a big boost. The intermittency in the reads seems related to when data is being flushed from memory, though I’m not 100% sure.
Unfortunately, I think a side effect of the bandwidth DB change is that the existing Prometheus exporter no longer produces nice bandwidth charts in Grafana.
No. I don’t like the idea of running some third-party tool that basically gets full access to my storage node. Instead I use the built-in metrics endpoint, which works without any additional log-scraping tool.
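For anyone curious, a quick way to sanity-check that endpoint without extra tooling is to fetch it directly. This sketch assumes you’ve pinned the debug endpoint to a fixed port via `debug.addr` in config.yaml (by default it binds a random port) and that it serves metrics as plain text at `/metrics`; the port here is made up:

```python
# Quick check of the storagenode debug metrics endpoint.
# Assumes debug.addr is set to 127.0.0.1:5999 in config.yaml (hypothetical port).
import urllib.request

URL = "http://127.0.0.1:5999/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    for raw in resp:
        line = raw.decode("utf-8", errors="replace").rstrip()
        # Filter to bandwidth-related series to keep the output readable;
        # the substring is a guess, so drop the filter to see everything.
        if "bandwidth" in line:
            print(line)
```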
Any hints on how to graph JSON data in Grafana? The exporter mentioned here reads the Storagenode API and exposes the data for Prometheus to scrape, but what would be the best way to do the same directly with the Storagenode API?
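Not a complete answer, but one low-friction option is a tiny exporter of your own: poll the dashboard API (typically `http://localhost:14002/api/sno`) and republish just the fields you care about with `prometheus_client`. A sketch under those assumptions; the exact JSON field names may vary between node versions, so check your node’s actual response first:

```python
# Tiny Prometheus exporter sketch for the storagenode dashboard API.
# Assumes the API is reachable at localhost:14002 and that /api/sno returns
# JSON with diskSpace/bandwidth objects; verify the field names against
# your node's actual response before relying on this.
import json
import time
import urllib.request

from prometheus_client import Gauge, start_http_server

API_URL = "http://localhost:14002/api/sno"  # default dashboard address

used_bytes = Gauge("storagenode_disk_used_bytes", "Disk space used")
trash_bytes = Gauge("storagenode_disk_trash_bytes", "Disk space in trash")
bw_used_bytes = Gauge("storagenode_bandwidth_used_bytes", "Bandwidth used this month")

def poll() -> None:
    """Fetch the dashboard API once and update the gauges."""
    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        sno = json.load(resp)
    disk = sno.get("diskSpace", {})
    used_bytes.set(disk.get("used", 0))
    trash_bytes.set(disk.get("trash", 0))
    bw_used_bytes.set(sno.get("bandwidth", {}).get("used", 0))

if __name__ == "__main__":
    start_http_server(9651)  # hypothetical exporter port for Prometheus to scrape
    while True:
        poll()
        time.sleep(60)
```

Point a normal Prometheus scrape job at the exporter’s port and the values show up as regular time series, so Grafana can graph them without any JSON plugin.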