About speed: my system is not overloaded, at about 20% HDD busy and less than 10% CPU.
I tried commenting out the renice call, but the speed is the same (about 25 min per node).
It seems to be stuck for 15 min between “storj versions: current larger” and “docker log 720m selected : #235101”.
This is the result of debug mode:
./storj-system-health.sh -vq
That script is impressive. I hope it’s not too slow or load-intensive to run, but I’m keen to get into it. Looks like I could use it on my Debian servers. Cheers.
It runs at low priority when started automatically, and on my RPi 4B with 2 full nodes at 10 TB it runs once per hour for less than a minute. Sure, it requires resources, but I have not noticed any effect on the nodes so far.
The other way round: I am so thankful to have it, I get alerted very fast about any issue and can react before it’s too late (esp. with regard to suspension and disqualification).
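For context on the “low priority” point: the script lowers its own scheduling priority (the renice output quoted further down shows exactly that). A minimal sketch of the same effect, using plain `nice` rather than the script itself:

```shell
# Start a command at the lowest CPU priority (niceness 19), the same
# niceness the script gives itself when started automatically.
# `nice` with no arguments prints the current niceness of the shell.
nice -n 19 sh -c 'echo "niceness: $(nice)"'
```

A process at niceness 19 only gets CPU time the nodes are not using, which is why the script barely affects them.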
How long does it actually take on average? I just ran the forced Discord push via the debug command, and all that has been output so far is the following:
372605 (process ID) old priority 0, new priority 19
When I changed the settings files, I set my node folder to /mnt/drive1 (the node’s data folder is inside drive1). I also left the log path as /, since I don’t know where the log file would even be.
No rush, but I am wondering what the individual letters represent when using the -o option (in this image here). Also, what do the rep up and rep down values mean?
So you should see your logs with the following command, right?
docker logs storagenode --since 60m
If so, you should have the following setting in the credo file (standard setting), right?
NODELOGPATHS=/
LOGMIN=60
LOGMAX=720
I have a different setting in my case for LOGMAX:
LOGMIN=60
LOGMAX=360
Meaning: the less log data you select, the faster the script runs. Also, if you redirect your logs to a local log file, the selection will be faster than the docker logs command.
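On the redirection: with the official storagenode docker setup, logging can be sent to a file via config.yaml. A sketch, assuming your node folder is /mnt/drive1 and the (made-up) file name node.log:

```shell
# In the node folder's config.yaml on the host (e.g. /mnt/drive1/config.yaml),
# set the log output to a file inside the mounted config directory:
#   log.output: "/app/config/node.log"
# Then restart the container so the setting takes effect:
docker restart storagenode
```

Afterwards, point NODELOGPATHS in the credo file at that log file (its path as seen on the host) so the script can read it directly instead of calling docker logs.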
In that case, consider letting your script run more often via crontab. For your information, my personal setting here is:
30 * * * * pi cd /home/pi/scripts/ && ./checks.sh
59 23 * * * pi cd /home/pi/scripts/ && ./checks.sh -Ed
Interesting. Can you share some information on how full your 14 TB disk is?
In my case:
HDD1 at 10 TB, completely full, no upload traffic anymore → the script runs for 0:38 minutes.
HDD2 at 9.9 TB, almost full, normal upload traffic → the script runs for 2:10 minutes.
Meaning, I really expect that outsourcing your log files will increase the script’s speed.
On top of that, please try to limit the selected log amount by changing LOGMAX to 360.
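A rough illustration of why a smaller LOGMAX and a local log file help: with a local file, selecting the last N minutes is just a timestamp filter, and a 360-minute window is simply half the data of a 720-minute one. The file, entries, and timestamp format below are invented for illustration (GNU date assumed):

```shell
# Build a tiny sample log with one old and one recent entry.
log=$(mktemp)
printf '%s INFO old entry\n'    "$(date -u -d '12 hours ago'   +%Y-%m-%dT%H:%M:%S)" >> "$log"
printf '%s INFO recent entry\n' "$(date -u -d '10 minutes ago' +%Y-%m-%dT%H:%M:%S)" >> "$log"

# Keep only lines from the last 360 minutes (LOGMAX=360); ISO timestamps
# compare correctly as plain strings, so awk can filter on field 1.
cutoff=$(date -u -d '360 minutes ago' +%Y-%m-%dT%H:%M:%S)
awk -v c="$cutoff" '$1 >= c' "$log"

rm -f "$log"
```

Only the “recent entry” line survives the filter; the 12-hour-old line falls outside the 360-minute window.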