Hi! I have a problem with my node. It is generating an absurd amount of logs and has started to crash every few hours; the log file is about 20-30GB by the time it crashes. I cannot open the logs since the file is just too big. How can I find out what is happening?
Operating system: Windows
Node version 1.109.2
Wow! What is the log level in your config.yaml file?
The log level is info.
JWvdV
August 11, 2024, 7:50pm
What’s the content of the last lines of the log?
How did you install it on Windows? Is it running in WSL or Docker by any chance?
I cannot open the file since it's too big when it crashes. I installed it with the Windows GUI.
JWvdV
August 11, 2024, 7:55pm
In PowerShell:
Get-Content ./log.log -Tail 10
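If you want to follow the log live while it is still growing, -Wait keeps the tail open. A minimal sketch (the path here just assumes the default Windows GUI install location; adjust it to wherever your node actually writes its log):
Get-Content "C:\Program Files\Storj\Storage Node\storagenode.log" -Tail 20 -Wait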
Thanks, will use it when it crashes.
Use it now. If the log is growing so quickly, it may be throwing a ton of error messages already, and it might be useful to know what those are…
Solu
August 11, 2024, 8:28pm
Just in case you need a different log level, there are also other, less verbose levels available besides info.
But you should first check what is causing this on your node, like @JWvdV described.
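For example, assuming the standard log.level key in config.yaml (error being one of the quieter levels), you could lower the volume like this and then restart the node so it picks the change up:
log.level: error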
Stez
August 11, 2024, 8:31pm
I noticed large log files in Windows as well, although it doesn’t have to signal an issue. It might, but a large file can be normal. With log-level info (default) every upload and download gets logged. Especially with the test-data lately, log files can grow rapidly. In my experience by 2GB+ per day, simply due to the traffic.
Not ideal, but simply stopping the storagenode service (through the Services app in Windows), deleting the log file and restarting the service would keep it clean…-ish. I don't recommend it, as every restart triggers a filewalker run with added I/O.
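If you do go that route anyway, a minimal PowerShell sketch would be (this assumes the service is named storagenode, as the Windows GUI installer sets it up, and assumes the default log path; adjust both to your setup):
Stop-Service storagenode
Remove-Item "C:\Program Files\Storj\Storage Node\storagenode.log"
Start-Service storagenode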
Alexey
August 12, 2024, 3:27am
You may also change the log level on the fly without restart, using the debug port:
We have integrated a package called monkit in all storage nodes. It is very useful for debugging. By default it will open a random port.
In order to connect to the debug port you will need to:
Add debug.addr: ":5999" to the config file. Now the port is fixed and will not change on every restart.
Add -p 127.0.0.1:5999:5999 to the docker run command.
Ready for action!
You can request the following information:
curl localhost:5999/mon/funcs
Find out how long a function like GetExpired needs…
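The /mon/funcs output is plain text, so you can filter it for a specific function. For example (GetExpired is just the example name from above; on Windows, Select-String or findstr can stand in for grep):
curl -s localhost:5999/mon/funcs | grep GetExpired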
You may also configure logrotate for Windows; the config file for logrotate can look like this:
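(A minimal sketch only; the path, size and retention count are assumptions to adapt to your install. copytruncate is used because the service keeps the log file open, so the file is emptied in place instead of being deleted.)
"C:\Program Files\Storj\Storage Node\storagenode.log" {
    size 500M
    rotate 7
    compress
    copytruncate
    missingok
}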
Are you running it on a NAS, Windows, or Linux?
In Ubuntu I use the “logrotate” command in a cron job to create/compress log files.
First, make sure you have set the "Log Location" and "Log level" fields in the "config.yaml".
Then look in the "/etc/logrotate.d/" folder; you'll see a list of files, which are the configs used to rotate log files (usually found in "/var/log/"). I just put a "storj" config file in logrotate.d/ and it's done automatically.
To rotate my logs I use …
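(Not the poster's actual command, just an illustration: a file dropped into /etc/logrotate.d/ is normally picked up by the distro's daily logrotate run automatically, but if you want your own schedule, a crontab entry pointing logrotate at that config could look like this.)
0 3 * * * /usr/sbin/logrotate /etc/logrotate.d/storj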
or you may use a custom script like this:
Hello,
Unfortunately, in the current Windows version of the Storage Node there is no way to rotate or limit the size of the generated log files. You cannot delete them from the GUI because they are locked by the service processes. Using Clear-Content in PowerShell to reset the files works, but there is no easy solution if you want to keep a few days / weeks of logs.
I have written a small PowerShell script that can rotate the Storj logs without the need to Stop>Start the services to release …
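(Not that script, but a minimal sketch of the idea in PowerShell, assuming the default log path and that the service opened the file with shared read access so it can still be copied while locked:)
$log = "C:\Program Files\Storj\Storage Node\storagenode.log"
$stamp = Get-Date -Format "yyyyMMdd-HHmmss"
Copy-Item $log "$log.$stamp"   # keep a dated copy of the current log
Clear-Content $log             # empty the live file; the service keeps writing to it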