With the V2 network, logs were created every twenty-four hours, then closed and saved, and a new one was created automatically. I would like to be able to do exactly that with V3.
This exports a log file from Docker:
docker logs storagenode >& /zPool/storj_etc/storj_logs/2020-04-24_storagenode.log
and this gives you a live log you can follow…
docker logs storagenode --tail 20 --follow
I really should find out how to reset the logs; basically that's all one needs…
Export the last 24-hour log to a logfile with the date in its name, and maybe the node name, depending on how much of it one wants to automate.
And then one simply clears the log… and repeats after 24 hours… but I haven't found a command to clear the logs. I bet it's out there… but alas… I can look for it tomorrow. Please post it if you figure it out… there are also other methods, of course.
But this doesn't make your storagenode deviate from the default, which is why I like it: the node stays a completely separate system, and thus can simply be moved with any export or handed to anyone else later… hell, it even works with Docker on Windows, I would assume xD
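A minimal sketch of that daily export-then-clear cycle, assuming the container is named storagenode and the target directory from the command above. The truncate trick is an assumption on my part: it only applies when Docker uses its default json-file logging driver, since that is what exposes a log file on disk via docker inspect.

```shell
#!/bin/sh
# Build a dated log file name, e.g. 2020-04-24_storagenode.log
LOGDIR=/zPool/storj_etc/storj_logs   # adjust to your own pool/path
LOGFILE="$LOGDIR/$(date +%Y-%m-%d)_storagenode.log"
echo "$LOGFILE"

# Export the current container log (run this on the docker host):
# docker logs storagenode > "$LOGFILE" 2>&1

# Then clear the container's log by truncating its json-file on disk
# (only valid with the default json-file logging driver):
# truncate -s 0 "$(docker inspect --format='{{.LogPath}}' storagenode)"
```

Schedule the script daily (cron on Linux, Task Scheduler on Windows) and you get one dated file per 24-hour period.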
Thanks, but this will not automate the process, which is what I'm looking to achieve.
It's a command; you simply ask your OS, whether Windows or Linux, to run it daily…
I will assume you are running Windows; then you will just need to use Task Scheduler. With that you can run whatever you like in whatever sequence you like, and you simply put in the command with a variable output for the system time/date or whatever… the most basic of scripting or command lines, really…
So yeah, that's exactly what it will do. Otherwise you can use logrotate on Linux, but then you will need to move your log location in your docker storagenode run command to direct it outside the container…
which leads to other handicaps down the line…
Are you running on a NAS, Windows, or Linux?
In Ubuntu I use the “logrotate” command in a cron job to create/compress log files.
First you have to make sure you have filled in the "Log Location" and "Log level" fields in the "config.yaml".
Then look in the "/etc/logrotate.d/" folder; you'll see a list of files. Those are the configs used to rotate logfiles (usually found in "/var/log/"). I just put a "storj" config file in logrotate.d/ and it's done automatically.
To rotate my logs I use the following config file (edit it to your requirements!):
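Something like this, matching the directives explained below; the log path on the first line is only an example, so point it at your own log file:

```
/mnt/storj/storagenode/storagenode.log {
    rotate 14
    daily
    copytruncate
    compress
    missingok
    notifempty
}
```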
First line is the location of the log file you wish to rotate
rotate 14 = keep the past 14 log files
daily = frequency in which to create a new log (can be : hourly, daily, weekly, monthly, yearly)
copytruncate = copy the log before compressing, then empty the log file's contents (used when programs can't be started/stopped easily)
compress = compress the log file
missingok = do not throw an error if there is no log file
notifempty = do not rotate log file if it’s empty
This will rotate the log every midnight, compress it, and keep the past 14 days' worth of logs.
Read more here, or have a read through the logrotate man pages.
Sorry, a noob question, but if I use your script edited to my log file location, where are the saved log files saved to?
If you mean the config file for the logrotate service, then you need to provide the absolute path to where you store your logs.
In the case of the docker version of storagenode, if you provided a location for your logs like
/app/config/storagenode.log, then it will put the logs on the disk with the data, so the logs will be located in the data location folder.
You need to use this absolute path to that file in the logrotate configuration.
I.e. if the mapping in your
docker run command looks like
--mount type=bind,source=/mnt/storj/storagenode,destination=/app/config, then the path to the logs will be /mnt/storj/storagenode/storagenode.log.
My file looks like this, pointed to my Storj log file location. What I have a problem with is more where it puts the files it keeps for 14 days.
Maybe I do not understand the problem, but the old file will be placed in the same location with a renamed extension and then archived. It will keep as many old copies as you specify; the old ones get deleted after the specified interval/number of old logs.
and it has these files:
config.yaml node.log orders/ revocations.db storage/ trust-cache.json trust-cache1.json
Just looked at the log; it looks like it has 2 days of logs, but I thought it would make a new file per day?
log from 2023-04-23T00:53:18.311+0200
It should create one log per day (every 24 h) and keep 14 copies, if you used the configuration file above.
And also the logrotate service should be installed and running
sudo service logrotate status
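If the `service` wrapper doesn't exist (as it turns out on Unraid below), here is a rough alternative check. Note this is an assumption: on many distros logrotate is not a daemon at all but is fired periodically from cron, so the cron paths in the comments are common locations, not guarantees.

```shell
#!/bin/sh
# Is the logrotate binary there at all?
OUT=$(logrotate --version 2>/dev/null || echo "logrotate not found")
echo "$OUT"

# On cron-driven setups, look for the hook that runs it daily
# (common locations; adjust for your distro):
# ls -l /etc/cron.daily/logrotate
# crontab -l | grep logrotate
```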
It runs on my Unraid. I don't get anything from your command, but just this:
logrotate 3.20.1 - Copyright (C) 1995-2001 Red Hat, Inc.
This may be freely redistributed under the terms of the GNU General Public License
This gives me that, so it is installed, I guess. I also think the compression from the script is running, as I can see an extra load on the CPU.
It should be not only installed, but also running.
Extra load on the CPU doesn't mean anything; it could be Unraid itself. If you do not see archives with logs in the data folder, the logrotate service is likely not running.
How can I see if it is running?
sudo service logrotate status gives
sudo: service: command not found
service logrotate status
bash: service: command not found
Seems this is Unraid-specific, so you need to check their forums.
Just make sure that your config for log rotation is placed to
You may try to run it to check whether it is working or not:
You can also trigger logrotate manually or with a cronjob.
/mnt/storagenode1/logrotate is the file with the config.
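For example, with the config path from the post above. The cron line is a sketch: the location of the logrotate binary varies by distro, so adjust /usr/sbin/logrotate to wherever yours lives.

```shell
#!/bin/sh
# Path to the logrotate config file mentioned above
CONF=/mnt/storagenode1/logrotate

# Dry run: -d shows what logrotate would do without touching any files
# logrotate -d "$CONF"

# Force an immediate rotation once the dry run looks right:
# logrotate -f "$CONF"

# Or let cron trigger it daily at midnight (add via crontab -e):
CRON_LINE="0 0 * * * /usr/sbin/logrotate $CONF"
echo "$CRON_LINE"
```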
Now we get this:
error: Ignoring storj.conf because it is writable by group or others.
I just made the file with nano storj.conf and edited the script.
I think you should read up a bit on Linux basics. The error message is pretty clear on what you need to do. The following command will remove the write permissions for "group" and "others":
chmod go-w /etc/logrotate.d/storj.conf
Thanks for the help… will remember next time you ask.