Time to redo the logs a little?

Maybe we could optimize the log levels, so that I can filter for only warnings or errors instead of constantly logging general info (GBs per node)?

This option is called log.level:

  • info - default
  • debug - info + some debug information
  • warn - only warning messages and errors
  • error - only errors and fatal errors
  • none - nothing logged
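
For example, to keep only warnings and errors you could set this in the config (a sketch; config.yaml is the usual place, but the same option should also work as a --log.level command-line flag depending on how you run the node):

  # config.yaml
  log.level: warn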

You also have presets:

  • log.development: true - turns on all information useful for development, including stack traces.
  • log.stack: true - turns on logging of stack traces.
  • log.caller: true - turns on logging of callers (functions and line numbers) as well.
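
For example, a development-style setup might look like this in config.yaml (a sketch; pick the preset or the individual options, not necessarily both):

  # config.yaml
  log.development: true

  # or enable the pieces individually:
  log.stack: true
  log.caller: true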

Thanks! For some reason I remember playing around with the log levels before, but it didn't work well. Guess I'll have to give it another shot.

I wouldn't turn off too much log information; I would rather delete the logs after, say, a week or a month or a year… it's after all only something like 0.01% of node capacity, and it does help you find out what went wrong… but that's just my personal preference…

I'd rather have more detail from a relevant time frame than mostly useless information over 6 months…
Of course, filtering out just the info entries would save something like 99%, and one could most likely log decades' worth of information without using 1% of total node capacity on it…

Also, you can compress logs quite a lot… I'm getting 2.8x compression via ZFS alone…
I think logrotate plus gzip (or whatever it's called) will give you something like 10x… I've got two-plus months' worth of logs that take 4.9 GB uncompressed and 1.78 GB compressed, and in theory, if I used an external compression pass on the whole files instead of letting ZFS compress each record, it would be more like 490 MB for a 14 TB node over those two-plus months…
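
As a sketch of that approach: a logrotate rule that compresses weekly and keeps a year of history, plus a check of what ZFS is achieving. The log path and dataset name here are assumptions; adjust for your own setup:

  # /etc/logrotate.d/storagenode (example paths)
  /var/log/storagenode/node.log {
      weekly
      rotate 52
      compress
      delaycompress
      missingok
      notifempty
      copytruncate
  }

  # check the compression ratio ZFS reports for the log dataset
  zfs get compression,compressratio tank/logs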

So at the current activity level that's roughly 30 GB of logs per year uncompressed, and that's logging everything… out of 14,000 GB that makes it about 1/500th of node capacity (closer to 3 GB, or 1/5000th, with 10x compression)… a small cost to ensure one can easily track down errors after the fact…

I would say compression and deleting old logs is the way to go, rather than skimping on the detail of the logs, because you can never get more detail after the fact… you can always delete excessive logs if you don't need them… the only advantage I can see in running minimal logging would be saving resources…

Not sure how relevant that would be either… but in some programs it could be a significant factor… also dunno whether having more nodes gives more logs; I kinda doubt it, though…

I do like that my logs are just in a compressed ZFS dataset; it makes them nice to work with, since I often do post-processing on them… but I'm sure with the right tools working on compressed files isn't a problem either… I want to say zip files, but that's so Windows :smiley: - insert Linux compression scheme here -
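
For example, with gzipped rotated logs you can still search everything in place; zgrep handles both compressed and plain files (paths are just an example):

  # grep across the current log and its rotated, gzipped siblings
  zgrep -i "error" /var/log/storagenode/node.log*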

You need to restart the storagenode after the change, unless you have also published a debug port, in which case you can change the log level on the fly:
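
Something along these lines, assuming the debug port was published on 5999 and the log-level handler is mounted at /logging (both are assumptions; check your own setup and version):

  # read the current level
  curl -s http://localhost:5999/logging

  # change it on the fly (no restart needed)
  curl -s -X PUT -d '{"level":"warn"}' http://localhost:5999/logging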
