SNO's ColoredLog for Windows/Linux/Mac using GUI/docker

would love to see it. Also, could you post an example for those of us that still use the docker-based logs?

Bright’s tail awk script, which i just posted my current adaptation of, will work with logs that you export from docker using something like my appending docker live export cron script.

however the issue with that is that the cron script will run once every minute at the fastest…
but there might be some workaround for that… haven’t gotten around to solving that yet tho…
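
for reference, the gist of that cron script is just a crontab entry along these lines (paths and names are placeholders, and a fixed 1m window can drop or duplicate the odd line at the boundary):

# append the last minute of container logs to a file, once per minute
* * * * * docker logs --since 1m storagenode >> /volume1/storj/export/node.log 2>&1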

i suppose you could use the tail command directly on the docker storagenode container log file…
i mean Bright’s successrate script gets access to that somehow…
that should make it live updated…
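
haven’t tried it, but docker keeps the json-file logs on the host, so something like this might work (needs root to read /var/lib/docker, and each line is json so it needs unwrapping, here with jq):

tail -F "$(docker inspect --format='{{.LogPath}}' storagenode)" | jq --unbuffered -rj .log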

anyways i don’t see why this cannot be fully compatible with docker logs… just a matter of the right adaptation.

i had an inkling about that, yes. it was very much a hack-and-slash adaptation… had like 6 different attempts before i ended up realizing that i could basically just adapt the one you wrote…

figured it would make sense to start simpler and figure out the commands for RS stuff…
\n did the trick :smiley:

i just can’t seem to place \n after line and get it to work…
wanted to add an empty line below… available space…

i’ll get rid of the others tomorrow…
hopefully find some time to finish it… and get the whole docker live thing working… don’t really want to give up my docker logs either :smiley:

@cdhowie in a couple of days i should have a script put together that will work simply with docker…
might also look into putting in shutdown procedures on repeated audit failures…

No problem, this is the exact script I’m using right now.

stdbuf -o0 tail -F /volume1/storj/v3/data/node.log /volume1/storj/usb/db/node.log /volume1/storj/drobo/db/node.log | awk '
    # tail prints a "==> /path/to/file <==" header whenever the source file
    # changes; use it to remember which node the following lines belong to
    /^==> / {
      if (substr($0, 20, 2) == "v3"   ) a="Syno";
      if (substr($0, 20, 3) == "usb"  ) a="USB";
      if (substr($0, 20, 5) == "drobo") a="Drobo";
      next
    }
    # pick an ANSI color per line type; later matches override earlier ones
    {
      c="";
      if ($0 ~ /download/                ) c="\033[92m";
      if ($0 ~ /upload/                  ) c="\033[36m";
      if ($0 ~ /GET_REPAIR/              ) c="\033[32m";
      if ($0 ~ /PUT_REPAIR/              ) c="\033[94m";
      if ($0 ~ /GET_AUDIT/               ) c="\033[96m";
      if ($0 ~ /delete/                  ) c="\033[93m";
      if ($0 ~ /retain/                  ) c="\033[33m";
      if ($0 ~ /cancel/                  ) c="\033[95m";
      if ($0 ~ /failed|error|ERROR|FATAL/) c="\033[91m";
      # shorten and re-align some messages so the columns line up
      line=$0;
      gsub("upload started",          "upload start",   line);
      gsub("download started",        "download start", line);
      gsub("upload canceled",         "upload cancel",  line);
      gsub("download canceled",       "download cancel",line);
      gsub("deleted\t",               "deleted\t\t",    line);
      gsub("About to delete piece id","\tmove to trash",line);
      gsub("Z\t",                     "Z  ",            line);
    }
    # print with the node label on both ends, then reset the color
    {if (line != "") print c a " >\t" line " < " a "\033[0m"}'

If you only want to use a single log for a single node, this would do.

tail -F /volume1/storj/v3/data/node.log | awk '
    {
      c="";
      if ($0 ~ /download/                ) c="\033[92m";
      if ($0 ~ /upload/                  ) c="\033[36m";
      if ($0 ~ /GET_REPAIR/              ) c="\033[32m";
      if ($0 ~ /PUT_REPAIR/              ) c="\033[94m";
      if ($0 ~ /GET_AUDIT/               ) c="\033[96m";
      if ($0 ~ /delete/                  ) c="\033[93m";
      if ($0 ~ /retain/                  ) c="\033[33m";
      if ($0 ~ /cancel/                  ) c="\033[95m";
      if ($0 ~ /failed|error|ERROR|FATAL/) c="\033[91m";
      line=$0;
      gsub("upload started",          "upload start",   line);
      gsub("download started",        "download start", line);
      gsub("upload canceled",         "upload cancel",  line);
      gsub("download canceled",       "download cancel",line);
      gsub("deleted\t",               "deleted\t\t",    line);
      gsub("About to delete piece id","\tmove to trash",line);
      gsub("Z\t",                     "Z  ",            line);
    }
    {print c line "\033[0m"}'

For use with docker logs, you can simply replace the part before the |.
Change this:

tail -F /volume1/storj/v3/data/node.log

to

docker logs --tail 100 -f storagenode 2>&1

I didn’t test these adaptations, so let me know if you run into any issues. I can try to help debug.
Btw, garbage collection (move to trash) only shows if you have the log level set to debug.

Definitely wouldn’t recommend that. Much better to just pipe the docker logs into it and watch the live log update.

If you end the line with \n it may just omit the normal new line you get. So you may need to double up on those to force an added line. You can also try adding a space after it so it isn’t an empty line.
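
For example, something like this in the gsub section should do it (untested, and the pattern is just a guess at the actual log text):

gsub("Available Space", "Available Space\n\n", line);

If that gives you one blank line too many, drop one of the \n.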

As usual docker would like to say… f u
i never had much luck combining stuff with docker commands… in this case it seems to just totally ignore the | awk

i tried just running docker logs --tail 100 | awk … but that didn’t seem to help either…
also tried the older version of the script, so it’s not because of an error…

not sure how to get around that, aside from maybe bypassing the docker logs command and hooking straight into the log file on the container… or something…

or simply use docker logs --follow to export the logs to a file and then run the awk script as its own process using the tail command…
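
i.e. something along these lines (paths are just placeholders):

# keep appending the live container log to a file in the background
docker logs --follow storagenode >> /volume1/storj/export/node.log 2>&1 &
# then point the colorizer at that file
tail -F /volume1/storj/export/node.log | awk '…'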

at least so long as one wants to preserve the internal docker container logs.
which many do, tho personally i’m not sure i see the point anymore… i mean the only command i was using from docker was to view my logs like a screen saver anyways…

so if this serves that purpose then i don’t mind redirecting… but to my understanding there is some docker log processing syntax that people may want to keep using… not sure about that tho…

just how i understood it; if that is the case, some might not want to redirect, and a solution that works for everybody would be preferred…

i’ll keep trying, see if i have any luck…

Have you tried installing powershell on your Linux and using the live log option? The script does work if you have the log redirected to a file or if it’s a live log. The only trouble is installing powershell.
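
(If I remember right, the quickest route on most distros is snap:

sudo snap install powershell --classic

Otherwise Microsoft publishes an apt repository for Debian/Ubuntu.)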

I knew I was forgetting something. You need to add 2>&1 before the pipe. I corrected it in my previous post now.
It should be:

docker logs --tail 100 -f storagenode 2>&1

That’s what I get for not testing…

Just gave it a go and it looks amazing!
Thank you guys for sharing this.

anyone tried just letting it rip on multiple days of docker logs…

it’s actually quite easy to get an idea about what is going on… even if it’s moving quite fast… takes a bit for it to process a day tho… at least for my system…

@BrightSilence yeah that docker command makes it work…

still kinda chunky tho… it takes like 15 lines and prints them in one go… not sure why that is… maybe something to do with awk?

the regular docker logs --tail 20 --follow storagenode
is very smooth, every log line is updated by itself…

docker logs --tail 20 --follow storagenode 2>&1
works as normal… so it’s when it’s piped through awk. i tried to run the first Green and Red script… it also ran the exact same way… chunky

I’m not seeing that effect when tailing a file through this script. Docker should be essentially the same thing. Not sure why that would be unless your system is seeing an IO bottleneck. You could try with logs redirected to a file, but I think you might see the same effect.
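
One cheap thing to try (untested): have awk flush after every print, in case something in the pipeline is block buffering its output. Change the last block to:

{print c line "\033[0m"; fflush()}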

i tried running the entire docker log for the last 2 days through the awk script… it doesn’t just run through it all in an instant… but does many hours in like a minute…

it was to try and see if there was some sort of bottleneck… from what i can see it looks like something somewhere buffers the log lines and then does them all in sets of 20… i’ll have to try and investigate a bit…

i’m on debian buster / proxmox

Does the same for me, it adds lines in chunks of 15-20.
I’m on Ubuntu 18.04.3 LTS.
I also run tail without the color mod directly on the node.log file and it runs smoothly without any problems.

i give up on that solution… so powershell or redirecting docker logs to a file instead… ofc then i might need to set up log rotation also…

pity tho, it works… everything works… it just doesn’t look nice when it’s being added in chunks…
i did notice that it seems to be an exact number of lines each time…

i’d rather just tear apart the entire setup i’ve got and make a solution i know works than hunt for a fix for something that might not even be fixable… and to me it’s now irrelevant to avoid log redirection for docker containers.

@BrightSilence
tried to redirect the log files from docker and then run the tail command on the log file…
still the same result… is it possible that we don’t have the same version of awk?

from my digging around in awk commands and syntax i found that there exist a ton of different awk versions: gawk, nawk… or something like that

@TheMightyGreek
so yeah, don’t bother doing the log redirect, since it doesn’t help one bit…

docker logs storagenode 2>&1 > storagenode.log is not a log redirect. It should be done the other way around: https://documentation.storj.io/resources/faq/redirect-logs
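
In short (from memory, the linked page has the details): set this in the config.yaml inside your mounted config directory, using the path as seen from inside the container, then restart the node:

log.output: "/app/config/node.log"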

exactly, which is why i wrote it… i stopped using that and tried to do the correct docker redirect of the entire log file… and it still has the same issue… it’s most likely something related to either debian or awk

Since you call it a docker redirect, I’m still not sure you did it right. The correct way to do it is with the instructions Alexey linked. Which would have the node itself write logs directly to the file. There is no docker involvement at all.

yeah i did that… and it didn’t change anything… just didn’t call it the right thing i guess… it was a log in my docker, and i redirected it using that config.yaml, which is annoying because the damn folders need to be accessible inside the container… so now my logs are on the storagenode pool again…

going to change it back tho… because it didn’t fix the problem…

do you think it could be because of either debian or maybe a different awk version…

what os are you running, since you don’t get those 15-20 line chunks per log update when using the awk script…

or do you maybe run the awk script from a file… thus far i’ve only tried running it from just pasting it all into the terminal.

putting the awk script in a file, and maybe running it directly on my server’s monitor, will be the next things i try…
really annoying that it updates in chunks tho… kinda ruins the whole nice flow of the log running on screen

I don’t. I think it’s more likely performance related; I notice this sometimes happens when my array is really busy, but it’s very temporary. But it can’t hurt to compare. I’m using what comes with Synology DSM.

$ awk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.3, GNU MP 6.0.0)

I am running it from a shell script, but that really doesn’t matter.
I’m also using tmux to keep my session alive remotely when I disconnect. That could have some impact too.

can’t be performance, i tried to parse like 100 MB of logs through the script and it does that with little issue… took a couple of minutes and that’s like 48-72 hours of logs

otherwise my system and pool are at their usual 1-4% utilization, so much for it being performance related :smiley:
has to be some program, version, buffer…

interesting tho… i cannot even do awk --version
it just complains that it isn’t an option… O.o
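
…ok, some digging suggests debian’s default awk is actually mawk, not gawk, which would explain both things: mawk uses -W version instead of --version, and it buffers its output hard when writing to a pipe. if that’s the case, something like one of these might fix the chunkiness:

# check what awk actually is (probably /usr/bin/mawk on debian)
readlink -f "$(command -v awk)"
# mawk -W interactive forces unbuffered writes and line buffered reads
docker logs --tail 20 -f storagenode 2>&1 | mawk -W interactive '…'
# or install GNU awk and call gawk explicitly
sudo apt install gawk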

i’m using putty… is tmux like screen?
a virtual terminal window manager thing… going to look it up

something matters… else it wouldn’t be different…
well thanks for the info… going to have to dig deep into this… maybe try the powershell thing
maybe get more people to try the script, see who has the issue…