New 1.99 version is spoiling us with goodies

Wow! So many goodies in the next update! :heart_eyes:

4 Likes

Does this all add up to a more reliable/resumable way of deleting unwanted files? And should the space-used numbers in the UI become more accurate?

Some improvements are already committed, but more will arrive in the next release:

For example:

8 Likes


The new log.custom-level option would allow you to decrease the priority of any logger.

Let’s say you see a lot of entries like this in your log:

2024-03-01T08:53:06Z	INFO	piecestore	download started	{"process": "storagenode", "Piece ID": "TNWCFKCOAYES2K257VXZRSEPVVE56UVCO3LCELL4BQQPRYCCDU6Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR", "Offset": 0, "Size": 36864, "Remote Address": "10.42.0.1:29599"}

The logger name here is piecestore, and it prints a lot of data at the INFO level, which you may not be interested in. You can set the log level for just these entries to WARN.

Just use the following command line flag (for the storagenode run command):

--log.custom-level=piecestore=WARN

All other loggers will use INFO level.

You can adjust the levels of multiple loggers:

--log.custom-level=piecestore=WARN,contact:endpoint=ERROR

NOTE: you can’t increase the verbosity (e.g. use DEBUG instead of INFO), as log entries are already filtered by the global log level setting first.
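For docker nodes, the flag can simply be appended after the image name in the docker run command. In config.yaml the same setting would presumably map to a key like this (an assumed rendering, based on how the other log.* flags map to config keys):

log.custom-level: piecestore=WARN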

6 Likes

store trash under per-day directories

Would it be possible to display that in the dashboard and/or the API? Something like trash/day0, trash/day1, trash/day2, etc.
Then one could foresee how much space is going to be freed and when.

3 Likes

inb4 all the “My node suddenly deleted [many]GB of data - what is happening??” posts.

I agree with you, @snorkel - 1.99 has many great points.

Good job devs!

4 Likes

Omg, that sounds SO nice! :slight_smile:

1 Like

Isn’t the logic reversed? Setting it to warning will still display it, since that is a higher level (at least that’s how logs work industry-wide). For example, if I want to get rid of this (completely) useless message “ERROR piecestore upload failed” and my logging is set to warn, I should set the piecestore level to info, shouldn’t I?

2 Likes

@elek Maybe it can be shown in the bandwidthDaily.delete section of the SNO API, since currently this key:value pair is never used.

[screenshot of an SNO API bandwidthDaily entry with the unused delete field]

I was thinking the same thing; I thought I was missing something.
I believe we should decrease the log level to warn or error, and decrease the tag for important messages like services, filewalker, etc. from info to error or warn.
And for the unwanted errors, we should increase the tag to info.
We’ll see some tests when the new version is installed.
So, as I understand it, the tags are changed by the run-command flag after hitting the general log level filter?

Maybe someone can post a command with the useful stuff on and the noise off.
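One hedged starting point, built only from the loggers already mentioned in this thread (which loggers count as noise is of course a matter of taste):

--log.level=INFO --log.custom-level=piecestore=WARN,contact:endpoint=ERROR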

Did you already change the telemetry topic? Is it possible now to turn it off? :slight_smile:

1 Like

Yes, set the hostname and/or the interval to zero.
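In flag form that would presumably look something like this (flag names recalled from the storagenode config, so double-check them against storagenode setup --help):

--metrics.addr="" --metrics.interval=0s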

If you set the level of a specific logger to WARN, then all the more detailed levels will be ignored, including INFO and DEBUG.

In this specific case (piecestore), we have detailed logs at the INFO level. I turned them off by using the WARN level.

1 Like

This new setting is just a filter. If you set the global log level to WARN, you can’t set individual log levels to DEBUG, as those entries won’t be logged anyway…

So it doesn’t work in the opposite direction (it’s a limitation of the zap log library that we use, but I can live with it for now…)

1 Like

The delete RPC is not used by normal uploads, but the protocol implementation is still there, so I would not remove that number. (A specific uplink implementation may still use the delete call.)

Printing out the space usage of the daily trash directories is a good idea. However, the implementation should be smart (we certainly shouldn’t calculate the space usage again and again, so it should probably be persisted to the db…)

But with daily directories you can also check it with OS-level tools (for Linux, I would recommend duc or ncdu).
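For example, assuming the usual layout (trash under the storage directory, one level per satellite and now one per day; the path below is hypothetical and depends on your setup), a one-liner like this would print the size of each day directory:

du -sh /path/to/storage/trash/*/*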

2 Likes

So what you are saying is that the new tunables don’t actually change the severity of the logs; they just trim out existing logs based on their severity, regardless of the global (log.level) setting.

I would say that the new tunables also change the log levels.

Imagine two gates where incoming log entries are filtered.

The first filter is the global log level. Let’s say it’s DEBUG, which means all logs with DEBUG/INFO/WARN/ERROR levels are allowed to cross the gate.

The second filter is per logger. If it’s INFO, it means that for that specific logger only INFO/WARN/ERROR level entries will be allowed (DEBUG is filtered out).

But you can’t do it in the opposite direction. If the first gate is INFO, the second gate won’t see any DEBUG level entries, because they are already filtered out by the first gate.

It’s not very intuitive, but this is what we’re given :wink:

If you are interested, here is the discussion about the implementation details: Proposal: Add zap.WithLeveler(leveler ...zapcore.LevelEnabler) ¡ Issue #763 ¡ uber-go/zap ¡ GitHub

TLDR: if you are not interested in per-request logs, I would recommend using log.level=INFO (the default) and --log.custom-level=piecestore=WARN (which removes the INFO level piecestore log entries).
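For the curious, here is a minimal Go sketch of such a two-gate setup on top of zap. The customLevelCore wrapper and the hard-coded logger names are purely illustrative (not the actual storagenode implementation), but it shows why the per-logger gate can only tighten the global one, never loosen it:

package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// customLevelCore is the "second gate": it wraps another core and drops
// entries whose logger name is mapped to a stricter minimum level.
type customLevelCore struct {
	zapcore.Core
	levels map[string]zapcore.Level
}

func (c customLevelCore) Check(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
	if minLevel, ok := c.levels[ent.LoggerName]; ok && ent.Level < minLevel {
		return ce // below the per-logger minimum: dropped at the second gate
	}
	return c.Core.Check(ent, ce) // the wrapped core applies the global level
}

func (c customLevelCore) With(fields []zapcore.Field) zapcore.Core {
	return customLevelCore{Core: c.Core.With(fields), levels: c.levels}
}

func main() {
	// First gate: the global level (INFO). DEBUG entries never reach the wrapper.
	base := zapcore.NewCore(
		zapcore.NewConsoleEncoder(zap.NewDevelopmentEncoderConfig()),
		zapcore.Lock(os.Stdout),
		zapcore.InfoLevel,
	)
	// Second gate: piecestore must be at least WARN.
	log := zap.New(customLevelCore{
		Core:   base,
		levels: map[string]zapcore.Level{"piecestore": zapcore.WarnLevel},
	})

	log.Named("piecestore").Info("download started")  // dropped by the second gate
	log.Named("piecestore").Warn("upload canceled")   // logged
	log.Named("contact").Info("pinging satellite")    // logged, no override here
	log.Named("piecestore").Debug("low-level detail") // dropped by the first gate already
}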

5 Likes

Ooh, I think I got it now! It’s not that line’s tag that is changing, but the log level for that line.
It’s very counterintuitive, but now it makes sense.
So if you have the log level at info, a line (piecestore, etc.) tagged as info, and you set warn for that line, it means you set the effective log level for that line to warn; if hypothetically you get a “warn piecestore” line, it will be logged, but the rest of the “piecestore” lines keep the info tag and are not logged.

1 Like

Yes, exactly.

The original log levels are hard-coded in the source code; they can’t be changed (except with patches). But we can filter out the lines which are less exciting :wink:
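To close the loop with a worked example: with log.level=INFO and --log.custom-level=piecestore=WARN, the two gates described above act like this:

piecestore DEBUG -> dropped by the first gate (global level)
piecestore INFO -> dropped by the second gate (per-logger level)
piecestore WARN -> logged
contact INFO -> logged (no override for this logger)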

2 Likes