--log.custom-level

Hello, I saw that from version 1.100 you can set --log.custom-level. Now to my question: I want to have everything on WARN but still keep the filewalker in the log. How can I do this in the config.yaml or in the Linux docker command? Thank you for your help.

  --log.custom-level string          custom level overrides for specific loggers in the format NAME1=ERROR,NAME2=WARN,... Only level increment is supported, and only for selected loggers!

You can see all names in the change:

Since you want to see only the filewalker, you may use its logger name as NAME1 in this option.
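For example, a minimal sketch of both forms (the logger names piecestore and collector here are just placeholders; the actual names used on this thread are listed further down, and I'm assuming the config.yaml keys mirror the flag names). Since only a level increment is supported, you would keep the base level at info and raise the noisy loggers, rather than trying to lower the filewalker below the base:

# flags after the container name in docker run:
--log.level=info \
--log.custom-level=piecestore=WARN,collector=WARN \

# or the equivalent in config.yaml (assumed key names):
log.level: info
log.custom-level: piecestore=WARN,collector=WARN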

Thank you, I will see.


Now the minimum allowed version is 99.3. Is it safe to set log.custom-level?

I have version 1.100.3; this config option starts with a minimum of 1.100.1.

I remember it was introduced in 99.X; I will check the versions on GitHub.
The problem is that when you add the parameter to docker run, you need to remove the container and start a new one, and it will revert to the minimum allowed version.
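As a sketch, assuming the container is named storagenode and your usual run options are kept, that sequence looks roughly like this:

docker stop -t 300 storagenode
docker rm storagenode
docker pull storjlabs/storagenode:latest
docker run ... --name storagenode storjlabs/storagenode:latest \
	--log.custom-level=NAME1=ERROR,NAME2=WARN \
...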

I put piecestore on FATAL, because it logs the lost races too on ERROR or WARN levels.

time  ERROR  piecestore  upload failed ....

What does collector do? Is it the Garbage Collector that we are debugging, or is it another walker?
If I put it on ERROR, does it log the GC start and finish?

time  INFO  collector  deleted expired piece  {"Process": "storagenode", "Satellite ID": "xxx", "Piece ID": "xxx"}

Does this look OK, after the container name?

--log.level=info \
--log.custom-level=piecestore=FATAL,collector=ERROR \

I think yes, it looks good.


I found the answer in the Filewalker status post; these are the walkers and their names in the logs:

# Walkers:
# used space filewalker
docker logs storagenode -f 2>&1 | grep "used-space-filewalker"

# Garbage Collector filewalker
docker logs storagenode -f 2>&1 | grep "gc-filewalker"

# retain
docker logs storagenode -f 2>&1 | grep "retain"

# collector
docker logs storagenode -f 2>&1 | grep "collector"

# trash
docker logs storagenode -f 2>&1 | grep "pieces:trash"


# Other loggers:
# piecestore:cache - I believe it's related to used-space-filewalker or it's the USF
docker logs storagenode -f 2>&1 | grep "piecestore:cache"

# reputation:service - audits
docker logs storagenode -f 2>&1 | grep "reputation:service"

# piecestore - pieces uploads and downloads on INFO, lost races on ERROR
docker logs storagenode -f 2>&1 | grep "piecestore"

I think yes, in the worst case it will be ignored.

It deletes the expired pieces (when the customer provided an expiration date during upload).

These log entries appear once per 15 min, even on log.level FATAL.
Is there a way to disable them for Linux/docker?
Maybe there should be a logger name for these entries, to be able to set log.custom-level for them.
In one year you get about 175,000 useless entries.

2024-04-16T00:01:22Z    INFO    Downloading versions.   {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-04-16T00:01:23Z    INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.101.3"}
2024-04-16T00:01:23Z    INFO    Version is up to date   {"Process": "storagenode-updater", "Service": "storagenode"}
2024-04-16T00:01:23Z    INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.101.3"}
2024-04-16T00:01:23Z    INFO    Version is up to date   {"Process": "storagenode-updater", "Service": "storagenode-updater"}

If I set the below, does that mean the other loggers like ‘pieces:trash’ would be on normal logging?

What would you need to do to make it log only errors for all loggers?

Yes, the pieces:trash still shows entries.

These are log entries from the updater. I do not have a solution for this, because they share the same config…

You may set the log level to error or fatal, but I wouldn’t suggest fatal in any case.
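For example, a sketch of the error variant, added after the container name like the other flags in this thread (the config.yaml key is assumed to mirror the flag name):

--log.level=error \

# or in config.yaml:
log.level: error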

In info mode with no custom log level and the default config, the log grows by about 1 MB every 5 minutes (roughly 288 MB/day, i.e. about 105 GB/year).
In info mode with a custom log level and the modified config, the log grows by about 42 KB per 24 h, approx. 20 MB/year with weekly updates.

This is what I use, for docker/linux:

docker run ... \
	--log-driver json-file \
	--log-opt max-size=10m \
	--log-opt max-file=3 \
	--name storagenode storjlabs/storagenode:latest \
	--log.level=info \
	--log.custom-level=piecestore=FATAL,collector=WARN \
...

config.yaml:

version.check-interval: 23h50m0s

With piecestore on FATAL and the lazy filewalker off, I don’t get the filewalker entries though, meaning the start and finish messages.

New version:

docker run ... \
	--log-driver json-file \
	--log-opt max-size=10m \
	--log-opt max-file=3 \
	--name storagenode storjlabs/storagenode:latest \
	--log.level=info \
	--log.custom-level=piecestore=FATAL,collector=FATAL,blobscache=FATAL \
...

log-driver json-file \

What does this do?

I run Synology nodes and the default log driver is db, which I guess is a database.
I had to switch to json-file to be able to do that simple log rotation with 3 files of 10 MB. Now I can only access the logs through the CLI, but it’s OK; it’s easy to copy lines for the forum, and I know the log won’t fill the drive.