I took a peek at the logs today and one node had a broken docker log. It stopped logging a few days ago, with the last line saying something about an unrecognized character “…”. It looked like an l or a 1.
A simple machine or container restart doesn’t help; you must stop the node, rm the container, and start the node again in order to delete all logs and resume logging.
The node works flawlessly even when the log is broken, so you need to take a look at it from time to time to check whether logging still works.
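If you want an automated check instead of eyeballing it, a small sketch along these lines could work (assumptions: the default json-file log driver, a container named storagenode, and an arbitrary one-hour threshold):

```shell
# Hypothetical watchdog: warn if the container's docker log file has not
# been written to in the last 60 minutes (names/threshold are assumptions).
LOGFILE=$(docker inspect --format '{{.LogPath}}' storagenode)
if find "$LOGFILE" -mmin +60 | grep -q .; then
    echo "WARNING: docker log for storagenode looks stalled"
fi
```

You could run this from cron and have it mail or notify you, so a silently broken log doesn’t go unnoticed for days.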
It took me a second to work out whether you were talking about a docker log… or a hashstore log.
The Storj dev who decided to put hashstore data in log* files must have been drinking…
1 Like
Sounds like the docker log has become corrupted. Docker’s default json-file driver doesn’t rotate logs or set a maximum size for the log file.
I have those limits set, but even within 25 MB, who knows what could go wrong.
25 MB would be lucky to hold a full day’s worth of logs (if the storagenode is running at INFO log level).
You can:
- Get docker to send logs to syslog (on a Linux system); this avoids the issue, and Linux will handle log rotation.
- Get storj to log to a file, which (mostly) prevents this issue; you will need to set up log rotation for the file yourself. All storagenode logs then go to the file, while OS-level logs remain in docker.
- You can generally use `docker logs --tail=xxxx storagenode-container` to skip past the corrupted log entry.
It will have absolutely no effect on the node itself; it is just a docker display issue.
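For what it’s worth, rotation can also be set daemon-wide so every new container gets it, rather than per container. A sketch for /etc/docker/daemon.json (the sizes here are just examples, and the setting only applies to containers created after restarting docker):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "25m",
    "max-file": "3"
  }
}
```

Existing containers keep their old logging config until they are recreated, so per-container `--log-opt` flags like the ones below are still the quickest fix for a running node.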
2 Likes
I am on Synology. 25 MB takes a very long time to fill. This is what I use:
--log-driver json-file \
--log-opt max-size=25m \
--log-opt max-file=3 \
--name storagenode storjlabs/storagenode:latest \
--log.level=info \
--log.custom-level=piecestore=FATAL,collector=FATAL,blobscache=FATAL \