I was looking for the filewalker entries in my logs and was a bit surprised that I was unable to find any. I did a grep for the terms "file" and "walk" and nothing relevant came up…
Am I just getting the entry name wrong, or can it really be that the filewalker start and finish don't appear in the logs, even while running on debug?
I know this is most likely an oversight, because even if the filewalker log entry isn't relevant for Storjlabs development purposes, it should still be there for whatever use people might have for it…
The filewalker is a substantial task that can adversely affect some systems, so it should be logged like anything else relevant. The whole purpose of a log is to be a map of events in time: one can look at other logs, correlate from, say, HDD behavior or failures, and work back towards finding the cause of an overload.
Sure, it might not be very useful, and it may never be used… but it should still be there, because we cannot predict what kinds of issues people will be troubleshooting. That is why anything that can be logged without issue should be, and since this would amount to something like 0.00001% or less of the log volume, it makes no sense not to log it.
But hey, maybe I'm wrong and it just has a different name?
Good idea, I think.
@SGC, what exactly did you have in mind when you said
So I assume the idea would be to only log whenever the filewalker starts, and whenever it's done?
Yes, I was just saying that logs are good, but it's not always possible to log everything…
In this case, of course, the overhead of those log entries would be practically nothing…
Basically, I want to log everything that can be logged without causing performance or storage issues, even by default… so I don't understand how they could leave out the filewalker…
I think this could be useful, and maybe we could even add an "intensity" config option allowing SNOs to adjust how hard it hits our I/O capacity.
Edit: Just noticed this thread is from a year ago… Sorry!
There is the option of changing the max concurrent requests in the config.yaml.
This only limits the max concurrent writes… but it's better than nothing.
I have my max concurrent at 150 to help reduce the impact of the filewalker and other system issues,
because the network will never cease; it just keeps spamming an unreasonable number of requests, into the thousands, basically killing any hope of the system recovering and eating all available RAM.
Though at times 150 isn't even enough for normal operation, so I am going to raise my max concurrent to 200; at 150 I see about 1-2% of uploads declined.
And I really do want all the uploads, just not when accepting them is going to overload my system.
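For anyone looking for the setting mentioned above, it lives in the storagenode's `config.yaml`. The exact option name below (`storage2.max-concurrent-requests`) is from memory, so verify it against the comments in your own config file before relying on it:

```yaml
# Maximum number of simultaneous upload requests the node will accept.
# 0 means unlimited (the default); requests beyond the limit are declined.
storage2.max-concurrent-requests: 150
```

A restart of the node is needed for config.yaml changes to take effect.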
Sadly there aren't any better solutions currently. I do agree that more elaborate control of things like I/O could be very useful; of course, the real problem is how to implement it in a good and easy way.
If you have any good ideas, there is a section of the forum where new features can be suggested and voted on…