The storage node itself doesn’t know whether it was started after an update, a reboot, a system crash, or manually. Don’t read too much into that, though, because my strength is more the code that is already written and not so much the code change that would be needed for this. It might still be a relatively short code change.
Same for the ionice level. It can be set in Go. I haven’t seen it in our code, so I am unable to provide any code examples. It might also be a relatively short code change.
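What can be done today without a code change, though, is adjusting the I/O priority of an already running node from the shell on Linux. This is only a rough sketch; it assumes the process is named storagenode and that the disk uses an I/O scheduler that honours ionice (CFQ/BFQ):

# best-effort class at the lowest priority
ionice -c 2 -n 7 -p "$(pidof storagenode)"

# or the idle class, so it only gets disk time when nothing else wants it
ionice -c 3 -p "$(pidof storagenode)"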
yeah, but what if the default were for the filewalker to be turned off, and the update of the storagenode software turned the filewalker back on temporarily, so that on the next node startup it wouldn’t run again.
then the filewalker would end up running staggered like the updates. i know it’s a bit of a patchwork method, but that’s to limit the complexity of the concept and make it easy to work with.
and like i stated before there are some issues. another one i just realized is that if the filewalker is stopped for whatever reason, then it won’t have finished… which might be why it always runs currently.
to ensure that it never ends up in a state where it’s unaware of the used capacity.
but that’s ofc just me guessing.
stuff gets complicated so quickly and often there are reasons things are done a certain way to begin with…
i’m sure if there was an easy solution it would have been done already… most likely, anyway…
it’s not always easy to see the forest for the trees
yeah the L2ARC is pretty amazing for all kinds of things. mine handles on avg 5-10% of the ARC IO,
which would otherwise have had to come from the HDDs. that isn’t an insignificant amount either… it’s comparable to 1/5 to 1/10 of my total HDD IO during sequential reads.
and what the L2ARC mostly deals with is random IO. there are also a lot of configuration options for the L2ARC, but i generally just keep it at the defaults.
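if anyone wants to check their own numbers, this is roughly how i watch mine… just a sketch, and the exact field names can differ a bit between OpenZFS versions:

# interval stats for ARC vs L2ARC reads, refreshed every 5 seconds
arcstat -f time,read,hit%,l2read,l2hit% 5

# or a one-shot overview that includes an L2ARC section
arc_summary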
i found that changing the logbias from latency (the default) to throughput also has amazing results.
to set it: zfs set logbias=throughput poolname
to display the current setting: zfs get logbias poolname
and to return to the default: zfs set logbias=latency poolname
i have seen near 10x better performance in some cases. ofc it does increase the latency of the pool, but for 10x more throughput that seems worth it… running it on latency just makes everything so much slower that it takes 10x the time and more work keeps piling up…
so i’d rather take the latency penalty and run max throughput.
haven’t really looked at the changes in the latency from doing this tho…
seems to work amazingly, but i’ve only been using it for maybe 3 months… so i might still run into issues in the future… sometimes it takes a while to find the downsides.
also keep in mind these are rough numbers and i haven’t investigated it closely…
i think when i saw the 10x results it was on a windows vm, so i can’t say if it’s 10x for storj data… however i initially started using it on storj, which was why i ended up trying it on the windows vm disk.
It has become my de facto default setting for all zfs pools, so far with no noticeable ill effects.
oh yeah and remind me again… what is GC runtime?
i think you might have told me before, but i can’t remember what it is.
Sorry for the extra work with getting the pull request merged. We are currently changing our build system and the unit tests are not running as stably as they used to.
No problem at all! Most of my issues were that I’m not really using Golang on a day-to-day basis (I’m an Elixir programmer), but in general it was a nice experience.
Is there a way of implementing this in docker compose such that it can be decided each time the command is run? Like docker-compose up -d storjnode --storage2.piece-scan-on-startup=false
and docker-compose up -d storjnode --storage2.piece-scan-on-startup=true
(I know those commands won’t work)
Or would I have to use docker run?
I have quite a few, but I was just hoping I could keep my config in one place, so if I modify it I don’t have to change it in multiple places. I’ll look into using docker run in conjunction with compose. My compose file is rather intricate. My goal is to be able to manually start the node without the filewalker (if my server needs to restart), then have a script or cron job restart it later so that the filewalker runs, with the filewalker being the only difference.
You may use docker-compose run too; however, it will override the command from your docker-compose file, and port mappings will be ignored unless you specify the --service-ports option: docker compose run | Docker Documentation
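For example, something along these lines might work. This is only a sketch: it assumes the service is named storjnode as in your example, and that the image’s entrypoint forwards extra arguments to the storagenode process (as it does with docker run):

# one-off start without the startup piece scan
docker-compose run -d --service-ports storjnode --storage2.piece-scan-on-startup=false

# later, e.g. from a cron job: stop the one-off container, then start again with the scan enabled
docker stop <name-of-the-run-container-above>
docker-compose run -d --service-ports storjnode --storage2.piece-scan-on-startup=true

Note that each docker-compose run creates a new one-off container with a generated name, so the previous one has to be stopped (and removed, unless you add --rm) before starting the next.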
I’m diving into this and I think it will work!!! If you have multiple nodes you want to start all together and then have the filewalkers run staggered, you can totally do it with this. Thank you!
It is in 1.62.3, which is currently being rolled out.
However, those messages are from the storagenode-updater; they will always be there because it uses different configuration keys (and it is totally normal to see them in the logs).
Can anyone see any obvious problems? Or maybe the IOWAIT is caused by something other than the filewalker.
Here’s a look at my IOWAIT when the nodes restart…