We are all on 1.108 now. Our nodes have a really important improvement that is disabled by default and ready to launch… the badger cache!
Can we talk about it? Are you testing it? Is it safe to enable on all nodes? Is it better to wait?
My body is ready…
Send. It.
Yes
Yes
No. It is flagged as experimental. If something goes wrong you can nuke the cache directory and rebuild it. Even so, it is still a risk that can cause trouble and require manual fixes.
That is your decision. Let’s say this new feature slows down your node instead of giving a performance gain: would you still want to try it out? And how are you going to measure the performance gain? If so, then go for it and share your findings. If not, it’s better to wait for others to try it and share their results.
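If it helps, one rough way to measure is to compare how long the used-space filewalker takes with and without the cache by pulling its log lines out and looking at the timestamps. A minimal sketch, assuming a log file at the path below and that your version prints start/finish lines for the filewalker (both the path and the grep patterns are assumptions to adapt):

grep -i "filewalker" /path/to/storagenode.log | grep -iE "started|finished|completed"

Run it once before enabling the cache and once after a restart with it on, then compare the elapsed time between matching start and finish entries.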
How can I turn it on for testing?
If you are asking, don’t do it. If something goes wrong, you might need to read the source code to try to revert the damage.
pieces.file-stat-cache: badger
in the config
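If you run with docker, the same setting can typically also be passed as a command-line flag appended after the image name; flags generally mirror the config keys, but verify the exact spelling against storagenode --help before relying on it:

docker run -d --name storagenode ... storjlabs/storagenode:latest --pieces.file-stat-cache=badger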
I enabled it on one node. I will be happy to share my results and compare them with yours.
So far I have been running the badger cache on one of my nodes without any issues, but as it’s still an early feature I don’t believe it’s a good idea to deploy it to all nodes. If you have multiple nodes, you could do the same and experiment with it.
At some point I will try to test the performance benefits of the cache, but so far I have only been running it to check for stability.
Side note: It would be helpful to give the badger cache a configurable folder, similar to the databases, so that the cache can be stored on an SSD. I understand that this feature is still experimental, so this might be on the roadmap for later on.
You also need to add this one:
pieces.enable-lazy-filewalker: false
because of this:
2024-07-28T13:50:31+03:00 ERROR failure during run {“error”: “filestat cache is incompatible with lazy file walker. Please use --pieces.enable-lazy-filewalker=false”, “errorVerbose”: “filestat cache is incompatible with lazy file walker. Please use --pieces.enable-lazy-filewalker=false\n\tmain.cmdRun:65\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:393\n\tstorj.io/common/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}
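So, putting the two together, the relevant excerpt of config.yaml ends up looking like this (only these two keys come from this thread; everything else stays as it was):

# experimental badger file-stat cache
pieces.file-stat-cache: badger
# badger needs exclusive access, so the lazy filewalker must be off
pieces.enable-lazy-filewalker: false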
Does anyone know why the lazy filewalker isn’t compatible with this badger/filestat cache? @Devteam?
Because it needs exclusive access to the cache. The lazy filewalker spawns a new process, and there is no guarantee that only one is active: a TTL cleanup could spawn while garbage collection is running, and that’s game over for the badger cache.
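To make the “exclusive access” part concrete: badger takes a lock on its directory when it is opened, so a second opener (which is effectively what a separately spawned lazy-filewalker subprocess would be) gets rejected. A minimal sketch of that badger behaviour, not the node’s actual code; the directory path is a placeholder and the exact error text can differ by badger version:

package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	dir := "/tmp/badger-demo"

	// First open succeeds and takes the directory lock (a LOCK file inside dir).
	db, err := badger.Open(badger.DefaultOptions(dir))
	if err != nil {
		log.Fatalf("first open failed: %v", err)
	}
	defer db.Close()

	// A second open of the same directory fails while the lock is held,
	// which is what would happen to a second process touching the cache.
	_, err = badger.Open(badger.DefaultOptions(dir))
	fmt.Println("second open:", err)
}

That is why the node refuses to start with both the cache and the lazy filewalker enabled, rather than risking a corrupted cache.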
Why? I mean, if it’s a database, then it shouldn’t be a problem to access it from more than one process at the same time, should it?
Besides, wouldn’t it be wiser to run these lazy filewalkers sequentially anyway?
Tried it on my node, with both settings in.
Wasn’t able to start the node up successfully.
Speaking of the badger cache, I understand that it currently resides on the node’s HDD and cannot be moved to an SSD as we currently can with the SQLite databases.
Is this by design, due to some limitation, or will it be possible to move it to an SSD in future node software versions?
It can be moved wherever you want if you are running your node with docker. I am doing that: I just mounted a path on my SSD onto the badger directory.
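For reference, the relevant part of such a docker run looks roughly like the sketch below. The in-container destination for the cache is my assumption (check where your node actually creates the badger directory), the host paths are placeholders, and the … stands for the usual identity/port/env arguments:

docker run -d --name storagenode \
  --mount type=bind,source=/mnt/hdd/storagenode,destination=/app/config \
  --mount type=bind,source=/mnt/ssd/badger-cache,destination=/app/config/storage/filestatcache \
  ... \
  storjlabs/storagenode:latest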
Ah, that’s good to know. Clearly I misunderstood.
Thank you
What error did you get?
Like Vadim and others, I’m running it on Windows directly, so that’s not available to us… but it would be nice to be able to shift it to an SSD. Heck, even placing it in the same directory as the DBs would be nice.
I suppose you have the option of running it over Docker on Windows, don’t you?
Running a node on Windows in Docker is like shooting a sprinter in one leg.
When I tried with docker it just didn’t play nice and had issues. Maybe it’s gotten better since, but I don’t really want to be the guinea pig or open that can of worms again if I can avoid it.
If it’s a massive code undertaking then sure, it’s not necessary, but it would probably help improve the Windows nodes if/when the badger cache becomes default/confirmed stable.