I thought the lazy filewalker just allowed other HDD IO (such as node ingress/egress) to take priority? To me, the node should always have priority over housekeeping tasks, no matter how fast the hardware is. That would make the “non-lazy” filewalker a crutch to get runtimes down… at the expense of gimping your node while it runs.
What do you mean by running a node “more efficiently”? If the normal filewalker occasionally competes with node IO (and perhaps loses you a race), wouldn’t that be less efficient?
Filewalker runtimes boil down to the raw IO capability of the drive, metadata caching in memory or on SSD, and competition for access to the drive. Don’t make your node fight other processes for access. Unless you’re trying the new BadgerDB cache (which doesn’t support lazy mode yet)… using the normal filewalkers is doing it wrong.
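For reference, this is roughly what those settings look like as CLI flags (key names here are from memory, so double-check them against your node version’s docs before copying):

```
# keep the lazy filewalker enabled (this is the default)
storagenode run --pieces.enable-lazy-filewalker=true

# only if you're experimenting with the badger cache would you turn lazy off,
# since that cache doesn't work with lazy mode yet (key names assumed, verify first):
# storagenode run --pieces.file-stat-cache=badger --pieces.enable-lazy-filewalker=false
```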
TL;DR: use a ZFS special metadata device (special vdev) to make metadata access fast, and leave the filewalkers lazy.
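If you go the special vdev route, the rough shape is something like this (pool, dataset, and device names below are placeholders; adjust to your setup):

```
# add a mirrored special (metadata) vdev to an existing pool
# ("tank" and the NVMe device paths are placeholders)
zpool add tank special mirror /dev/disk/by-id/nvme-SSD-A /dev/disk/by-id/nvme-SSD-B

# optionally also store small data blocks on the special vdev
# (helps with the huge number of tiny piece files)
zfs set special_small_blocks=64K tank/storagenode
```

Two caveats: only data written after the special vdev is added benefits (existing metadata stays on the HDDs until it’s rewritten), and losing the special vdev loses the whole pool, so mirror it.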