Disk usage discrepancy?

Patches are welcome.

We know that on Linux the file walker is “fast” if the file system metadata fits in RAM and “slow” otherwise, with the threshold depending on the file system and its settings. I managed to tune my nodes so that the file walker takes around 8 minutes per terabyte of pieces in the “fast” scenario, which I found good enough for myself. This has been discussed on the forum many, many times, but so far nobody has taken up the challenge of writing reliable code to improve the situation. Maybe Mad_Max will take it up?
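For context, the file walker is essentially a recursive directory scan that touches only metadata (directory entries and inode stats), never file contents, which is exactly why it is fast when that metadata is cached in RAM and painfully slow when every stat has to hit the disk. Here is a minimal Go sketch of that access pattern, just to illustrate the idea; this is not Storj’s actual code, and `walkPieces` and the output format are made up for illustration:

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"time"
)

// walkPieces sums the sizes of all files under root. It reads only
// directory entries and inode metadata, never file contents, so its
// speed is dominated by whether that metadata is cached in RAM.
func walkPieces(root string) (files int64, bytes int64, err error) {
	err = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		info, err := d.Info() // one metadata lookup per file
		if err != nil {
			return err
		}
		files++
		bytes += info.Size()
		return nil
	})
	return files, bytes, err
}

func main() {
	start := time.Now()
	files, bytes, err := walkPieces(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d files, %d bytes, scanned in %s\n", files, bytes, time.Since(start))
}
```

Running something like this against a pieces directory twice in a row (cold cache vs. warm cache) makes the “fast”/“slow” split very visible, since the second run is served almost entirely from cached metadata.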