That will probably make things slower… but it’s easy to check. They suggested having more directories (with fewer files per directory), but they didn’t actually test spreading the same number of files over more directories.
A node gets larger simply by storing more .sj1 files in the same config/storage/blobs/[satellite]/[prefix] directories. There are no further layers of subdirectories.
Storj designed the algorithm that spreads files across those prefix directories so that no single directory holds more files than modern filesystems handle comfortably. I think Pentium100 recently posted about a 50TB node with no issues from the sheer number of files. But you’re right that as the file count grows, all housekeeping tasks take longer. At some point (if you don’t have the metadata in RAM or on SSD) you’ll spend all your time running filewalkers. Another reason to keep nodes around 20TB these days?
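Roughly how that prefix scheme works, as I understand it (a sketch, not the actual storagenode source; the base32 alphabet and the blobPath helper here are my assumptions): the piece ID gets encoded and the first two characters pick the prefix directory, with the rest becoming the .sj1 filename.

```go
package main

import (
	"crypto/rand"
	"encoding/base32"
	"fmt"
	"path/filepath"
)

// Lowercase base32 alphabet, unpadded (my assumption of what the filestore uses).
var pathEncoding = base32.NewEncoding("abcdefghijklmnopqrstuvwxyz234567").WithPadding(base32.NoPadding)

// blobPath is a hypothetical helper: encode the piece ID, use the first two
// characters as the prefix directory (32*32 = 1024 possible prefixes), and
// the remainder plus the .sj1 extension as the filename.
func blobPath(blobsDir, satellite string, pieceID []byte) string {
	encoded := pathEncoding.EncodeToString(pieceID)
	return filepath.Join(blobsDir, satellite, encoded[:2], encoded[2:]+".sj1")
}

func main() {
	pieceID := make([]byte, 32) // piece IDs are 32 bytes
	if _, err := rand.Read(pieceID); err != nil {
		panic(err)
	}
	fmt.Println(blobPath("config/storage/blobs", "[satellite]", pieceID))
}
```

So growth just means each of those (roughly 1024) prefix directories gets fuller; it’s the total file count, not directory depth, that eventually makes the filewalkers hurt.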