Tuning the filewalker

These are all my test results regarding the filewalker. We are now at version 1.96.6; the tests were done with older versions.
The test machines are dedicated to Storj only: a Synology DS218+ with 1GB RAM and Synology DS220+ units with 10GB and 18GB RAM; ext4, noatime, no RAID, no cache, no SSD; all the storagenode files and the OS are on the node's drives. Log level is set to "fatal".
The 1GB system has 2x IronWolf 8TB drives; the rest have Exos 16TB (big nodes) and 22TB (new nodes holding 700GB).
Write cache is enabled. All drives are formatted as 512e, except the new 22TB ones; I learned too late about fast-formatting to 4Kn. The nodes were not full at test time, so they had both ingress and egress.
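If you want to check whether a drive presents 512e or native 4Kn sectors, a quick look on a generic Linux box (lsblk is part of util-linux, not Storj-specific):

```shell
# 512e drives report logical=512 with physical=4096;
# native 4Kn drives report 4096 for both.
lsblk -o NAME,LOG-SEC,PHY-SEC
```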

Filewalker (FW) run time increases roughly exponentially with occupied space, not linearly: if you have, say, 1h/5TB, you won't get 3h/15TB, but way more.
Anything that reduces the I/O on the drive shortens the FW run.
CPU, internet access, and connection speed don't matter.
The biggest influence on FW run time comes from RAM and any sort of cache for metadata, at least on Linux, because the system uses all available memory for buffers and cache.
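One generic Linux knob worth knowing here (not Storj-specific, and the value 10 below is just an example - test on your own system): vm.vfs_cache_pressure controls how eagerly the kernel evicts inode/dentry metadata from RAM, which is exactly what the FW re-reads.

```shell
# Show the current RAM split between apps, buffers and cache:
free -h

# Read the current metadata cache pressure (default 100; lower values
# keep metadata cached in RAM longer):
cat /proc/sys/vm/vfs_cache_pressure

# To lower it persistently (needs root), add this line to /etc/sysctl.conf:
#   vm.vfs_cache_pressure=10
```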
Next is ingress. If you cut the ingress by reducing the allocated space below the occupied space, the FW run goes much faster: writes drop, and reads speed up.
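Cutting ingress this way is just a config change; the size below is a placeholder, use a value under your node's currently occupied space. The key name is the standard storagenode option as far as I know - verify against your own config.yaml before applying.

```yaml
# config.yaml -- example value only; restart the node after editing:
storage.allocated-disk-space: 8.00 TB
```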
I didn't measure the effect of the log level, but the info level will of course slow the FW run.
Of the main factors, the smallest influence comes from moving the databases to another medium, like an SSD or a USB stick. It is not as significant as the factors listed above, but it helps, and of course you get rid of database-lock errors.
Apart from these factors, the lazy FW takes approximately 50% more time than the normal FW.
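The log level, database location, and lazy mode are all plain config.yaml entries. The paths and values here are examples, and the key names are the standard storagenode options as far as I know - double-check against your own config before applying:

```yaml
# config.yaml -- example values:
log.level: fatal                      # quiet logging during normal operation
storage2.database-dir: /mnt/ssd/db    # move the SQLite DBs off the data drive
pieces.enable-lazy-filewalker: false  # normal (non-lazy) FW runs faster
```

If you move the databases, stop the node first, copy the existing *.db files into the new directory, and then restart.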

RESULTS (in the order I ran them, over a span of one year, on different nodes):
A. RAM test, lazy off:
18GB RAM - 9.52TB - 7.5h, 0.8h/TB - 1 node
10GB RAM - 8.85TB - 29h, 3.3h/TB - 1 node
1GB RAM - 4.65TB + 4.03TB - 58h, 6.7h/TB for both nodes

B. LAZY mode test - node full (14.5TB, so no ingress):
18GB RAM - 13.1TiB, lazy ON - 57 hours, 4.35h/TB - 2 nodes running
18GB RAM - 13.2TiB, lazy OFF - 39.5 hours, 3h/TB (3% online score lost?) - 2 nodes running

C. Testing one node of 11.4TB, with a second new node running in parallel:
LAZY OFF, DBs on HDD:
Total run time - 43h, 3.77h/TB (6h with no ingress due to internet being down; read IOPS x3)

LAZY OFF, DBs on HDD, no ingress (read IOPS x3 vs. normal):
Total run time - 29h 40min, 2.6h/TB

LAZY OFF, DBs on USB 3 (Samsung Bar Plus 128GB):
Total run time - 43h, 3.77h/TB

During the FW run:
USB read peak speed 190 KB/s, IOPS 80
USB write peak speed 700 KB/s, IOPS 52
Utilization 11% max

After the FW run:
USB read peak speed 150 KB/s, IOPS 34
USB write peak speed 1024 KB/s, IOPS 74