7 - It doesn’t make any sense to me.
Indeed, nodes may not be able to process a BF in time. But that happens because of slow disk I/O: each GC pass has to read the properties of, and move, millions of files.
But the number of files processed DOES NOT DEPEND on the size of the BF. Regardless of the size of the BF, the GC goes through ALL the files stored for this satellite.
The only difference is that a bigger BF has a lower false-positive rate and will therefore delete a slightly larger share of the files it checks.
But this is a relatively small difference.
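As a rough sanity check, the standard Bloom-filter false-positive formula shows why a bigger filter catches more garbage per pass. The 30-million-piece count is just the illustrative figure from the example below, and the number of hash functions is assumed to be near-optimal (k ≈ ln 2 · m/n) - actual satellite parameters may differ:

```python
import math

def bloom_fp_rate(m_bits: float, n_items: float) -> float:
    """Approximate false-positive rate of a Bloom filter holding
    n_items entries in m_bits bits, with a near-optimal number of
    hash functions k = round(ln(2) * m/n)."""
    k = max(1, round(math.log(2) * m_bits / n_items))
    return (1 - math.exp(-k * n_items / m_bits)) ** k

n = 30e6  # pieces stored for the satellite (illustrative, not measured)
for size_mb in (10, 30):
    m = size_mb * 8e6  # filter size in bits
    print(f"{size_mb} MB filter: ~{bloom_fp_rate(m, n):.1%} false positives")
```

Under these assumptions a 10 MB filter lets roughly a quarter of the garbage survive each pass, while a 30 MB one lets only a few percent through - which is exactly why the larger filter deletes somewhat more files per pass, while scanning the same number.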
Let’s say a 10 MB BF checks 30 million files and deletes (moves to /trash/) 3 million of them - a total of 33 million file operations.
A 30 MB BF scans the same 30 million files and deletes 5 million of them thanks to more complete garbage collection - a total of 35 million file operations.
The difference in disk I/O is less than 10%. And those additional disk operations will have to be performed sooner or later anyway - with a small BF, it just happens later, after a few more filter passes.
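The arithmetic above can be checked in a few lines (the numbers are the illustrative ones from the example, not measurements):

```python
# Back-of-envelope comparison of total file operations per GC pass.
scanned = 30_000_000                 # files scanned either way
ops_small = scanned + 3_000_000      # 10 MB filter: scan + 3M moves to trash
ops_large = scanned + 5_000_000      # 30 MB filter: scan + 5M moves to trash

extra = (ops_large - ops_small) / ops_small
print(f"extra disk I/O with the larger filter: {extra:.1%}")
```

So the larger filter costs only about 6% more disk operations per pass - well under 10% - while collecting the garbage sooner.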
The big difference with a large BF lies in the significantly greater use of RAM and CPU. But this is not a problem for the vast majority of nodes. I do not see any significant memory or CPU load from GC runs, even on my oldest node (with an AMD Phenom II processor produced about 15 years ago). With “lazy” mode enabled this is very easy to check/track, since GC runs in a separate process.

The bottleneck is always disk I/O, despite the fact that this node uses a new server-class HDD dedicated to Storj and an SSD for the DB + orders + logs. It’s just that hard drives are slow: their performance has hardly changed in several decades, and their specific performance (performance divided by capacity) is constantly decreasing.
So it looks like the memory and processing-power bottleneck for larger BFs exists only on your side - when generating the filters - not on the nodes.
In fact, you admit this implicitly in paragraphs 5 and 6, describing the (very real!) difficulties of scaling up to larger BFs on the satellite side. And then in paragraph 7 you try to deny it, pushing the responsibility for the lack of larger BFs onto the nodes.