No need for a citation, because you're answering it already: file deletion involves a lot of metadata writing (unlinking the inode, updating the free-space maps, sometimes reshuffling indexes/H-trees in ext4). We have some topics around here where people complain about deletions taking up to seconds per file on ext4, and that happens during the filewalker runs, so the inode info should already be in the cache. See for example How do you solve slow file deletion on ext4? - #33 by JWvdV
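If you want to observe that effect yourself, here's a minimal sketch (paths and file count are arbitrary; run as root against a scratch directory on the ext4 disk in question):

```
# Create many small files, then time deleting them with cold caches.
mkdir /mnt/hdd/deltest
for i in $(seq 1 10000); do echo x > "/mnt/hdd/deltest/f$i"; done
sync
echo 3 > /proc/sys/vm/drop_caches   # drop page/dentry/inode caches
time rm -rf /mnt/hdd/deltest        # almost pure metadata writes
```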
Can you do the same again with the same disk, but without the cache? This doesn't actually prove that much, because on this forum we're looking at a wide variety of hardware, so what is true for your setup might not be true for any other. What I know and can argue is this: ZFS special devs and bcachefs metadata devices are filesystem-specific optimizations that put the metadata itself on an SSD. That improves all metadata operations, whether it's listing files, looking up metadata (like size), or indeed deleting files, without first warming up a cache or keeping track of which parts of a disk are used the most (the hot-spot cache you're likely talking about).
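For ZFS, setting that up comes down to one command; a sketch with placeholder device names (use stable /dev/disk/by-id paths in practice):

```
# Add a mirrored special vdev to an existing pool; new metadata
# lands on the SSDs from then on.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally also route small data blocks to the special vdev:
zfs set special_small_blocks=16K tank
```

One caveat: only metadata written after adding the vdev goes to the SSDs; existing metadata stays on the spinning disks until it gets rewritten.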
Furthermore, we’ve got some comparisons in which LVM-cache hasn’t been included to a pity supporting the recommendation of ZFS or vcachefs with meta data on SSD devices:
The last one is the easiest: it's a common misconception that you need more RAM with ZFS. Actually, because the system isn't clogged with data anymore, I have more RAM left and a more responsive system after converting to ZFS. My drives aren't overloaded anymore either (see `iostat -x` for that), so less wear, and a happier wife since it's less noisy in my hobby room now.
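`iostat -x` comes with the sysstat package; something like this is enough to watch per-drive utilization (the interval is arbitrary):

```
# Extended per-device stats every 5 seconds; watch %util and await.
iostat -x 5
```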
Furthermore, ZFS uses about the same amount of memory unless you're using deduplication: https://blogs.oracle.com/solaris/post/does-zfs-really-use-more-ram
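And if the ARC does grab more than you like, it can simply be capped; a sketch assuming Linux/OpenZFS, with an arbitrary 4 GiB limit:

```
# Cap the ARC at 4 GiB (4 * 1024^3 bytes) on the running system...
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# ...and persist the limit across reboots:
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
```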
I myself even mirror my boot drive. And if I had only one drive attached to my system, I would consider not mirroring the special devs. But since I have 50TB attached to one PC, spread over 10+ drives, I don't want to accept the risk that the failure of one single SSD holding all the metadata takes everything with it. So I mirror the metadata.
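If you started out with a single special dev, you can still turn it into a mirror afterwards; a sketch with placeholder device names:

```
# Attach a second SSD to the existing single special device,
# turning it into a two-way mirror (resilver starts automatically).
zpool attach tank /dev/nvme0n1 /dev/nvme1n1
```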
And yes, it’s more complicated to migrate a drive to another PC. But essentially you can just move over one special dev, and then it already should be able to import the drive again. An operation, that can even be fulfilled with an USB-stick.
Within one computer you've got zfs send/recv, which runs at sequential-IO speed and can also be used to migrate between pools of different sizes.
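A minimal sketch of such a migration (pool and snapshot names are placeholders):

```
# Snapshot everything recursively, then replicate the whole tree
# into the new pool; pool sizes don't have to match, as long as
# the data fits.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -Fu newpool/migrated
```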
And yeah, LVM is a different layer of complexity and is filesystem-agnostic, while with ZFS and bcachefs the caching is part of the filesystem itself. But LVM+ext4 essentially might perform about the same as an L2ARC on SSD, which works in a similar way.
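For completeness, adding an L2ARC is also a one-liner; a sketch with a placeholder device (unlike the special vdev, losing an L2ARC device is harmless, so no mirror needed):

```
# Add an SSD as L2ARC read cache to the pool.
zpool add tank cache /dev/nvme0n2
```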
But feel free to stick with what you're most comfortable with, because I think that's essentially the biggest factor. About three months ago I would probably have taken the same stance on ZFS as you do now, until I put it on trial and it turned out to work marvellously: unknown makes unloved.