For the “memtbl” variant, expect a little more than 1 GB of RAM per TB stored, perhaps around 1.3 GB/TB. The “hashtbl” variant, which is the default, has very low RAM requirements, though the more RAM you have, the more of the disk cache the kernel can keep in memory.
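To make the ballpark concrete, here is a trivial back-of-the-envelope calculation using the ~1.3 GB/TB figure above (the function name and constant are mine, and the figure is a rough estimate, not a guarantee):

```go
package main

import "fmt"

// memtblGBPerTB is the rough estimate from the discussion above:
// a little more than 1 GB of RAM per TB stored, call it ~1.3.
const memtblGBPerTB = 1.3

// estimateMemtblRAMGB is a hypothetical helper, not part of storagenode.
func estimateMemtblRAMGB(storedTB float64) float64 {
	return storedTB * memtblGBPerTB
}

func main() {
	for _, tb := range []float64{4, 8, 16} {
		fmt.Printf("%2.0f TB stored -> ~%.1f GB RAM for memtbl\n", tb, estimateMemtblRAMGB(tb))
	}
}
```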
This is what we’ve been doing, yes. (We did and do have SSDs for hashtbl metadata, but we’ve been transitioning to memtbl with this configuration).
So, here’s an interesting fact: we have had FSYNC disabled for writes on both the old piecestore and the new hashstore for a few years now! So there are two kinds of failures to consider:
In the case of an unexpected process crash where the kernel keeps running, hashstore is completely safe (as is piecestore). Hashstore is designed (in both variants, memtbl and hashtbl) with crash safety in mind: it is expected that log files may be partially written in the case of incomplete uploads or process death (though the hashstore will attempt to rewind if it can do so safely).
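The “rewind” idea can be sketched with a hypothetical length-prefix + CRC framing; this is not hashstore’s actual on-disk format, just an illustration of how a reader can discard a partially written tail after a crash:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// appendRecord frames a payload as [len uint32][crc32 uint32][payload].
// (Illustrative framing only, not hashstore's real format.)
func appendRecord(log []byte, payload []byte) []byte {
	var hdr [8]byte
	binary.LittleEndian.PutUint32(hdr[0:4], uint32(len(payload)))
	binary.LittleEndian.PutUint32(hdr[4:8], crc32.ChecksumIEEE(payload))
	return append(append(log, hdr[:]...), payload...)
}

// validPrefixLen scans from the start and returns the offset just past
// the last record that is complete and passes its checksum; a reader
// would truncate ("rewind") the log to this offset after a crash.
func validPrefixLen(log []byte) int {
	off := 0
	for {
		if len(log)-off < 8 {
			return off // truncated header: stop here
		}
		n := int(binary.LittleEndian.Uint32(log[off : off+4]))
		sum := binary.LittleEndian.Uint32(log[off+4 : off+8])
		if len(log)-off-8 < n {
			return off // truncated payload
		}
		if crc32.ChecksumIEEE(log[off+8:off+8+n]) != sum {
			return off // corrupt record
		}
		off += 8 + n
	}
}

func main() {
	var log []byte
	log = appendRecord(log, []byte("piece-a"))
	log = appendRecord(log, []byte("piece-b"))
	good := len(log)
	// Simulate a crash mid-append: a half-written third record.
	log = append(log, appendRecord(nil, []byte("piece-c"))[:5]...)
	fmt.Println(validPrefixLen(log) == good) // true: rewind discards the tail
}
```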
Since you specifically asked about memtbl: when a write hits a memtbl-enabled node, memtbl does not return success until the append-only log of memtbl events has been written to the disk cache.
In the case of unexpected power loss, both piecestore and hashstore may lose the latest writes: the writes were flushed to the disk cache, but the process did not wait for the disk cache to be flushed to the physical media (FSYNC).
This has not been a concern for us with the global network, because typically a bunch of nodes on different subnets don’t all go offline at the same time (though this has been a concern for some of our small Select node configurations). We expect a very small amount of piece loss in the network in general anyway.
However, this is still suboptimal of course, so one change we’re working on for our protocol is a form of “recently lost piece amnesty”: a node can report to the Satellite that, perhaps due to power loss, it does not have all the pieces the Satellite thinks it has, and can sync with the Satellite, a sort of reverse garbage collection. This is a problem for piecestore too, so it’s not new with hashstore by any means; hashstore seems to handle this situation equivalently to piecestore.
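The “reverse garbage collection” idea boils down to a set difference computed on the node’s side. This sketch is entirely hypothetical (the amnesty protocol is still being worked on, and none of these names come from it), but it shows the shape of the report:

```go
package main

import "fmt"

// lostPieces diffs the set of pieces the Satellite believes the node
// holds against what is actually on disk, yielding the pieces the node
// would report under a "recently lost piece amnesty". Hypothetical
// sketch only; not the real protocol.
func lostPieces(satelliteView []string, onDisk map[string]bool) []string {
	var lost []string
	for _, id := range satelliteView {
		if !onDisk[id] {
			lost = append(lost, id)
		}
	}
	return lost
}

func main() {
	satelliteView := []string{"p1", "p2", "p3", "p4"}
	onDisk := map[string]bool{"p1": true, "p3": true, "p4": true} // p2 lost to power loss
	fmt.Println(lostPieces(satelliteView, onDisk)) // [p2]
}
```

The Satellite would then queue those pieces for repair rather than treating the node as having silently failed audits for them.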