You are basically describing a cache: the whole point of a cache is to keep frequently used data close and to buffer writes, reducing the load on the slower device.
Some storage solutions have cache options. ZFS, for example, uses memory (the ARC) and optionally an SSD (L2ARC) as a read cache. Writes are also buffered in memory, but in a separate structure that isn't really part of the read cache.
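As a rough sketch of what that looks like on ZFS (pool and device names here are hypothetical, adapt them to your setup):

```shell
# Add an SSD as an L2ARC read-cache device to an existing pool "tank"
zpool add tank cache /dev/nvme0n1

# Verify the cache device shows up under the pool
zpool status tank

# Note: a separate SSD can be added as a SLOG for synchronous writes,
# but that is an intent log, not a general write cache
zpool add tank log /dev/nvme1n1
```

The cache device can be removed again with `zpool remove`, so trying this is fairly low-risk compared to rebuilding the pool.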
Synology NAS devices have the option of using an SSD as a cache on top of RAID volumes; not sure if that works for individual disks.
Otherwise there are most likely options with LVM (lvmcache) for attaching a cache device to slower storage media.
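A minimal lvmcache sketch, assuming `/dev/sdb` is the HDD and `/dev/nvme0n1` the SSD (both names hypothetical):

```shell
# Put both devices into one volume group
pvcreate /dev/sdb /dev/nvme0n1
vgcreate vg0 /dev/sdb /dev/nvme0n1

# Data LV on the HDD, cache LV on the SSD
lvcreate -n slow -l 100%PVS vg0 /dev/sdb
lvcreate -n fast -L 100G vg0 /dev/nvme0n1

# Attach the SSD LV as a dm-cache device in front of the slow LV
lvconvert --type cache --cachevol fast vg0/slow
```

By default dm-cache runs in writethrough mode, which is the safer choice here; writeback caches dirty data on the SSD and losing that SSD can mean losing data.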
And if there isn't, other software exists for both Linux and Windows that can run a cache (bcache on Linux, for example).
That being said, caching is a fairly write-heavy workload; you will most likely need enterprise-grade SSDs, or at least consumer SSDs with the highest available write endurance.
There is a reason memory is most often used for caching.
Your HDD also has its own onboard cache, and of course more is better; a 20 TB drive like the one in your example usually ships with around 512 MB of cache, if not more.
The latency improvement from running a cache isn't that amazing, though. Sure, it speeds things up a lot from a local perspective, but data is uploaded and downloaded over the internet, which adds tens of milliseconds of latency no matter what. Meanwhile a random access on a 7200 RPM disk costs roughly 4.2 ms of average rotational latency (about 5.6 ms at 5400 RPM), plus a few milliseconds of seek time on top. Of course, under heavy workloads those numbers can go up radically. An SSD answers in tens of microseconds, so from a customer's perspective they won't feel a huge difference.
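A quick sanity check on the rotational-latency numbers (this is only the rotation component; real drives add seek time and command overhead on top):

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """On average the platter must spin half a turn before the sector
    arrives under the head, so average rotational latency is half the
    time of one full rotation."""
    ms_per_rotation = 60_000 / rpm  # 60 000 ms per minute / rotations per minute
    return ms_per_rotation / 2

for rpm in (7200, 5400):
    print(f"{rpm} RPM: ~{avg_rotational_latency_ms(rpm):.1f} ms rotational latency")
```

Running this prints roughly 4.2 ms for 7200 RPM and 5.6 ms for 5400 RPM, which is why both are dwarfed by typical internet round-trip times.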
Of course, a cache will save a ton of I/O load on the HDD, which is always nice, for plenty of reasons.
But that is a storage-solution concern; it doesn't really have anything to do with the storagenode software.