The idea is to make it possible for nodes to cache some frequently read files on fast drives.
This would make hot files very fast to read from nodes.
The node would also perform fewer seek operations and respond faster on writes.
It would work like the cache every HDD has for better performance.
It should be possible to specify a directory and a size limit for it. For example, my servers have unused NVMe drives of only 200-500 GB; that is not enough to run as a node, but as a buffer they would be perfect.
Maybe it is possible to do this in hardware somehow?
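One way to get a "give it a directory and a size" cache today, without node support, is a block-level read cache such as lvmcache. This is only a sketch under assumptions: the volume group `vg_node`, the data LV `node_data`, and the device `/dev/nvme0n1` are hypothetical names, and it assumes the node's data already sits on LVM.

```shell
# Hypothetical names: vg_node / node_data / /dev/nvme0n1.
# Writethrough mode means the cache only accelerates reads; every write
# still hits the HDD synchronously, so a dying NVMe cannot lose
# acknowledged data.
pvcreate /dev/nvme0n1
vgextend vg_node /dev/nvme0n1
lvcreate --type cache --cachemode writethrough -L 400G \
         -n node_cache vg_node/node_data /dev/nvme0n1
```

This uses a 400 GB slice of the NVMe, matching the 200-500 GB drives mentioned above; the rest of the device stays free.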
This is only meaningful for sequential reads of very large files located on the outer tracks of your drive. When random access is involved (e.g. small files, especially scattered across the disk) it is not unusual to see as low as 0.01 MB/s top throughput.
The reason is that seek time is not zero.
Hence, caching is employed to offload random IO from disks: to minimize the time disks spend moving heads and waiting for the sector to fly by, and to maximize the time they spend reading and writing data.
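A rough back-of-the-envelope model shows why random IO is so slow. The numbers below are assumed typical values for a 7200 RPM HDD, not measurements from any particular drive:

```python
# Rough model of random small-read throughput on a 7200 RPM HDD.
# All figures are assumed/typical, for illustration only.
avg_seek_ms = 8.5          # typical average seek time
avg_rot_latency_ms = 4.17  # half a revolution at 7200 RPM
block_kb = 4               # one small random read

# For tiny blocks the transfer time itself is negligible,
# so each IO costs roughly one seek plus rotational latency.
ms_per_io = avg_seek_ms + avg_rot_latency_ms
iops = 1000 / ms_per_io
throughput_mb_s = iops * block_kb / 1024

print(f"{iops:.0f} IOPS, {throughput_mb_s:.2f} MB/s")
```

About 79 IOPS and roughly 0.3 MB/s, versus 150+ MB/s sequential: the heads spend almost all their time moving, not transferring data. Smaller or more scattered access patterns push this even lower.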
OK, after a few days I have some statistics about cache usage.
On 8x 4 TB nodes I set up a 1 TB NVMe cache, shared for read/write.
These statistics cover the weekend, starting on Friday; during the first 24 h the filewalker kept the HDDs working at 100%.
I use it differently: RAM -> SSD -> HDD writes, or SSD -> HDD writes. Read speed is not a problem because read load is not high in Storj. Next step: you speed up writes with this software, then corrupt or lose some files and get disqualified.
P.S. NVMe for cache is very expensive. For read data, a cheap SSD from AliExpress is the best option.
I don't use RAM, so I disabled the L1 cache entirely, because a power cut would kill all data in L1. (Maybe this is what happened with your node?)
My main purpose is to speed up reads from the cache; this way I also take write load off the HDDs.
As you can see in my post, 15%-65% of reads are served from the cache, which leaves more time for writes to proceed normally.
After your post, I made the cache read-only. This protects against write problems and speeds up reading.
It also leaves more time for faster writes, because the same files are not read from the HDD several times.
RAM is too small to make this effective, so I used a 1 TB NVMe.
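The read-only cache described above is essentially a read-through LRU cache: hits are served from fast storage, misses fall back to the HDD and get cached, and writes bypass the cache entirely so a cache failure can never corrupt stored pieces. A minimal sketch of the idea (the paths, sizes, and `read_from_hdd` callback are illustrative, not the node's real API):

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-through LRU cache sketch (illustrative only)."""

    def __init__(self, capacity_bytes, read_from_hdd):
        self.capacity = capacity_bytes
        self.read_from_hdd = read_from_hdd  # fallback: slow HDD read
        self.used = 0
        self.entries = OrderedDict()        # path -> bytes, LRU order

    def read(self, path):
        if path in self.entries:
            self.entries.move_to_end(path)  # cache hit: mark recently used
            return self.entries[path]
        data = self.read_from_hdd(path)     # cache miss: go to the HDD
        self.entries[path] = data
        self.used += len(data)
        while self.used > self.capacity:    # evict least recently used
            _, old = self.entries.popitem(last=False)
            self.used -= len(old)
        return data
```

A second read of the same file never touches the HDD, which is exactly what frees the disks up for writes; and because nothing is ever written through the cache, losing the SSD loses nothing.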
Interesting: to change the config yesterday I did a restart. The filewalker on all HDDs completed in 6 h while I was sleeping; the previous restart took 24 h to complete. After the restart the cache SSD was reading at 120 MB/s and the HDDs were at 4-8% load. So it looks like the cache also holds the whole file structure, and that makes everything faster.
If you need to do that in a "real" fast way, you need to move the files' metadata onto an NVMe or SSD array. It can be done with a ZFS special device.
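For reference, attaching a special vdev might look like this. The pool name `tank`, dataset `tank/storagenode`, and device paths are hypothetical:

```shell
# A special vdev stores the pool's metadata (and optionally small
# blocks) on fast storage. Losing it loses the whole pool, so it
# should always be mirrored.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally also store small blocks (<= 32K here) on the special vdev:
zfs set special_small_blocks=32K tank/storagenode
```

Note that only metadata written after the vdev is added lands on it; existing metadata stays on the HDDs until the data is rewritten.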
Or you can simply add