LVM Caching for better performance

Hello,

I'm thinking about dedicating my spare SSD (1TB Netac) as a cache for my HDDs, especially my Storj node, to prevent high I/O wait. Does anybody have experience with this or has done it? Which would be the best mode for this (writethrough or writeback)? Will it improve the latency and thereby the performance of the node? Thanks in advance.
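For context, this is roughly the lvmcache setup I have in mind; the device and volume names are placeholders for my layout and I haven't tested it yet (the --cachevol form needs a reasonably recent LVM, older versions use --cachepool instead):

```bash
# Add the spare SSD to the volume group that holds the node's data LV
vgextend vg_storage /dev/sdX              # /dev/sdX = the 1TB Netac SSD (placeholder)

# Create a cache volume on the SSD
lvcreate -n node_cache -L 900G vg_storage /dev/sdX

# Attach it to the existing data LV; the mode can be writethrough or writeback
lvconvert --type cache --cachevol vg_storage/node_cache \
          --cachemode writethrough vg_storage/node_data
```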


I haven't exactly used that.

The way these terms are defined elsewhere… might not apply here… but I would assume it does.

Writethrough is usually when data is written directly to the storage and also kept in the cache, I guess… based upon what @CutieePie says.

and

Writeback is when data is written to the cache first and only later written back to the storage.

Hence the name writeback.
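If that's right, then with lvmcache the mode should just be a flag you can flip on the already-cached LV, something like this (volume names are made up, and I haven't verified it on a live node):

```bash
# List the LVs, including the hidden cache volume, to see what's attached
lvs -a vg_storage

# Switch between the two modes: writeback buffers writes on the SSD and flushes
# them later, writethrough always writes to the HDD as well (safer if the SSD dies)
lvchange --cachemode writeback    vg_storage/node_data
lvchange --cachemode writethrough vg_storage/node_data
```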

2GB seems so low… but I must admit I have seen some of the same issues when trying to run big L2ARCs.
It's not always beneficial, due to all the overhead of managing it…

It's also based upon block size… the manual for my swap SSD says it needs like 3GB of RAM for 4Kn blocks, and 512B blocks take 8x that, so something like 24GB of RAM just to manage the 1.6TB swap SSD…

And I'm sure it can be much, much worse for a cache, since it's all small "files" or metadata.
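On Linux ZFS you can at least see what the L2ARC headers actually cost in RAM; a minimal sketch, assuming the usual OpenZFS-on-Linux kstat path:

```bash
# RAM consumed by L2ARC headers (bytes) - the "overhead to manage it"
grep l2_hdr_size /proc/spl/kstat/zfs/arcstats

# Overall ARC and L2ARC sizes for comparison
grep -wE '^(size|l2_size|l2_asize)' /proc/spl/kstat/zfs/arcstats
```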

A better solution for me seems to be to use the SSD as a general cache for my whole system; maybe I'll ditch LVM because of the volume-group creation etc. Is it possible to use the 1TB as a swap partition? So basically move the swap from the system drive to a separate drive? My swap is always full until I force it back to RAM by turning swap off and on again.
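From what I've read, moving swap to its own drive should just be a matter of something like this, assuming the SSD shows up as /dev/sdX and I give a partition on it to swap (the UUID and old partition are placeholders):

```bash
# Create and enable swap on the dedicated SSD (destroys whatever is on that partition!)
mkswap /dev/sdX1
swapon /dev/sdX1

# Make it permanent, preferably by UUID so it survives device renames
blkid /dev/sdX1                                        # note the UUID
echo 'UUID=<uuid-from-blkid> none swap sw 0 0' >> /etc/fstab

# Disable the old swap on the OS drive
swapoff /dev/<old-swap-partition>

# And the "force it back to RAM" trick is just:
swapoff -a && swapon -a
```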

I do not have direct experience with LVMcache, nor any caching with Storj. But I used to use bcache for my workstation. I recall I chose it because some benchmarks at the time showed it performing better than any other solution of this type, including LVMcache.

I took a look and found out my swap is just 1GB, which is pretty puny. So I'll expand it. I'm running Openmediavault with 64GB RAM on a 500GB NVMe SSD, of which only around 40GB is used; the rest is empty. I'll move the partition for VMs etc. from the OS drive to my 1TB drive and use the freed space to increase the swap partition to around 128GB. I think this should be enough, it's around 128x the size it has atm :smiley:
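Once that's done, verifying the new swap is actually picked up should just be:

```bash
swapon --show     # lists active swap devices and their sizes
free -h           # shows total and used swap at a glance
```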

Unfortunately bcache needs its own superblock on the drive you want to cache, as I understood it. So you have to format the drive, which is bad for adding it to an existing setup, like in my case. For a new setup it seems to be a good choice.

Any logical volume is a block device. My bcache was just a logical volume inside LVM.
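Roughly like this, if I remember the bcache-tools workflow right; the device names are examples, and it does still format both devices, so it's only for a fresh setup:

```bash
# Format the backing device (here: an LVM logical volume) and the caching SSD
make-bcache -B /dev/vg_storage/node_data_new
make-bcache -C /dev/sdX                               # the SSD

# Once udev registers both, a /dev/bcache0 device appears; attach the cache set
bcache-super-show /dev/sdX | grep cset.uuid
echo <cset-uuid> > /sys/block/bcache0/bcache/attach   # use the UUID from above

# Then put the filesystem on /dev/bcache0 instead of the raw LV
mkfs.ext4 /dev/bcache0
```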


Not sure how useful my swap is, but it does get used a bit…
It's on a dedicated SSD, less than 1/5 of main memory though…

My swap configuration is the Debian default; I assume most of what's in it is inactive stuff from main memory, which is why I initially set it up, and I had an old SSD which wasn't really good for much else.
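If you want to check what's actually sitting in swap per process, something simple like this works, just reading VmSwap out of /proc:

```bash
# Top 10 processes by swap usage (values are in kB)
for f in /proc/[0-9]*/status; do
    awk '/^Name:/{n=$2} /^VmSwap:/{print $2, n}' "$f"
done | sort -rn | head
```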

I'm running 8 Debian containers and 5 VMs, plus 3-5 more VMs from time to time, but those extra ones aren't 24/7 workloads. I can't say I've seen any amazing results compared to running without swap; maybe the system has been more stable, and memory management is much better.

It usually takes two or three days for the swap size to stabilize.
I can also see that the disk has activity and that the swap size changes depending on what I'm doing… so it's doing something lol
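You can watch that activity live too; the si/so columns in vmstat are swap-in and swap-out per second:

```bash
vmstat 1     # watch the si / so columns while the node is busy
```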

Oh yeah, I also run ZFS, so it soaks up a lot of excess memory… if not all of it.
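If the ARC eating everything ever becomes a problem, it can be capped with the zfs_arc_max module parameter; a sketch capping it at 16 GiB (value is in bytes):

```bash
# Persist the cap across reboots
echo 'options zfs zfs_arc_max=17179869184' >> /etc/modprobe.d/zfs.conf

# Or change it on the running system without rebooting
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```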