LVM Caching for better performance

Hello,

I'm thinking about dedicating my spare SSD (1TB Netac) as a cache for my HDDs, especially my Storj node, to prevent high I/O wait. Does anybody have experience with this or has done it? Which would be the best mode for this (writethrough or writeback)? Will it improve the latency and thereby the performance of the node? Thanks in advance.

yes, I use LVMcache.

Maximum size is 2GB per logical volume containing Storj data; you can get away with 1GB.

1TB will not work, don't even try it unless you want kernel issues. More is not more: you will cause the host to die unless you have a very high CPU spec and lots of memory.

Leave it at the default, writethrough.
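If you ever do want to switch modes after setup, lvchange has a --cachemode option; something like the following should work (a sketch only, using the volume group and LV names from the example below; check the man page for your lvm2 version):

lvchange --cachemode writeback storjvolumegroup/nameof-lv-of-storj-data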

… yes, it improves performance, but it takes time. LVMcache takes a good 1-3 weeks to settle down. Where I find it's good is with the small pieces: they generally get stuck in the cache, along with all the SQLite databases. If you also have your logging redirected to your Storj volume, this benefits as well.

You will notice instantly that the HDD makes less chattering noise.

To implement:

  • add your SSD as a physical volume

  • add the SSD to the volume group used for the Storj logical volumes using:

vgextend
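For example (assuming the SSD shows up as /dev/sdf and the volume group is called storjvolumegroup, matching the example further down; adjust for your devices):

pvcreate /dev/sdf
vgextend storjvolumegroup /dev/sdf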

  • create the logical volume to cache a Storj volume

lvcreate -L 2GB -n cachenode01 <vgname> <ssd path>   <---- really important that you add the SSD path

e.g. lvcreate -L 2GB -n cachenode01 storjvolumegroup /dev/sdf

  • add the cache to a logical volume

lvconvert --type cache --cachevol cachenode01 storjvolumegroup/nameof-lv-of-storj-data

  • check the cache with an lvdisplay; you will see new cache usage and hit rate fields
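For example, something like this should show the cache counters as columns (a sketch; exact field names can vary between lvm2 versions):

lvs -o +cache_total_blocks,cache_used_blocks,cache_read_hits,cache_read_misses storjvolumegroup/nameof-lv-of-storj-data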

  • remove cache with

lvconvert --uncache /dev/storjvolumegroup/nameof-lv-of-storj-data

Note: data loss will occur if you don't know how to manage LVM; the above commands need to be adapted to your specific case. If in doubt, you should trial this on a system where data loss doesn't matter.

CP


I haven't exactly used that.

The definition from other caching setups… might not apply… but I would assume it does.

Writethrough is usually when data is written directly to the backing storage and also kept in the cache, I guess… based upon what @CutieePie says.

and

writeback is when data is written to the cache and only later written back to the storage,

hence the name writeback.

2GB seems so low… but I must admit
I have seen some of the same issues when trying to run big L2ARCs;
it's not always beneficial, due to all the overhead of managing it…

It's also based upon block size… the manual for my swap SSD says it needs something like 3GB of RAM with 4Kn sectors, and with 512B sectors that's 8x as many blocks to track, so around 24GB of RAM just to manage the 1.6TB swap SSD…

And I'm sure it can be much, much worse for a cache, since it's all small "files" or metadata.

A better solution for me seems to be to use the SSD as a general cache for my whole system; maybe I'll ditch LVM due to the volume group creation etc. Is it possible to use the 1TB as a swap partition? So basically move the swap from the system drive to a separate drive? My swap is always full until I force it back into RAM by turning swap off and on again.

I do not have direct experience with LVMcache, nor any caching with Storj. But I used to use bcache for my workstation. I recall I chose it because some benchmarks stated it’s better than any other solution of this type at the time, including LVMcache.

So… your swap should never be full; it should be set to around 2-3 times system memory.

Full swap could mean you are simply running too much, or an application has a memory leak.

1TB of swap on an SSD is a bit overkill? I'm assuming the OS is on an SSD as well? If not, that would benefit from being moved onto an SSD.

Do you want to post more details about your system? It's hard to give ideas without knowing… but as per my original answer, to improve disk I/O, LVMcache is one of many options you could use.

I took a look and found out my swap is just 1GB, which is pretty puny, so I'll expand it. I'm running Openmediavault with 64GB RAM on a 500GB NVMe SSD. I've only used around 40GB; the rest is empty. I'll move the partition for VMs etc. from the OS drive to my 1TB drive and use the freed space to increase the swap partition to around 128GB. I think this should be enough; it's around 128x the size it has atm :smiley:
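In case it helps, the usual steps look roughly like this (just a sketch; /dev/nvme0n1p3 is a hypothetical partition name for the new swap space, use your actual device, and ideally its UUID, in /etc/fstab):

mkswap /dev/nvme0n1p3
swapon /dev/nvme0n1p3
echo '/dev/nvme0n1p3 none swap sw 0 0' >> /etc/fstab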

Unfortunately bcache needs its own superblock on the drive you want to cache, as I understood it. So you have to reformat the drive, which is bad for adding it to an existing setup, like in my case. For a new setup it seems to be a good choice.

:scream:

Yeah, 1GB with 64GB RAM is a bit light, and if you're running VMs as well and your disk is slow, it will eat that memory up.

Sounds like a good idea.

Any logical volume is a block device. My bcache was just a logical volume inside LVM.
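For reference, a minimal bcache setup sketch (assuming /dev/sdb is an empty backing disk and /dev/vg0/bcachecache is a spare logical volume used as the cache device; the names are hypothetical and this wipes the backing disk):

make-bcache -B /dev/sdb -C /dev/vg0/bcachecache
mkfs.ext4 /dev/bcache0

Once the bcache module registers the devices, the cached /dev/bcache0 device shows up and can be formatted and mounted like any other disk.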

[screenshot of swap usage]

Not sure how useful my swap is, but it does get used a bit…
It's on a dedicated SSD, less than 1/5 of main memory though…

My swap configuration is the Debian default; I assume most of it is inactive stuff from main memory, which is why I initially installed it, and I had an old SSD which wasn't really good for much else.

Running 8 Debian containers and 5 VMs, plus 3-5 more VMs from time to time, but those extra ones aren't 24/7 workloads. I can't say I've seen any amazing results compared to running without swap; maybe the system has been more stable, and memory management is much better.

It usually takes a couple or three days for the swap size to stabilize.
I can also see that the disk has activity and the swap size changes depending on what I'm doing… so it's doing something lol
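If you want to see that in numbers, the standard tools are enough (nothing Storj-specific):

swapon --show
free -h
vmstat 1 5

The si/so columns in vmstat are the swap-in/swap-out rates, so you can watch it working in real time.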

Oh yeah, I also run ZFS, so it soaks up a lot of excess memory… if not all of it.