I'm thinking about dedicating my spare SSD (1TB Netac) as a cache for my HDDs, especially my Storj node, to prevent high I/O wait. Does anybody have experience with this or done it? Which would be the best solution for this (writethrough or writeback)? Will it improve the latency and thereby the performance of the node? Thanks in advance.
Maximum size is 2GB per logical volume containing Storj data; you can get away with 1GB.
A 1TB cache will not work; don't even try it unless you want kernel issues. More is not better: you will cause the host to die unless you have a very high CPU spec and plenty of memory.
Leave at default - writethrough
… yes, it improves performance, but it takes time: LVMcache takes a good 1-3 weeks to settle down. Where I find it's good is with the small pieces; they generally get stuck in the cache, along with all the SQLite databases. If you also have your logging redirected to your Storj volume, this benefits too.
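As a rough way to watch the cache settle in over those weeks, `lvs` can report cache occupancy and hit/miss counters. This is only a sketch; `vg_storj/node01` is a hypothetical volume-group/LV name, so substitute your own:

```shell
# Report cache occupancy and hit/miss counters for a cached LV.
# vg_storj/node01 is a placeholder name; substitute your own.
lvs -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks,cache_read_hits,cache_read_misses \
    vg_storj/node01

# dmsetup exposes the raw dm-cache status line for the same device
# (LVM maps vg/lv to a vg-lv device-mapper name).
dmsetup status vg_storj-node01
```

The hit counters climbing relative to the misses is a reasonable sign the hot pieces and SQLite databases have landed in the cache.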
You will notice instantly that the HDD makes less noise.
add your SSD as a physical volume
add the SSD to the volume group used for your Storj logical volumes
create the logical volume to cache a Storj volume
lvcreate -L 2G -n cachenode01 'vgname' 'ssd path' <---- really important that you specify the SSD path
— Note: Data loss will occur if you don't know how to manage LVM. The above commands need to be adapted to your specific case. If in doubt, you should trial this on a system where data loss doesn't matter.
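Putting the steps above together, a minimal sketch could look like the following. The device and volume names (`/dev/sdb`, `vg_storj`, `node01`) are assumptions, these commands rewrite disk metadata, and `--cachevol` needs a reasonably recent LVM (older releases use a separate cache-pool setup instead):

```shell
# WARNING: destructive to anything already on /dev/sdb.
# Placeholders: /dev/sdb = spare SSD, vg_storj = existing volume group,
# node01 = logical volume holding the Storj data.
pvcreate /dev/sdb                                   # SSD as a physical volume
vgextend vg_storj /dev/sdb                          # add it to the Storj VG
lvcreate -L 2G -n cachenode01 vg_storj /dev/sdb     # 2GB cache LV, on the SSD only
lvconvert --type cache --cachevol cachenode01 \
          --cachemode writethrough vg_storj/node01  # attach as writethrough cache
```

Passing `/dev/sdb` to `lvcreate` is the "really important" part above: it pins the cache LV to the SSD instead of letting LVM allocate it on the HDDs.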
A better solution for me seems to be to use the SSD as a general cache for my whole system; maybe I'll ditch LVM due to the volume-group creation etc. Is it possible to use the 1TB as a swap partition? So basically move the swap from the system to a separate drive? My swap is always full until I force it back to RAM by turning swap off and on again.
I do not have direct experience with LVMcache, nor any caching with Storj. But I used to use bcache for my workstation. I recall I chose it because some benchmarks at the time stated it was better than any other solution of this type, including LVMcache.
I took a look and found out my swap is just 1GB, which is pretty puny, so I'll expand it. I'm running OpenMediaVault with 64GB RAM on a 500GB NVMe SSD; I've only used around 40GB, the rest is empty. I'll move the partition for VMs etc. from the OS drive to my 1TB drive and use the freed space to increase the swap partition to around 128GB. I think this should be enough; it's around 128x the size it has at the moment.
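For reference, moving swap to a dedicated partition only takes a few commands. This is a sketch under assumptions: `/dev/sdb2` is a placeholder for the new swap partition, and `mkswap` destroys whatever is on it:

```shell
# /dev/sdb2 = placeholder for the new swap partition on the 1TB drive
mkswap /dev/sdb2                  # write a swap signature (destructive)
swapon /dev/sdb2                  # enable it immediately
echo '/dev/sdb2 none swap sw 0 0' | sudo tee -a /etc/fstab   # persist
swapon --show                     # verify the active swap devices and sizes
```

Using the partition's UUID in `/etc/fstab` instead of the `/dev` path is more robust if drive ordering can change between boots.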
Unfortunately, bcache needs its own superblock on the drive you want to cache, as I understand it. So you have to reformat the backing drive, which is bad for retrofitting an existing setup like mine. For a new setup it seems to be a good choice.
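To illustrate why that is: bcache (via `make-bcache` from `bcache-tools`) writes its own superblock to both the backing and the cache device, so a fresh setup looks roughly like this. Device names are placeholders and both `make-bcache` calls are destructive:

```shell
# Placeholders: /dev/sdc = backing HDD, /dev/sdb = caching SSD.
# Both commands wipe the device they format.
make-bcache -B /dev/sdc          # format the backing device (this is the
                                 # reformat step that blocks retrofitting)
make-bcache -C /dev/sdb          # format the cache device
# Attach the cache set to the backing device; the cache-set UUID
# can be read with 'bcache-super-show /dev/sdb'.
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0           # the filesystem lives on the bcache device
```

Because the filesystem ends up on `/dev/bcache0` rather than on the HDD directly, existing data has to be copied off and back, which is exactly the drawback mentioned above.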
Not sure how useful my swap is, but it does use a bit…
It's on a dedicated SSD, less than 1/5 of main memory though…
My swap configuration is the default of Debian. I assume most of it is inactive stuff from main memory, which is why I initially installed it; I had an old SSD which wasn't really good for much else.
I'm running 8 Debian containers and 5 VMs, plus 3-5 more VMs from time to time, but those extra ones aren't 24/7 workloads. I can't say I've seen any amazing results compared to running without swap; maybe the system has been more stable, and memory management is much better.
It usually takes a couple or three days for the swap size to stabilize.
I can also see that the disk has activity and the swap size changes depending on what I'm doing… so it's doing something, lol.
Oh yeah, I also run ZFS, so it soaks up a lot of excess memory… if not all of it.
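If the ARC soaking up memory ever becomes a problem alongside the VMs and swap, it can be inspected and capped at runtime. A sketch, where the 16 GiB figure is only an example value:

```shell
# Inspect the current ARC size and its ceiling (bytes).
awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 16 GiB for this boot (example value; 16 * 1024^3 bytes).
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

# Persist the cap across reboots via a modprobe option.
echo 'options zfs zfs_arc_max=17179869184' > /etc/modprobe.d/zfs.conf
```

The ARC shrinks under memory pressure on its own, but an explicit cap makes the headroom for VMs predictable.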