Update on SSD wear for the LVM+ext4 read cache. I've been logging my SMART TBW daily. So far there are two distinct usage periods:
- July-Aug: migrating from ext4 & NTFS to LVM, with multiple sync jobs running.
- Since August (10 days): unattended normal operation.
Total is ~25TB used out of the 50TB allocated across 4 nodes/disks.
Here’s a chart of daily GBs written.
The peaks at the end of July were final move operations: moving entire nodes from cached disks to their final location, pre-allocating some new vhdx caches, etc. The peaks in the last few days are likely TTL deletions.
My SSD is a consumer Kingston KC3000 2TB with nothing on it but the caches currently. It is rated for 1.6 PBW, which translates to 894GB/day over 5 years.
All else being equal, at the current ~130GB/day the SSD should support this cache usage for about 30 years.
When the caches fill up, assuming usage rises to ~300GB/day, roughly 14 years.
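The endurance arithmetic above can be sketched as below. The figures (1.6 PBW rating, 130 and 300 GB/day) are from this post; the function name is my own, and the exact result shifts a year or two depending on the PB-to-GB and days-per-year conventions you pick:

```python
# Rough SSD lifetime estimate from rated write endurance and a
# constant daily write rate. Numbers are the ones from this post.
RATED_ENDURANCE_GB = 1.6e6  # 1.6 PBW, taken as 1,600,000 GB

def years_of_life(gb_per_day: float, rated_gb: float = RATED_ENDURANCE_GB) -> float:
    """Years until the rated endurance is exhausted at a constant write rate."""
    return rated_gb / gb_per_day / 365

print(f"{years_of_life(130):.1f} years at 130 GB/day")  # current workload
print(f"{years_of_life(300):.1f} years at 300 GB/day")  # caches full
```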
Given all the benefits of this setup (i.e. it solved everything) I’m definitely sticking with it without worry.
Last thing: I'm currently using oversized 256GB caches (18GB/TB). From my previous post I estimate that 128GB should be more than enough (4-8GB/TB). At some point I'll recreate the caches and compare again.
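For anyone wanting to redo the sizing for their own setup, here's a minimal sketch of the ratio math. It assumes the post's layout of 50TB split evenly across 4 disks (so ~12.5TB behind each cache); the approximate match to the 256GB figure is why 18GB/TB is listed alongside the leaner targets:

```python
# Cache size per disk for a target GB-of-cache per TB-of-backing-storage
# ratio. Assumes 50 TB allocated evenly across 4 disks, per the post.
backing_tb_per_disk = 50 / 4  # ~12.5 TB behind each cache

for gb_per_tb in (4, 8, 18):
    cache_gb = gb_per_tb * backing_tb_per_disk
    print(f"{gb_per_tb:>2} GB/TB -> {cache_gb:.0f} GB cache per disk")
```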