Maximise RAM use

Dear all
The old ReadyNAS I’m using to run 4 of my nodes has 48GB of ECC RAM.
I was wondering if there is any way that I can put that much RAM to better use, possibly somehow increasing node performance?
I wouldn’t want to play with extreme things like RAM disks for databases, as I think the risk of data loss or corruption is unacceptably high. Caching data is also unlikely to be of huge help, as I suspect access to data blobs is pretty much random. Are there any other “tweaks” you might recommend?

Thank you in advance for your suggestions :slight_smile:

Don’t worry about that and let the kernel take care of it in the form of buffers and cache. I estimate that for a regular ext4 file system, for every 1 TB of stored node data, metadata takes about a gigabyte of cache. Meaning, your 48 GB of RAM will be just right when your nodes hit about 48 TB of stored data. If you follow my suggestion to tune down the size of an inode, that number doubles.
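To make that concrete: the inode size is fixed when the file system is created, so shrinking it means reformatting. A minimal sketch, with placeholder device and label names:

```bash
# Sketch only: create an ext4 file system with 128-byte inodes instead
# of the 256-byte default, roughly halving the inode metadata to cache.
# /dev/sdX1 is a placeholder for your actual data partition.
mkfs.ext4 -I 128 -L node1 /dev/sdX1

# See how much RAM the kernel already spends on cache, and how much of
# that is file system metadata (dentry/inode slabs):
free -h
slabtop -o | head -n 20
```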

And if you trust your UPS, tools like nosync will indeed help.
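If you want to go that route, the widely packaged libeatmydata does the same job as nosync-style tools by turning fsync()/fdatasync() into no-ops. A sketch only; the storagenode path and config directory below are placeholders for however you actually start your node:

```bash
# Wrap the node process so sync calls become no-ops.
# Only reasonable if you trust your UPS; a power loss can lose
# recently written data.
eatmydata /path/to/storagenode run --config-dir /path/to/config
```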

Why? You would offload that IO from the disks, which is a win. Data in RAM does not corrupt, especially with ECC. If you reboot, the databases vanish, which is no big deal; that data is useless anyway. Lacking a better option, I would definitely do that.
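For illustration, a rough sketch of what a RAM-backed database directory could look like. The paths and size are assumptions, and it relies on the storage2.database-dir option (if your node version supports it) to relocate the SQLite databases:

```bash
# Sketch with placeholder paths: keep the node databases on tmpfs.
# They vanish on reboot, but the node can recreate them.
mkdir -p /mnt/node-dbs
mount -t tmpfs -o size=2G tmpfs /mnt/node-dbs
cp /srv/node1/storage/*.db /mnt/node-dbs/   # seed from the existing databases

# Then point the node at the new location, e.g. in config.yaml:
# storage2.database-dir: /mnt/node-dbs
```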

Block-level caching in RAM is extremely helpful. Once all metadata is cached, you eliminate all the random IO associated with it from the disks. Running the filewalker on node start accomplishes this as a side effect.
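If you don’t want to wait for the filewalker, you can warm that cache by hand; the path below is a placeholder for your storage directory:

```bash
# Stat every blob so its dentry/inode ends up in the kernel cache,
# much like the filewalker does on node start.
du -s /srv/node1/storage/blobs > /dev/null
```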

Totally agree with @Toyoo’s advice above, and you can search for other recommendations like turning off access time updates and synchronous writes.
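For example, access-time updates can be dropped with a mount option; the device and mount point below are placeholders:

```bash
# In /etc/fstab (placeholder device/mount point), noatime stops the
# kernel from issuing an inode write on every read:
#   /dev/sdX1  /srv/node1  ext4  defaults,noatime  0  2

# Or apply it to an already mounted file system without a reboot:
mount -o remount,noatime /srv/node1
```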

On the other hand, if you feel adventurous, replace whatever OS is there with something that supports ZFS, such as TrueNAS (I’ve heard Unraid recently added support as well). This opens a few more avenues for performance improvement: ARC instead of a simple LRU cache, which may be the best use of your 48 GB, and special devices, which further offload random IO by keeping both metadata and small files on an SSD even before the RAM cache kicks in. And then you can still choose to add a second-layer cache to squeeze out the final drops of performance if you want to.
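To make the ZFS side concrete, a rough sketch; the pool name and device paths are placeholders, and the details vary with the ZFS version your OS ships:

```bash
# Raise the ARC ceiling so it can use most of the 48 GB (value in bytes):
echo 40000000000 > /sys/module/zfs/parameters/zfs_arc_max

# Special vdev: keep metadata (and optionally small blocks) on SSDs.
# Always mirror it - losing the special vdev loses the pool.
zpool add tank special mirror /dev/sdX /dev/sdY
zfs set special_small_blocks=16K tank

# Optional second-layer read cache (L2ARC) on another SSD:
zpool add tank cache /dev/sdZ
```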

I achieve around a 31% hit rate with a 23 GB M.2 SSD read cache for the blobs. It helps to win upload races…

Do you have a UPS?

If yes, a small write cache for the databases and blobs is also nice; two or three digits of MB will be enough.
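One possible way to build such a cache (I’m only guessing at the tooling; the volume group, volume names and sizes are placeholders) is lvmcache fronting an existing LVM volume with an SSD:

```bash
# Assumes the node data lives on LVM volume vg0/node1 and the SSD has
# been added to the same volume group as /dev/nvme0n1.
lvcreate -L 23G -n blobcache vg0 /dev/nvme0n1
# writeback keeps dirty data on the SSD first, so only use it with a UPS.
lvconvert --type cache --cachevol blobcache --cachemode writeback vg0/node1
```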

Oh, that’s interesting. Thank you :slight_smile:

Yes, I have two UPSs as my nodes are running on a ReadyNAS with two redundant PSUs. Each PSU has its own UPS.

Unfortunately the OS is fairly proprietary (although running on Debian Jessie), so I’m not sure how I would go about doing that.