What's the better hardware configuration, HDD or NVMe?

Out of curiosity, why?

Before a piece of memory gets paged out, the OS first tries to evict disk cache to free up space. This alone should not be allowed to happen, because at that point performance plummets: it does not matter how fast you can page memory back in if every file access now involves extra disk I/O because the cache was evicted. So it has already failed right here.

But continuing: if evicting the disk cache did not free enough RAM, memory gets compressed in place. And only then, if the system is still starving, are compressed pages offloaded to disk. A system at this point is in agony; making its bed slightly softer does not help anything.

The page file has its legitimate uses, but none of them apply to a device that runs a storage node. (Things like paging out everything possible before hibernating (S4 sleep), to accelerate wake-up by needing to restore less data.)

Ok, I knew there would be nice arguments to be made here :stuck_out_tongue:

My systems are usually configured with a small level of swappiness. This is because I found that on long-running systems there are usually hundreds of megabytes (if not gigabytes, depending on the software running) of data in memory that is used only very rarely. Like a DHCP client waking up once a week. Or exim, which I run for the once-in-a-blue-moon alert. Or, for some reason, containerd-shim: I run dozens of Docker containers, each shim takes a few megs and does basically nothing unless I actually change the state of the container it manages. On my NAS, I literally see about a gigabyte of data that is only rarely paged in. Letting these pages live in swap frees up space for buffers and caches. As a result, resident process memory is 22% of RAM, the rest is pretty much buffers and cache, while kswapd0 sits quietly in the corner. No extensive swapping involved.
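(For reference, this is roughly how such a "small level of swappiness" is set on Linux; the value 10 below is just an illustration, not a recommendation:)

```
# Check the current value (the default on most distros is 60)
sysctl vm.swappiness

# Lower it for the running system
sudo sysctl vm.swappiness=10

# Persist the setting across reboots
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```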

So far the only systems where I do not set up swap are cloud ones without local storage; swap on a network device is not a good idea.

The kernel is pretty smart about that. It won't evict a cached page if it's popular. As long as you have enough RAM for the popular pages to stay in memory this way, the kernel won't try to evict them.
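(You can watch this balance on a live Linux box: the kernel keeps file-backed pages on active and inactive LRU lists and reclaims from the inactive list first:)

```
# buff/cache shows memory the kernel will reclaim before swapping hard
free -h

# Active(file) pages are the "popular" ones the kernel keeps;
# Inactive(file) pages are the eviction candidates
grep -E 'Active\(file\)|Inactive\(file\)' /proc/meminfo
```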

I do not have anything like zram/zswap installed. The box’s CPU is too weak for that ^^
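(If anyone wants to check whether either is active on their own box, a quick sketch; these paths assume a reasonably recent kernel:)

```
# zswap state, if the kernel was built with it (prints Y or N)
cat /sys/module/zswap/parameters/enabled

# any zram devices configured?
ls /dev/zram* 2>/dev/null
```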


lol I knew it was a loaded question, I bit on purpose :upside_down_face:

Yes, I agree, those rarely used, RAM-occupying, non-wired pages will eventually end up in swap, because, as you said, they are rarely used.

But by the same logic, the performance of paging them back in is not important either: they are rarely used! That swap file can therefore sit on slow storage, and wasting SSD space on it is pointless.
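(To illustrate, on Linux that is just a swap file created on the HDD mount instead of the SSD; /mnt/hdd is a placeholder path and ext4 is assumed, since fallocate-backed swap files don't work on every filesystem:)

```
# Put an 8 GiB swap file on the slow HDD (path is just an example)
sudo fallocate -l 8G /mnt/hdd/swapfile
sudo chmod 600 /mnt/hdd/swapfile
sudo mkswap /mnt/hdd/swapfile
sudo swapon /mnt/hdd/swapfile

# Make it permanent
echo '/mnt/hdd/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```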

I'm talking about disk cache, not stale RAM pages. It gets purged instantly once more memory is needed. Purging cache into the void is faster than paging memory off to disk; the cache is using otherwise unused RAM by design.
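(Easy to see on Linux: dropping the cache is essentially a bookkeeping operation with no disk writes involved. Harmless, but pointless outside of a demonstration:)

```
sync                                        # flush dirty pages first
echo 3 | sudo tee /proc/sys/vm/drop_caches  # 3 = pagecache + dentries/inodes
free -h                                     # buff/cache shrinks instantly, swap untouched
```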

OP's machine was Windows. It compresses unused pages; Synology does the same, IIRC. But in your case, yes, you have even less wiggle room. Add more RAM; don't let it get to the point where the disk cache is gone, let alone where the active dataset spills into the page file.

My point is that if you find yourself optimizing media performance for the swap file, something went wrong a few steps before. At the very least, in the context of hosting a storagenode.


That depends on the vm.swappiness parameter.

Right! I keep wondering where people find the money for these server licences…


A little update about my setup. The node is running fine.

I was a bit upset over the unused SSD space and did not want to set up a 3rd node (yet), so I bought PrimoCache.

I changed the volumes to 2 NTFS partitions, one 500 GB in size, and converted that one with PrimoCache into an L2 buffer partition. After that I re-enabled 3% overprovisioning (to mock arrogantrabbit :face_with_peeking_eye:).
I set the read-only buffer mode and…
… then I realised that write caching in PrimoCache ALWAYS goes PARTLY through RAM!!! So I officially need a UPS. (Vadim does it without protection :rofl:)

So I bought a UPS (@arrogantrabbit yes, you were right in the long run about that).
The cheapest BlueWalker even fits in the narrow space.

And some more RAM, despite it not really being necessary.

And I forgot about mobo compatibility, so it's DDR4 2400 instead of 3200, but I did not notice a negative difference. 47 of 64 GB are now in use.

(I will never financially recover from this :sob: maybe in 20 more months of my first node's life.)

So now I can go full-metal-cache it, with different cache paths for read-over-NVMe and write-over-RAM.
Even my system disk is now included. It indeed speeds up game loading times.

Also for the databases (again, arrogantrabbit was right in the end, even though I wouldn't have granted him that).

