[Solved] Win10 20GB RAM Usage

It's not inverted; you are enabling the "disableLastAccess" feature by setting it to 1.

I don’t think it’s per drive. I think it’s a global flag.
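
For reference, here's a quick way to check and flip it from an elevated prompt (a sketch assuming NTFS; the flag is system-wide, not per volume):

```
# Query the current setting (0/2 = last-access updates enabled, 1/3 = disabled)
fsutil behavior query disablelastaccess

# Disable last-access updates (system-wide; a reboot may be required)
fsutil behavior set disablelastaccess 1
```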

Okay, give me a sec to try it out

(screenshot)
What are deferred blocks?

BTW, as a data point: on my FreeBSD node with a ZFS filesystem, a 4-disk raidz1, and a special device (all metadata goes to SSD), today I was seeing over 1200 IOPS hitting the SSD just for metadata, while only 60 IOPS were reaching the disks.

There is no way an HDD can sustain over a thousand IOPS, hence the use of some sort of caching is unavoidable.
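
If anyone wants to see the split themselves, zpool reports per-vdev operations (a sketch; replace tank with your pool name):

```
# Per-vdev read/write operations per second, sampled every 5 seconds
zpool iostat -v tank 5
```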

(screenshots: IOPS graphs for the special device and the HDDs)

Best I can figure out atm

So the green one is how many milliseconds the disk takes for a read. It's not very easy to interpret :slight_smile: There must be a separate counter for IOPS.
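
If Windows exposes it anywhere, it should be under the PhysicalDisk counter set; a quick PowerShell probe (a sketch assuming English counter names):

```
# List the PhysicalDisk counters, then sample transfers/sec (reads + writes = IOPS)
Get-Counter -ListSet PhysicalDisk | Select-Object -ExpandProperty Counter
Get-Counter '\PhysicalDisk(_Total)\Disk Transfers/sec' -SampleInterval 2 -MaxSamples 5
```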

Yeah, I don't think there is one. I'm also looking at PrimoCache, and I might be able to finagle it by dedicating RAM specifically to the problematic drive and leaving a smaller, OS-managed amount for the other drives, since they seem to manage just fine.

I also made the access-time change, but it seems to have only half applied. Certain files on the F: drive keep getting new access times, but nowhere else (not the other drives or the system drive) is getting them.
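
A quick way to spot-check from PowerShell (the file paths below are placeholders):

```
# Compare last-access stamps on the suspect volume vs. another drive
(Get-Item 'F:\storagenode\somefile').LastAccessTime
(Get-Item 'D:\storagenode\somefile').LastAccessTime
```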

You can free up a few more IOPS by disabling 8.3 names:
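
(A sketch; run from an elevated prompt. Stripping existing short names can break apps that stored 8.3 paths, so use test mode first.)

```
# Check, then disable creation of 8.3 short names on all volumes
fsutil behavior query disable8dot3
fsutil behavior set disable8dot3 1

# Optionally remove short names already on the data volume (/t = test mode)
fsutil 8dot3name strip /t /s /v F:\
```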

Huh, did not know this. Will add to my virtualized node guide, thanks :slight_smile:

Last-access time seems to be disabled by default on Windows Server but enabled on consumer editions.

Also

https://www.cpubenchmark.net/cpu.php?cpu=AMD+A10-7860K&id=2722

Only 3277 points; that's not the fastest CPU even for one node. And you have how many?
I see at least two storagenode.exe services.

Any CPU is fine; the only concern would be if the node needed any hashing/encryption (I don't know if it does), but pretty much any modern CPU has hardware acceleration for cryptography. See "Data Encryption 910.1 MBytes/Sec" at your link.

Be fully aware that without redundancy/UPS/etc., any write caching can corrupt your file system. Think of the 4% disqualification rule, and don't just crank up an L1 or L2 write cache. Check the size of your MFT and make the read cache at least that size. Primo can't mess up a read cache, even in a mismatched-reboot scenario, but a write cache… use caution.
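
To size the read cache, the MFT size can be read straight from fsutil (F: is a placeholder volume):

```
# "Mft Valid Data Length" in the output is the current size of the MFT
fsutil fsinfo ntfsinfo F:
```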


It's the best CPU the motherboard can handle :sweat_smile: and even then, I'm sitting at ~35% CPU usage while remoting into it, so it's more than capable of handling the reads/writes.


Will try this as well - thanks!

So PrimoCache seems to have done the trick! It looks like even a 2-second deferred write gives it enough time to collect big enough blocks to flush to disk more efficiently. I also started playing around with the read cache at the same time.


(I have 32 GB total RAM, so giving up 20 GB seems fine to me)

(screenshot)
My network can only really support 150 Mbps at the moment, so a 1 GB write cache should be plenty; even if it fills up, the OS has 12 GB of RAM left over plus the swap file.
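
Back-of-the-envelope: 150 Mbps ≈ 18.75 MB/s, so a 2-second defer holds at most ~37.5 MB of dirty data at any time; a 1 GB cache is roughly 27× that.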

This leads to the following cache hit rates:
3 TB node (full): 26.79%
8 TB node (almost full): 3.80%
16 TB node (problematic child): 4.62%

So all in all, the read cache seems to save a bit of IOPS, and the deferred write ran overnight with no issues:
(screenshot)

I'll probably mark this as solved. Thanks a bunch for your advice @arrogantrabbit, and thanks for everyone else's comments!


I am 100% aware of the issue, but I'm in a relatively stable part of the world, so I don't expect power outages. Even then, with a 2-second defer I should be able to take several such crashes on the chin before starting to get DQ'd… hopefully :sweat_smile: If I notice the score dropping, I will of course have to look into a UPS or disabling deferred writes.


No, you can make free space by resizing a partition in Windows and use that free space as a cache.
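
A minimal PowerShell sketch of the shrink step (drive letter and target size are placeholders):

```
# See how far the volume can shrink, then resize it to leave room for the cache
Get-PartitionSupportedSize -DriveLetter D
Resize-Partition -DriveLetter D -Size 400GB
```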

This is an interesting result. The node now uses less than 1 GB, but you used 20 GB of RAM for the cache, so how is this different from the initial state?

You could also enable write caching on the disk via its Properties dialog, without using PrimoCache…

It's different because it's steady → before, the RAM would keep growing until my device hit 99%, then the node stops getting requests, network usage drops as the satellites stop sending data, and the node starts catching up a bit (RAM starts going down), then rinse and repeat.

Now the memory is pegged at a steady level. I could most likely reduce it from 20 GB to just 2-4 GB if I didn't want the read cache.
