Has anyone tried using Optane memory to boost performance? Lots of consumer motherboards only support up to 64GB of RAM. If you have several nodes, even one node with 4TB already has 42GB of metadata (MFT), so Optane should help hold more. It has roughly 60x faster response time than NVMe.
Where are you getting this 60x figure from? From general searching, comparing to something like a 980 Pro, Optane seems to be about 4-6x faster in 4K random read latency using AS SSD.
If the Optane drives are limited to 64GB, you can't hold more than one node.
Not Optane, but I'm using an NVMe (Samsung 900) as a cache for ext4 (LVM2 on Linux). The filewalker is down to a few hours on a 10TB node; I'm using something like a 100GB cache for a 16-18TB HDD.
With the cache there is a huge improvement in the filewalker, and I suspect there is less power consumption and less heat inside the case. With the filewalker hitting the cache instead of the HDD, I think the mechanical drives will have better life expectancy, since the HDD isn't at 100% for days and days.
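For anyone wanting to size a similar setup, here is the rough cache-to-data ratio implied by the numbers quoted above (the 100GB and 16-18TB figures are from the post; sizes are approximate):

```python
# Cache-per-TB ratio for ~100 GB of NVMe LVM cache in front of
# a 16-18 TB HDD (figures quoted from the post above).
cache_gb = 100
for hdd_tb in (16, 18):
    ratio = cache_gb / hdd_tb
    print(f"{hdd_tb} TB HDD -> {ratio:.1f} GB cache per TB")
```

That works out to roughly 5-6GB of cache per TB of stored data, which seems to be enough to keep metadata hot.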
I found in the datasheet that Optane read latency is 8µs, but there is no latency data in the 980 Pro datasheet; I only found it here:
In 4K random read, the Samsung 980 Pro 2TB came in right behind the 1TB version with a peak performance of 520,650 IOPS and a latency of 244µs.
Samsung 980 PRO SSD Review (2TB) - StorageReview.com
So 8µs vs 244µs is a very big difference.
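A quick sanity check on the two numbers quoted above (assumption: 8µs is the Optane datasheet read latency, 244µs is the 980 Pro 2TB figure at peak 4K random read load per StorageReview):

```python
# Ratio of the two latency figures quoted in this thread
# (datasheet Optane read latency vs. 980 Pro under peak load).
optane_us = 8
nvme_980pro_us = 244

ratio = nvme_980pro_us / optane_us
print(f"Optane is ~{ratio:.0f}x lower latency")  # ~30x
```

So by these figures it's closer to 30x than 60x, and note the comparison isn't quite apples-to-apples: the 244µs was measured at peak IOPS load, while the 8µs is a datasheet spec, which may explain why benchmark tools like AS SSD only show 4-6x.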
Optane latency is crazy small; the problem is that Optane drives are too small to hold more than just one node.
What are you using for cache on Windows with multiple drives? I'm trying to help someone with 20 Windows VMs and one node per VM. Can I open a thread and mention you to help him?
On Windows, for cache you can only use lots of RAM or the PrimoCache software, but that's a paid solution.
I'm using a slightly better setup: Debian with ext4 + LVM cache.
I was thinking more of using it as Optane Memory rather than as an NVMe drive; Windows uses it for page files, as an extension of RAM.
Would steering away from Windows also be a possibility?
- Less memory use in the first place
- zram-tools and zswap as a cache
- LVM cache or ZFS as possibilities to speed up the whole thing

This might all also be possible in WSL, although it costs you the overhead of an additional kernel and hypervisor.
I doubt whether Optane really increases the performance of storage nodes, especially since access is completely random. Besides, it's not actually RAM; it's more like a read and write cache for a drive, and actually only for your primary disk.
PrimoCache won't help with filewalkers and mass deletions; on the other hand, Tiered Storage does help significantly: Ways of speeding up filewalker on NTFS - #31 by xsys
Yeah, I agree with this. So far, storage node performance has benefited most from having metadata stored permanently, or at least cached, on an SSD. The problem is there is a lot of metadata. I've been allocating 5GB per 1TB of node space and it's been enough for ZFS L2ARC purposes, but still, that means a 112GB Optane drive could cache maybe 25TB max of Storj data.
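Sketching that estimate out, using the 5GB-of-cache-per-1TB-of-node-data rule of thumb above (an empirical L2ARC figure, not a guarantee) and a 112GB usable Optane capacity (assumed):

```python
# Rough capacity estimate from the 5 GB cache per 1 TB of node
# data rule of thumb (empirical figure for ZFS L2ARC metadata).
cache_gb_per_tb = 5
optane_size_gb = 112  # assumed usable capacity of the Optane drive

max_node_tb = optane_size_gb / cache_gb_per_tb
print(f"~{max_node_tb:.0f} TB of node data per 112 GB drive")
```

So a little over 22TB by strict arithmetic, in the same ballpark as the ~25TB figure above.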
And more importantly, SSD performance for metadata has been fast enough for everyone. I have a single old enterprise MLC SAS SSD, and it has been able to cache 8 nodes so far without maxing out on performance. Between the network being slow and the hard drive itself being slow, the extra latency improvement of Optane doesn't seem like it would matter.
PrimoCache is able to use an SSD as a cache too, as far as I know.
If so, then it probably can help. But it seems too expensive just to test?
Yes, a little bit; that's why I asked, in case someone has experience.
Yes it is, as well as a chunk of dedicated RAM. In both cases it will store some files (like 0.00000001% of all the files) in the cache, but it will not store the metadata / file table, so there is no speedup for the filewalker, which needs to read the metadata of millions of files from the HDD itself.
Whereas Storage Spaces does. I tested both on the same hardware: Primo = blazing fast file writes, slow random file reads (used-space filewalker: 7 hours). Storage Spaces = fast at both (used-space filewalker: 4 minutes).
(A bit off-topic here; maybe better to move to Ways of speeding up filewalker on NTFS)
But Optane and RAM should hold metadata; on the other hand, it can be 112GB max, which is not very much. I defragmented one of my disks recently, and 4TB had 42GB of MFT (metadata on Windows).
I usually have about 16 nodes on one server, so I need a lot.
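Scaling the MFT figure above across a whole server, assuming (hypothetically) all 16 nodes are the same 4TB size with the same 42GB-per-4TB metadata ratio:

```python
# Back-of-the-envelope scaling of the MFT figure quoted above:
# 42 GB of MFT per 4 TB NTFS volume, across 16 nodes of the same
# assumed size (actual MFT size varies with file count).
mft_gb_per_tb = 42 / 4   # 10.5 GB of metadata per TB
nodes = 16
node_size_tb = 4

total_metadata_gb = nodes * node_size_tb * mft_gb_per_tb
print(f"{total_metadata_gb:.0f} GB of MFT metadata")  # 672 GB
```

That's several times more than even the largest consumer Optane drives, which is why a single Optane device can't cover a multi-node server's metadata.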