Fluctuation of audit scores, recovery

:grinning_face:
I am aware of this

Then why this question about defragging an SSD?
Defrag is needed only for NTFS (because of its implementation) and only on hard drives, where fragmentation inevitably affects performance.

Yes, indeed, the only thing it would be good for is a read cache of the MFT entries themselves, which would give at best a <15-20% overall cache hit rate, and even that would reset upon every restart/reboot. If you keep your write cache tight, say 1 GB (only endangering a 25 GB node - lol) or 4% of whatever, respectively, that’s a safe bet. Just think of the average amount of incoming data: say a max of 8 MB/s × 60 seconds… you only need a 480 MB buffer, so set that :slight_smile: It would still be helpful to prevent fragmentation.
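To put that arithmetic in one place, here’s a tiny sketch. The 8 MB/s and 60 s figures are just the example numbers above, not measured Storj rates:

```python
def required_buffer_mb(ingress_mb_per_s: float, flush_delay_s: float) -> float:
    # Everything that can arrive during one deferred-write window must fit in the buffer.
    return ingress_mb_per_s * flush_delay_s

# Example from above: 8 MB/s peak ingress, 60 s deferred-write delay
print(required_buffer_mb(8, 60))  # 480.0 MB -> a 1 GB buffer leaves plenty of headroom
```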

Good stuff,
2 cents,
Julio

I’m using AI for translation, so maybe that’s why I don’t completely understand what you’re referring to. I can’t cache the MFT separately: PrimoCache caches the entire drive in L1 (RAM) and L2 (SSD). I’ve been testing it for over a month. I tried a 2 GB L1 (RAM) and even a 200+ GB L1, but performance is about the same; even with a large capacity I don’t see any change. Performance improves a lot even with 2 GB of L1: I/O operations decrease and disk load drops significantly.
My fragmentation issue dates back to before I started using PrimoCache. I read on a forum that defragmenting the MFT (Master File Table) can significantly improve performance. I began the process, and then there was a power outage. Since then, I’ve been using a high-performance UPS, and I’ve started using PrimoCache with a strict 60-second write delay. (This is the maximum time for which the disk can still safely write out its buffered data in the most extreme case of a power failure.)
The defragmenter is set to a 1% fragmentation threshold; when the disk reaches this level it runs automatically, but only a quick defragmentation, so it doesn’t touch the MFT.
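As a rough sanity check of that 60-second window (all numbers below are assumptions for illustration, not measurements of my setup):

```python
def flush_time_s(buffered_mb: float, hdd_write_mb_per_s: float) -> float:
    # Time needed to drain the deferred-write buffer to the HDD after power is lost.
    return buffered_mb / hdd_write_mb_per_s

ups_runtime_s = 300            # assumed UPS hold-up time under node load
worst_case_buffer_mb = 8 * 60  # 8 MB/s ingress x 60 s deferred-write window = 480 MB
print(flush_time_s(worst_case_buffer_mb, 100))  # ~4.8 s at an assumed 100 MB/s sequential write
```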


You may only ever need to defrag the MFT again after a significant amount of new files has been added; for lack of mental brain power to describe this properly ATM, I’ll give an example (modelled below). MFT entries stay on a volume forever. E.g.: a volume with 50 million files has 50 million MFT entries. Delete 25 million files and the entries stay there: still 50 million entries, 25 million of them now free. Add another 75 million files (25 + 75 = 100 million files total) and the free entries are reused first, so the MFT grows by another 50 million to 100 million entries. You’d probably want to defrag the MFT again at that point.
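Here’s a toy model of that bookkeeping, just my own illustration of how NTFS reuses free records but never shrinks the table, using the same numbers:

```python
# Toy model of MFT record bookkeeping: records freed by deletes are reused,
# but the table itself never shrinks.
class MftModel:
    def __init__(self):
        self.total_records = 0   # size the MFT has grown to
        self.free_records = 0    # records left behind by deleted files

    def add_files(self, n):
        reused = min(n, self.free_records)
        self.free_records -= reused
        self.total_records += n - reused   # only the overflow grows the MFT

    def delete_files(self, n):
        self.free_records += n

m = MftModel()
m.add_files(50_000_000)      # 50 M entries
m.delete_files(25_000_000)   # still 50 M entries, 25 M of them free
m.add_files(75_000_000)      # 25 M reused + 50 M new
print(m.total_records)       # 100000000
```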

Yes, you are correct. The only real usefulness of PrimoCache in a Storj use case is as a write buffer, to prevent fragmentation. L1 is useful because it’s RAM. A 2 GB buffer could corrupt 50 GB of blobs on power loss - even if your device has PLP - so a UPS like yours is still a prerequisite.
Depending on how new your version of PrimoCache is, it may have a real-time setting for how frequently it checks for incoming data. Older versions have a selectable delay down to 10 seconds or even 1 second, i.e. Primo does nothing but wait for X seconds of delay AFTER a disk has become idle before starting to actually cache anything. So to have it work, you’d need to let it run for a few days with the node running; Storj would be imprinting on the MFT the timestamps of changed or new files (and their respective LBA block addresses). Only once the cached HDD has been idle for that X-second delay will PrimoCache start to actually cache all the DELTA entries in the MFT since its last run, and mirror the corresponding data files onto the cache device(s). In other words, you’d have to turn the node off for hours and hours while PrimoCache actually builds a cache. Turn it back on and the background caching will stop/pause, but the cache will now have improved ongoing cache hit rates and starts being useful.
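To paraphrase that behaviour in a few lines (this is my sketch of the described idle-delay gating, not PrimoCache’s actual code; the 10-second value is just the old selectable minimum mentioned above):

```python
import time

IDLE_DELAY_S = 10.0  # selectable delay: cache-fill waits this long after the last disk I/O

def cache_fill_allowed(last_disk_io_ts: float) -> bool:
    # Background cache-fill only proceeds once the disk has been idle this long.
    return (time.monotonic() - last_disk_io_ts) >= IDLE_DELAY_S

# A running Storj node touches the disk every few seconds, so last_disk_io_ts
# keeps resetting, this stays False, and the cache never fills.
```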

It’s designed to let customers play games during the day and, while the computer is idle overnight, fill in the cache with a low-priority task… that’s why you never see it work and you get bad results: Storj never lets the disk be idle. It’s also only a block cache, so it only caches the blocks where it understands recently changed files were placed (since the last reboot). I suppose you could actually get it to cache the MFT blocks themselves if, say, you configured an active .vhdx file with a static parent image of a point-in-time MFT that PCache could cache at the file level, pushed active data into the child of that parent, and occasionally merged the child back into said parent. LOL - that’s pretty far-fetched.

So hopefully you now understand its extremely limited use here. Unless you run the node for a month and then turn it off for a day (or however long PCache needs at ‘low priority’ to read and cache the data), it will cache practically nothing. If you run PCache right from the very genesis of a new node - lol - or copy/sync the entirety of a node onto a cached drive, then let it toil away for several more days caching everything on that drive - woot, 100% hit rate. And then stop the node every day so the caching can catch up. Obviously this is not practical.

V-locity or CASS are examples of products I indicated earlier that are better suited for a Storj use case.
If you want to purposely push the MFT into the OS cache, you can run a disk defragmenter analysis on it, or run something that actually walks and loads it, like the search tool ‘Everything’, etc. Even then, over time much of it would be pushed into virtual RAM, as it’s considered ‘inactive’ memory in its usual state, and then dropped entirely.
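If you’d rather script it, a crude metadata walk achieves a similar warm-up, since stat-ing every entry forces NTFS to read the corresponding MFT records. Just a sketch: the D:\ drive letter is a placeholder, and ‘Everything’ or a defrag analysis pass does this far more efficiently:

```python
import os

def warm_mft(root: str = "D:\\") -> int:
    # Walk the whole volume and stat every entry; NTFS has to read the
    # matching MFT records, which pulls them into the OS file cache.
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                os.stat(os.path.join(dirpath, name), follow_symlinks=False)
                count += 1
            except OSError:
                pass  # skip entries we can't touch (locked, permissions, etc.)
    return count
```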

Good luck,
2.5 cents,

Julio
