Windows Node optimisation

I know these “best practices”. However, I have been working the same way for years, more than 20, and I have never had issues. For example, both of my nodes have been running like that for years without any issues. I started out with a single 1 TB disk. I am pretty much using old HW that I have lying around; I don’t want to invest.

I have an old Z800 workstation that I plan to turn into a Storj node, making it around 20 TB. There I might consider some of these so-called “best practices”. But I would never spare 2 TB. Maybe 200 GB, but we will see. It will take me years to fill 15+ TB anyway :slight_smile:

Yes, everyone knows that.

You probably meant 10 TB?

I don’t know how much RAM the node is using. The workstation has 12 GB at the moment; I think it is more than enough. :slight_smile:
The MFT is 8.57 GB.
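
If you want to check the MFT size on your own drive, fsutil reports it (run from an elevated prompt; E: here is just an example, use your node drive):

```powershell
# Show NTFS metadata details for the node drive; the "Mft Valid Data Length"
# line is the current size of the MFT. Requires an administrator prompt.
fsutil fsinfo ntfsinfo E:
```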

How can it use more space than it is allowed? And if it does so… then what is the difference? If I had 10% free space, it would just take longer to eat it, but that bug would eventually eat all of it, no matter how much free space you have :slight_smile:

I don’t want to lecture you, just state the obvious.

Yes. My bad.

This should be enough to defrag the MFT.
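
If the regular defragmenter can’t handle the MFT itself, one option (a suggestion on my part, not verified on this exact setup) is Sysinternals Contig, which can defragment NTFS metadata files:

```powershell
# Defragment the MFT with Sysinternals Contig (v1.6+ handles NTFS metadata files).
# Run from an elevated prompt in the folder where contig.exe was unpacked.
# The single quotes matter in PowerShell, otherwise $Mft is read as a variable.
.\contig.exe -v 'E:\$Mft'
```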

Wish you luck.

Those are pretty bad numbers, probably related to the low free space. :man_shrugging:

You may notice it before it’s too late, and we would likely be able to fix the bug; the extra space would buy some time.
But as I said, it’s still just a recommendation. However, I feel like 1% is too small for normal operation.

At the moment it is 4%, above 120 GB.

I have been running the defrag tool for almost 4 days and it is still below 14% done. Why is it so slow? At this rate I am not sure it will complete by the end of the year. :slight_smile:

And as I stated previously in this thread, the % fragmentation increases instead of decreasing. This is also something I cannot understand. When I tried it a few days ago, as I wrote above, it was less than 16%, and now, as you can see, it is above 30%. Can anyone explain why, and how exactly this tool works?

You have too little free space, so it tries to move things part by part; this increases fragmentation for a while, but as it finishes, it will go down.

But those files are relatively small, around 2 MB each, right? I have almost 130 GB of free space; that’s about 130,000 MB. That is a lot of space for such small files. Isn’t the purpose of defragmentation to collect all the pieces of one file and combine them together somewhere in the free space? Or am I missing valuable knowledge on this topic?

No, most files are just a few KB. The recommendation is 10% free space, for the reasons stated before.
I suggest you get the node to free some space through the yaml: set it to 2.4 TB; “overused” will appear and ingress will be stopped. That speeds things up too.
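
As a sketch, in the node’s config.yaml (the path depends on your install; the value follows the 2.4 TB suggestion above):

```yaml
# config.yaml — allocate less than is currently used, so the node
# reports "overused" and stops accepting new ingress
storage.allocated-disk-space: 2.40 TB
```

Then restart the node for the change to take effect, e.g. `Restart-Service storagenode` on a Windows GUI install.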

It can take very long under these circumstances. Let it run, or free up space while it is running.
Or stop it, get to 10% free, and then start it again.

Also take care of the E: drive, it’s pretty full.
Compare it with an open bucket of acid that you have to carry by hand: do you fill it to the brim, or maybe 10% less?

If the bucket stands still, you can fill it to the brim, but if you have to move it, then what do you do?
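
To keep an eye on it, a quick PowerShell check of the free-space percentage could look like this (assuming the node sits on E:):

```powershell
# Report free space on the node drive as a percentage and in GB
$v = Get-Volume -DriveLetter E
'{0:N1}% free ({1:N0} GB of {2:N0} GB)' -f (100 * $v.SizeRemaining / $v.Size), ($v.SizeRemaining / 1GB), ($v.Size / 1GB)
```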

I set it to 2 TB a week or two ago. It drops very slowly.

I don’t plan to move it :slight_smile:

Since I use these two together, here is an article I found about PrimoCache (30-day trial possible) and UltraDefrag (free use possible).

I bought both; the performance is unbelievable. Even the M.2 NVMe with the DB and orders has a RAM cache now. The complete node, including all DBs, now works in RAM first, and after 10 s the data is written to the drives. Reads of node data are cached by NVMe (500 GB for a 20 TB drive), writes by RAM.

The NVMe itself is cached by RAM (1-3 GB is enough).

(As a bonus, I cached my “slow” system SATA SSD with 12 GB of RAM.)

This helps a lot to win races, but it also reduces wear on the SSD and prevents fragmentation on the HDD.

Very useful for SNOs with a native Windows node and some SSD space left over (50 to 500 GB will be OK).
I estimated 25 GB of read cache per 1 TB of node data.

Note that for write caching, regardless of the medium (RAM, SSD, or NVMe), a UPS is recommended to prevent data loss.

A read cache alone can be done without a UPS.

I installed more RAM (upgraded from 32 to 64 GB, 19 GB free atm), and despite running at a lower frequency now, it’s fast as f**k.

Newest stats of my nodes:
Node 1: DB on USB
Node 2: 1 GB RAM write cache (PrimoCache)
Both nicely defragmented
Node 2 gets around 80-100 GB/month more ingress than node 1 (no neighbors).

Also, I made a crappy JPG to visualize the cache structure on node 2.