Windows Node optimisation

Hello Storj operators.
I would like to share with you my latest findings on optimising node performance.
UltraDefrag download | SourceForge.net
UltraDefrag 11 - Portable Disk Defragmenter | Official Website
These are two versions of the software for defragmenting nodes.
The first is free; the second, newer one is a paid version.
This software performs not only regular defragmentation but also MFT defragmentation.
The built-in Windows defragmenter does not defragment the MFT.
The MFT is the Master File Table on NTFS Windows volumes. If it is fragmented, response times become much longer.
After I optimized only the MFT, my 10 TB node started to work faster. The bigger the node, the bigger the problem, as seek times get terrible.
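If you want a quick look at how big the MFT on a node volume already is before defragmenting, here is a minimal sketch (my own, not part of any Storj tooling), assuming Python is installed and you run it from an elevated prompt, since fsutil needs admin rights; the drive letter is just an example.

```python
# Print the MFT-related lines from "fsutil fsinfo ntfsinfo <drive>".
# Run from an elevated prompt; fsutil refuses to work without admin rights.
import subprocess
import sys

drive = sys.argv[1] if len(sys.argv) > 1 else "D:"  # node volume, adjust to yours

output = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", drive],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "mft" in line.lower():  # e.g. MFT valid data length, MFT zone start/end
        print(line.strip())
```

The "valid data length" line gives you a rough idea of how much MFT data the drive has to seek through on every lookup.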

6 Likes

Hi, could you say more?
What steps did you take?
What needs to be configured before the first start of UltraDefrag Free?
I would like to copy your steps to get the same results on my full 8 TB node,
and I will share whether it helps in any way.
All the best!

EDIT: OK, thanks @Vadim! MFT optimization in progress, 12.35% so far!

Hello, it depends on how much time you want to spend. I didn't even stop the node.
At minimum, just run the MFT optimization; at maximum, do a full optimization.
The MFT optimization only optimizes the MFT, while full optimization does a defrag plus MFT optimization.

1 Like

https://learn.microsoft.com/en-us/windows/win32/fileio/master-file-table

So, in the case of volumes used for a storagenode, with only small files, Windows allocates space from the MFT zone first, which speeds up MFT fragmentation. What a stupid choice by MS devs! They say that you can change the space reserved for the MFT zone. How can a SNO calculate the desired MFT space for a given node, knowing the space they will allocate for the node?
Are all the pieces 4 MB?
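Pieces vary in size, so there is no exact answer, but a rough back-of-the-envelope estimate is possible (this is my own sketch, not an official formula): NTFS typically uses one file record of about 1 KiB per file (4 KiB on some volumes), so the MFT grows with the number of pieces, and the number of pieces depends on the average piece size on your node. The 1 MB average below is only a placeholder; replace it with your own number (used space divided by file count).

```python
# Rough MFT size estimate for a node volume.
# Assumptions (replace with your own measurements):
#   - one MFT file record per piece, ~1 KiB each (NTFS default; some volumes use 4 KiB)
#   - an average piece size, taken here as 1 MB as a placeholder only
allocated_tb = 10                   # space you plan to allocate to the node, in TB
avg_piece_bytes = 1_000_000         # measure on your node: used bytes / file count
mft_record_bytes = 1024             # NTFS default file record size

piece_count = allocated_tb * 1_000_000_000_000 // avg_piece_bytes
mft_bytes = piece_count * mft_record_bytes

print(f"~{piece_count:,} pieces -> MFT on the order of {mft_bytes / 1024**3:.1f} GiB")
# With these placeholder numbers: ~10,000,000 pieces -> MFT on the order of ~9.5 GiB
```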

Just curious, did you do the full defrag / “Quick optimization”, or just “Optimize MFT”?
Because quick optimization is as slow as full optimization; it just skips already optimized segments, which is useful if you stopped it earlier, since it takes so long. I wonder if that gives anything good.

I see it takes blue segments from the beginning and places them at the end:

In around 24 h it only did about 1.39% (after a long 2-3 days of analyzing, just counting 17 million files).
I'm wondering, is there a point in choosing that option?

Maybe doing just the “Optimize MFT” option is enough, what do you say @Vadim?

As I understand it, quick optimization does not touch the MFT, so it is better to do a full optimization, or at least the MFT optimization as a minimum. It reads files for a very long time and the optimization goes very slowly. I think that is because the disk is in use at the same time.

As the problems are the MFT and fragmentation itself: do a normal defrag and an MFT defrag.
As long as you don't have a caching solution, the analysis step can be skipped; it is done anyway, no need to do it twice.

(Quick) optimization moves data around on the drive and will only maybe help on very full drives, and the node messes this up constantly with deletions and writes.
As long as there is 10% free on the drive, no problem.

1 Like

No, I turned the storagenode off for this.
Can you share your times? For example, how many TB the HDD is and how much time it took on your nodes. That would be useful for me and others to see how long it takes and what to expect.

I did not measure the time exactly, I did it several months ago. But it took several days, depending on HDD size and how full it is.

1 Like

Setting it to full optimization is totally enough, no need to stop the node.
UltraDefrag runs at low priority in the background.

That would be useless; no two drives are fragmented the same way, even if they are the same model.

In your case the first defrag will be long, but it does not matter how long it takes while the node stays online.
If done regularly, every 3 months or so, it should get faster.

So should we even be defragging our node HDDs? Or just MFT defrag…?

If possible, both; seek time matters. The more data, the worse it gets.

The MFT is accessed all the time, and on NTFS it also holds small files directly, so it is a lot of data.
Also the databases, if still on the drive, will become more and more fragmented.
One bad sign is the dashboard load time going up.

Mine are all on SSD and a USB stick; the dashboard loading circle does one round and then it's loaded.

See the 169,408 fragmented files? That's the problem, not that they were not on the outer, faster tracks of the drive.

2 Likes

Hi

I have been running that tool for more than 30 h now and I noticed a few strange things. I am running full optimization. When the first analysis completed, it said the disk was 16% fragmented. Now, after running it for 30 h, it says 26.42%. Why is that? Also the number of fragmented files increases: in the beginning there were fewer than 400,000, now they are over 660,000. I can't really understand what that tool does. I am getting afraid of the final result if I leave it to run for a week or more until it finishes. Now it is at about 4.4%.

Hello, it is OK. When it works it has to move parts around: first it makes space, then it moves files there in the normal way.

Thank you for the explanation. Then I will wait for it to finish. By the way, is there a requirement for minimum free space? Because I have around 1%, around 40 GB free. On my other node I have even less, around 20 GB (the size of the nodes is about the same, around 2.7 TB).

That is too little free space to operate a node. I usually keep around 60 GB free on small nodes and even more on bigger nodes.

1 Like

The recommendation is to keep 10% as free space. 1% is too dangerous: if we introduce a bug and your node uses more space than allowed, it may stop and never start again until you free up some space to allow it to start.

1 Like

Isn't 10% too much if you have a node that is 20 TB or more? That is more than 2 TB just laying around doing nothing… I think it is a waste of space. Personally I prefer to take the risk compared to just losing space for years.

OK. Now the defrag will move all files, even the ones that are not necessary to move, and with too little free space to do it.

I suggest shrinking the node a bit, to reach 200 GB free.

Vadim and I run 20 TB drives, with PrimoCache, which reduces fragmentation before writing to the drive.
However, it is not a free program and requires another SSD/RAM, plus an additional UPS if write caching is enabled, no matter whether it is a RAM or SSD cache.

There is even a discrepancy in the disk size because of the 1000 vs. 1024 byte conversion,
which is a manufacturer thing.

There is less filesystem space after formatting: the drives show ~18.7 TB. After the 10%, I set the node to a maximum of 16.3 TB.
There are never 20 TB to use on a 20 TB drive.

My 12 TB node disk is set to 10 TB of node data.
There are reasons for this 10% recommendation.
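To make the arithmetic behind those numbers explicit, here is a small sketch of my own, using a 20 TB drive as an example; the exact usable space after formatting varies by drive and filesystem overhead, so treat the output as an estimate.

```python
# Estimate how much to allocate to a node while keeping ~10% of the volume free.
# Manufacturer capacity is decimal (10^12 bytes per TB); Windows reports binary TiB.
drive_tb = 20                       # label capacity in TB (example)
headroom = 0.10                     # keep ~10% of the formatted volume free

capacity_bytes = drive_tb * 1_000_000_000_000
capacity_tib = capacity_bytes / 1024**4         # roughly what Windows shows as "TB"
allocation_tib = capacity_tib * (1 - headroom)  # leave the headroom unallocated

print(f"{drive_tb} TB drive ≈ {capacity_tib:.1f} TiB formatted, "
      f"allocate about {allocation_tib:.1f} TiB to the node")
# Filesystem metadata (including the MFT) eats a bit more, so round down further.
```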

How much RAM does the node have, and how big is the MFT?

If the MFT is bigger than the RAM, it will swap a lot to the page file, so stopping a defrag takes a while.
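If you want to check this on your own machine, here is a minimal Windows-only sketch of mine: it reads total physical RAM via the Win32 API and compares it against an MFT size you fill in yourself (for example from the fsutil output earlier in the thread); the 10 GiB value below is only a placeholder.

```python
# Compare total physical RAM against the MFT size (Windows only).
# Fill in mft_gib from the MFT valid data length reported by fsutil for your volume.
import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

mft_gib = 10.0  # placeholder: take this from the fsutil output for your node volume

stat = MEMORYSTATUSEX()
stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
ram_gib = stat.ullTotalPhys / 1024**3

if mft_gib > ram_gib:
    print(f"MFT ({mft_gib:.1f} GiB) is larger than RAM ({ram_gib:.1f} GiB): expect paging.")
else:
    print(f"MFT ({mft_gib:.1f} GiB) fits in RAM ({ram_gib:.1f} GiB) with room for cache.")
```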

I recommend a normal defrag and an MFT defrag, not sure in which order, or whether the order matters.

Also, the databases should already be on an SSD / second drive / a reliable NTFS 4K-formatted USB flash drive, at least temporarily.

1 Like

This is a pretty universal ballpark requirement for most filesystems to work efficiently: if you leave less than 10%-20% (filesystem dependent) free, performance starts suffering. It is not specific to Storj, it's specific to the filesystem. (You can leave less, down to 5%, if the content is static and immutable.)

The expectation is to add more storage as soon as you used up about 60-75% of existing capacity.
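A trivial way to keep an eye on those thresholds (a sketch of mine, not part of any Storj tooling): check the node volume with Python's standard library and warn when usage crosses the suggested range; the drive letter is just an example.

```python
# Warn when a node volume crosses the "time to add storage" range (~60-75% used)
# or drops under ~10% free space. Drive letter is an example; adjust to your node.
import shutil

path = "D:\\"                     # node volume (example)
usage = shutil.disk_usage(path)
used_pct = usage.used / usage.total * 100
free_pct = usage.free / usage.total * 100

print(f"{path} {used_pct:.1f}% used, {free_pct:.1f}% free")
if free_pct < 10:
    print("Less than 10% free: the filesystem will start to suffer, shrink the node.")
elif used_pct >= 60:
    print("Over ~60% used: start planning additional storage.")
```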

2 Likes