Can I decrease the 10% disk space overhead to a lower value?

Asking this because, besides the DB, all disk space seems to be measured correctly (including garbage), so usage won't go over 100% according to the dashboard. Is it safe to decrease the disk space overhead to something like 5% or even less?

Are you taking into account the temp folder, a redirected log file (if applicable), and order files?

You have to weigh the benefit of reclaiming that 5% against the risk of losing the entire node. A future update could take up more than 5% of that overhead.

Well, I migrated a node from an array to a single disk and it filled the 8TB HDD to 99% :smiley:
Is it advisable to do so? Definitely not… A bug in the storagenode could result in higher usage, and then your node would crash and you might lose it.
Additionally, the fuller the HDD, the more difficult it gets for the OS to find enough space for new files. That leads to heavily fragmented files, which require more IOPS to store and read. This can make the HDD slow, increase RAM usage, etc.

Now, personally I'd say the percentage depends on the size of the drive. 10% on a 500GB drive? Sure, that makes sense. 10% of an 8TB drive? No, thanks… I wouldn't leave 800GB free just in case. Maybe 400GB, which would be 5%.
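To make that trade-off concrete, here's a tiny sketch (the helper name and the 400GB cap are just the figures from this thread, not anything official):

```python
# Hypothetical helper illustrating the rule of thumb above: reserve 10%
# on small drives, but cap the reserve on large ones (numbers from this
# thread, not from Storj documentation).
def reserved_space_gb(drive_gb, pct=0.10, cap_gb=400):
    """Return how much space to leave unallocated, in GB."""
    return min(drive_gb * pct, cap_gb)

print(reserved_space_gb(500))   # 50.0  -> 10% of a 500GB drive
print(reserved_space_gb(8000))  # 400   -> capped at ~5% of an 8TB drive
```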


Why would the node get disqualified?
From what I understood, the node stops receiving ingress when it’s only 500MB away from the allocated space or if the drive has less than 500MB available.
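That threshold logic can be sketched in a few lines (the path is hypothetical, and the 500MB figure is the one quoted in this thread):

```python
import shutil

# Figure quoted in this thread: ingress stops when free space drops
# below roughly 500MB.
LOW_SPACE_BYTES = 500 * 1000 * 1000

def accepts_ingress(path="/mnt/storagenode"):  # hypothetical mount point
    """Check whether the disk still has enough free space for ingress."""
    free = shutil.disk_usage(path).free
    return free >= LOW_SPACE_BYTES
```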

I still would leave some leeway, but I must admit that after 1 year of experience as an SNO, it feels like leaving something like 2-3% would be enough.
I’m not advising this though! I’m not doing it myself, and it would really not leave a lot of free space for… all the things @kevink stated:

So yeah, totally agree with them :+1:

Yes, but there is a ~3 minute delay until the satellite acknowledges it. During those 3 minutes your node still keeps receiving data. I have seen that 500MB go as low as ~227MB before ingress stops.
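A rough back-of-the-envelope for how much can still land during that delay (the ingress rate here is purely an assumption for illustration):

```python
# Worst-case overshoot during the ~3 minute delay before the satellite
# acknowledges that the node is full. The ingress rate is an assumption;
# the 3 minute delay is the figure from this thread.
ingress_mb_per_s = 1.5          # assumed sustained ingress, adjust to taste
delay_s = 3 * 60                # ~3 min acknowledgement delay
overshoot_mb = ingress_mb_per_s * delay_s
print(overshoot_mb)             # 270.0 MB could still arrive after "full"
```

At that assumed rate the overshoot is in the same ballpark as the ~273MB eaten into the 500MB buffer observed above.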

As many SNOs just set up and forget, the general advice is to keep 10%, but SNOs who are familiar with the inner workings of the node can decrease that 10% at their own risk.


kevink basically hit the nail on the head.

I would add, though, that if the disk ends up at 100% it might just be game over instantly…

So it ends up being a risk calculation. Performance decreases on HDDs as they fill up, losing maybe 50% from empty to nearly full… and nearly full becomes terrible…

Not sure I would allocate 10% free space on my 30TB node… but it's shared with other stuff… so I need the free space for performance to be good, and just for having general free usable space for other uses.

So the free space isn't really wasted, but that's also on purpose. Of course that's not the case in the more regular setup of one node per HDD.

It should not be disqualified unless data is lost.
There is another point: if your node is unable to store orders on the disk due to low free space, you will basically run your node for free.
