Would it be safe to allocate more space to this node, based on my disk utility info presented here?

sudo df -H
Size 984G

That means:
984G
– 0G for the OS, if the OS runs on sda1
– 0G for all the other stuff on sda1
= 984G − 10% = 885.6G for Storj to use
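
For anyone who wants to redo this with their own numbers, here's a minimal sketch of the same "leave 10% free" calculation, with the 984G taken from the `df -H` output above:

```bash
# Minimal sketch: recompute the "leave 10% free" allocation from the df -H size above.
TOTAL_G=984   # the Size value reported by `sudo df -H`; replace with your own
awk -v t="$TOTAL_G" 'BEGIN { printf "90%% of %dG = %.1fG\n", t, t * 0.9 }'
# prints: 90% of 984G = 885.6G
```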

The 10% free means actually free space, not space taken by the OS or something else, because then it wouldn't be free.

Yeah, that’s a quirk of the file system. So this may be cutting it a little close. On the other hand, it’s a relatively small node, so I don’t expect a lot of trash buildup. Due to some weirdness in Storj, the first step down would be assigning 800GB, which is significantly less, but a whole lot safer. So the official advice would be to drop to that. It’s up to you if you want to take a bit more risk though.

I’d like to preface this next part with: “do as I say, not as I do”
I personally have a 2TB HDD on which I have assigned 1.9TB to Storj. This node is now full and has left me with about 60GB free. That’s risky and you seem to have even less. It’s probably not a good idea, but I’m ok taking a bit of a risk as this is not my only node. I keep a close eye on it and have alerting set up in case the free space drops further. In general it is definitely better to follow guidelines. So yeah, don’t copy my mistakes unless you are willing to risk your node.

woow… :slight_smile: wait a minute… :slight_smile:
I only continued this discussion because you said that for a 1TB node, allocating 0.9TB of space is absolutely fine and still leaves the needed 10% of free space. I had a different opinion, and I set my node to 0.9TB just to SHOW you that you will NOT be able to keep the needed 10% of free space :slight_smile:

So you’ve changed your mind? :slight_smile:

I would say I refined it. I still stand by that, but if you use a file system that reserves room that’s obviously going to change things.

Look at the numbers you posted:

903G + 31G ≠ 984G
I would consider 984 close enough to 1000 to set 900 as the node limit. But if you’re using a file system that reserves 5%, you may want to lower it. This is a different issue from the 931G you kept quoting earlier, which is the difference between binary and decimal notation.
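
To make both effects concrete, here's a quick sketch of the arithmetic, using the numbers from this thread (the 1TB drive, the 984G `df` size, and an assumed ext4-style default 5% reservation):

```bash
# Sketch of both effects, using the numbers from this thread.
awk 'BEGIN {
  # decimal vs. binary: a "1 TB" drive expressed in binary gibibytes
  printf "1 TB = %.0f GiB\n", 1e12 / (1024 ^ 3)
  # assumed ext4-style 5% reservation on the 984G reported by df -H
  printf "5%% of 984G reserved = %.1fG\n", 984 * 0.05
}'
# prints: 1 TB = 931 GiB
#         5% of 984G reserved = 49.2G
```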

The reserved filesystem space can be adjusted. Here’s a random tutorial I found on how to do that:

But be aware that if the filesystem fills to 100%, the OS will likely not function very well. The default reserved space can certainly be changed from 5% to 1% for a non-root / non-home filesystem… but I wouldn’t go any lower than 1% reserved space…
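
For reference, on ext2/3/4 the adjustment looks like this with `tune2fs` (the `/dev/sda1` device is assumed from this thread; verify your own layout with `df` first):

```bash
# ext2/3/4 only; /dev/sda1 is assumed from the thread -- verify the device with `df` first.
sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'   # show the current reservation
sudo tune2fs -m 1 /dev/sda1                                  # drop the reservation from 5% to 1%
```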


Which way are we supposed to calculate from? I mean, 100 − 10% is 90, and 90 + 10% is 99.
So depending on which way one does the math, there could be at least 1% saved while essentially still following a 10% free space paradigm…
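
Just to spell out the two directions of that math:

```bash
# the same "10%" rule, applied in the two directions mentioned above
awk 'BEGIN {
  printf "100G total - 10%% free      = %dG allocated\n", 100 - 100 * 0.10
  printf "90G allocated + 10%% of 90G = %dG total needed\n", 90 + 90 * 0.10
}'
# prints: 100G total - 10% free      = 90G allocated
#         90G allocated + 10% of 90G = 99G total needed
```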

In reality it comes down to a few factors… how much space the storage node could require… because running out of space could be very detrimental, if not deadly…

If saving 2% of capacity kills an average node in 2 years, would you still want that extra 2%… what if it was 3% and the average node only lasted 10 months…

It’s a matter of risk vs. reward… you can always move closer to the edge, but you might not be able to go back. I don’t know what happens if a node runs out of room while working… I mean when it needs to do internal work… will it simply wait until something has been deleted and then do the work…

or will it crash and burn

This difference really doesn’t matter. You need to ensure you have plenty of additional space as a buffer. 9% would be plenty too; 5% might be risky. 10% is nice and round.

Closer to crash and burn… Since it would be unable to write to its databases, it’s going to have errors all over the place. I’m not sure whether audits would still succeed. I think so, but at that point your node might even stop with a fatal error, database corruption is a risk, etc. You really don’t want to run out of physical space.

@BrightSilence yeah, crashing and burning is very likely… of course, if there are failsafes in place in the program, it shouldn’t go all crazy… but it takes a lot of testing for such features to be 100% effective against all issues…

I suppose one could simply add a script that shuts down the node if that happens, if one really wants to run that close to the edge… something like the sketch below.

Kind of like what we’ve already been setting up with the log and audit-failure shutdowns… of course, I doubt I’ll run into that issue… so… why bother :smiley:
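
Here is a rough sketch of what such a watchdog could look like, assuming a Docker-based node in a container named "storagenode" with data mounted at /mnt/storj (both names are placeholders, adjust to your setup) and run from cron every few minutes:

```bash
#!/bin/bash
# Sketch of a low-space watchdog, not a tested production script.
# Assumes a Docker container named "storagenode" and data mounted at /mnt/storj --
# both are placeholders, adjust to your setup.
MOUNT=/mnt/storj
MIN_FREE_GB=10

# free space on the storage mount, in whole gigabytes
free_gb=$(df -BG --output=avail "$MOUNT" | tail -n 1 | tr -dc '0-9')

if [ "$free_gb" -lt "$MIN_FREE_GB" ]; then
    echo "$(date): only ${free_gb}G free on $MOUNT, stopping node" >> /var/log/storj-watchdog.log
    docker stop -t 300 storagenode   # graceful stop with a long timeout
fi
```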

But the difference between 9% and 10% can be quite large if profit margins are tight… if one is making a 10% profit after all overhead and expenses are subtracted, 1% more capacity can mean a 10% increase in profit, going from 10% to 11% :smiley:

@node1 just getting more drives seems like a much better way forward than trying to nitpick 1 or 2% of savings on a 1TB drive…

1000% of 0 is still 0 :smiley: