10% overhead specification

I set up my two nodes with the 10% overhead calculated on the real space.
So, my 3TB HDD has 2.7TB of real space and I allocated 2.7*0.9 ~= 2.4TB of space to the node. Similarly, I allocated 1.6TB for the 2TB (1.8TB real) node.
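In other words, the calculation I did looks like this (a rough Python sketch of the arithmetic, nothing Storj-specific):

```python
# Sketch of the "10% overhead on the real space" calculation described above.
def allocation(real_space_tb, overhead=0.10):
    """Space to assign to the node, leaving `overhead` of the real space as slack."""
    return real_space_tb * (1.0 - overhead)

for label, real in [("3TB drive, 2.7 real", 2.7), ("2TB drive, 1.8 real", 1.8)]:
    print(f"{label}: allocate ~{allocation(real):.1f}TB")
# 3TB drive, 2.7 real: allocate ~2.4TB
# 2TB drive, 1.8 real: allocate ~1.6TB
```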

Now that one node is full I see that it has (a lot of) free space, so maybe I interpreted it wrong: is the 10% applied to the theoretical space? So, if I know that my HDD has 2.75TB of real free space, can I set up the node to use 2.7TB, if it’s all free?

You might be able to go up to 2.5TB or 2.6TB but I would not go higher than that. The overhead is for the database files. Particularly during database migrations, you might need some extra free space to hold temporary data. Trying to cut it as close as possible is a recipe for hosing your database at some point in the future.

True, I didn’t think about the database files. I could maybe increase it little by little and monitor the database file sizes, but database migrations would be a nasty variable to work with.

Thank you for the reply!

No problem. I’ve done the same, increasing my storagenode capacity little by little. Instead of leaving 10%, I’m leaving 50-100GB free space. I don’t expect that the database files are going to grow larger than 50GB, at least not quickly. (I also have monitoring software on my nodes so I get alerts when free disk space falls below a threshold.)

This is not only for the databases. We are still in beta: the software could have bugs, and the node can overuse its allocated space (as happened, for example, with the bug in UPDATE: the SNO Garbage Collection & Disqualification bug has been fixed).

Increasing that will not have a huge impact on your profit. If you increase 2.4TB to 2.5TB or 2.6TB, it will take some days or weeks and then you will be full again, and you will be risking your entire node. Is it worth it? It does not make sense.
Add another node instead and leave the one you have as it is.

Your 3TB HDD has 2.7TiB of space, which most OSes unhelpfully display with the wrong unit (as 2.7TB). You will probably be fine assigning 2.7TB.
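A quick sketch of the unit arithmetic behind that (assuming the figure your OS shows is really TiB mislabelled as TB):

```python
# Why assigning 2.7TB (decimal) to a drive shown as "2.7" still leaves slack:
# the OS figure is actually TiB (binary), which is a larger unit.
TIB = 2**40   # bytes per tebibyte
TB = 10**12   # bytes per terabyte

print(3 * TB / TIB)                  # ~2.73 -> what the OS displays as "2.7"
print(2.7 * TIB / TB)                # ~2.97 -> the real capacity in decimal TB
print(1 - (2.7 * TB) / (2.7 * TIB))  # ~0.09 -> ~9% still free if you assign 2.7TB
```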

Hi Everyone,
New to Storj and an average user of Linux, so sorry if this is a noob question. I used df -h and df -H, and they give different sizes. I have an 8TB drive, and I got 7.2T (df -h) and 7.9T (df -H) of available storage. So, which one should I use to calculate my 10% overhead?
Thanks for the help

Hello @KillahGoose,
Welcome to the forum!

Either of them. The storagenode accepts both units: TiB (-h, binary) or TB (-H, decimal).
All Storj software uses SI notation by default, i.e. decimal units.
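As a worked example with the numbers from your df output (just a sketch of the unit conversion, not Storj code):

```python
# Both df readings describe the same capacity in different units.
TIB, TB = 2**40, 10**12

print(7.2 * TIB / TB)          # ~7.9 -> the df -h figure expressed in decimal TB
print(f"{0.9 * 7.2:.1f} TiB")  # 6.5 TiB allocation with 10% overhead (binary)
print(f"{0.9 * 7.9:.1f} TB")   # 7.1 TB allocation with 10% overhead (decimal, ~6.5 TiB)
```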

Thank you! Thanks for making it clear.

Node is online!

Sorry for adding to an old topic, but IMHO it’s better to ask here than to open a duplicate.

How about “big” disks?
10% of 8TB (df -H) is 800GB, which is quite a lot of space. Could I maybe go for 400GB (or even 200GB) of free space, i.e. allocate 7.6TB or 7.8TB?

10% is recommended. Other values are your responsibility and risk. I used 3.9TB out of 4TB and everything works fine.
So it is you who decides how far you are going.

I have 200-400GiB of headroom on my 8TB nodes. It still feels like too much, so I’m considering going down to 100GiB.

All values are your responsibility and risk. From what I have seen on this forum, Storj makes no exceptions when dealing with disqualifications, even if they were directly caused by bugs in their code. So if a node decides to go above the allocation and dies in the process - gg, wp.

While I agree with @Vadim and @hoarder, I feel like I should add that there is a good reason why larger nodes should also have a bit more room for inconsistencies. Bigger nodes will have bigger databases, more logs and a bigger potential for collecting large amounts of trash.

10% is a healthy margin, but well-performing nodes probably don’t need as much, especially since a couple of issues with accounting for disk space usage have already been fixed. If you have good monitoring set up and can take action quickly should HDDs fill up, you can probably assign a little more than 90%. On my 2TB node I have assigned 1.9TB, which leaves me with about 60GB free.

You should also keep in mind that when HDDs start to become nearly full, their performance starts to suffer. So I’d say you want to leave at least 50GB free either way, meaning required slack + 50GB is what you should aim for. But know you’re taking a risk - one I am only taking because it’s the smallest of my 3 nodes and I wouldn’t care much if something goes wrong. And despite that, I’m monitoring it closely anyway. I know @Vadim is running about 25 nodes, so the loss of a single node wouldn’t be a big deal. You should decide for yourself whether that is your attitude as well.
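A minimal sketch of that rule of thumb (the slack estimate for databases, logs and trash is your own guess, not a Storj figure; the numbers are just the 2TB example above):

```python
# Rule of thumb from above: allocation = capacity - (expected slack + ~50GB margin).
def allocation_gb(capacity_gb, expected_slack_gb, perf_margin_gb=50):
    return capacity_gb - (expected_slack_gb + perf_margin_gb)

# e.g. a 2TB (2000GB) drive with ~50GB budgeted for databases, logs and trash:
print(allocation_gb(2000, 50))  # 1900 -> roughly the 1.9TB assignment mentioned above
```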