PSA: Set reserved space to 0% on ext4

By default, 5% of the space on an ext4 partition is reserved. On large drives this can mean hundreds of gigabytes going to waste! If you are using ext4, make sure to run `sudo tune2fs -m 0 /dev/yourpartition`.
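For anyone following along, here is a sketch of the relevant commands plus the arithmetic. `/dev/sdX1` is a placeholder device name, and the `tune2fs` lines are commented out because they need root and a real partition:

```shell
# Inspect the current reservation first (requires root; /dev/sdX1 is a placeholder):
#   sudo tune2fs -l /dev/sdX1 | grep 'Reserved block count'
# Remove the reservation entirely:
#   sudo tune2fs -m 0 /dev/sdX1

# What the default 5% costs on common drive sizes (decimal GB):
for size_gb in 4000 8000 16000; do
  echo "${size_gb} GB drive: $((size_gb * 5 / 100)) GB reserved"
done
```

On a 16TB drive that is 800GB reclaimed by a single command.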

I only realised this just now when looking at `df` output and noticing that Used and Avail did not add up to Size. Doing this on all my nodes gave me over a terabyte of additional storage, for free.
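The gap is easy to check yourself. A small sketch with illustrative numbers (not from a real node) showing how the reservation hides in Size minus Used minus Avail:

```shell
# Illustrative 1K-block values as they might appear on a df line
# for an ~8 TB ext4 mount (made-up numbers for demonstration):
size=7751366384; used=6000000000; avail=1363298384

# On ext4 with a 5% reservation, Used + Avail falls short of Size:
reserved=$((size - used - avail))
echo "Hidden reserved 1K-blocks: ${reserved}"   # 388068000, roughly 5% of size
```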


Cool, nice to know. But are there any risks in doing this? I mean there should be a good reason why there is that reserved space, right?


This space is reserved for the root user. It is useful in multi-user and desktop environments to prevent regular users from filling up the file system completely and breaking the system. That way, even if a user “completely” fills up the disk, core system processes can still write log files etc. It is less useful for Storj, where the disk does not contain the operating system and is dedicated to one or two applications; there is no need for hundreds of gigabytes of emergency space. If the disk is your OS disk, you can set the reservation to 1% instead of 5% if you want.
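For an OS disk, a smaller reservation looks like this (a sketch; `/dev/sdX1` is again a placeholder, and `tune2fs` also accepts `-r` if you prefer to set an absolute reserved block count):

```shell
# Keep a small root reservation on a system disk instead of zero
# (requires root; /dev/sdX1 is a placeholder):
#   sudo tune2fs -m 1 /dev/sdX1

# At 1%, the reservation on a 500 GB system disk is modest:
echo "500 GB disk at 1%: $((500 * 1 / 100)) GB reserved"   # 5 GB
```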


Okay makes sense. Thank you, this will save me almost 1TB :slight_smile:


Completely filling the drive will impact performance in a big way though. You should still leave about 10% free. But that’s 10% of more space if you change this setting. So it helps but don’t fill every last byte. :slightly_smiling_face:


Yes, it is worth noting that fragmentation gets a lot worse when the drive is completely full. Also, Storj recommends leaving 10% of the disk free in case of a software bug. However, on most of my nodes I “play risky” and leave only a few gigabytes free. Storj traffic is mostly random IO anyway, and at least currently, in my tests, fragmentation does not noticeably affect node performance.
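For concreteness, the 10% guideline works out to allocation numbers like these (a sketch in decimal GB):

```shell
# Storj guideline: allocate at most 90% of the space the OS reports.
for reported_gb in 1000 4000 8000; do
  echo "${reported_gb} GB reported -> allocate $((reported_gb * 90 / 100)) GB"
done
```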


A bit off topic but… any reason why Storj Labs suggests leaving that much space free? That’s a lot of GB on large drives… On an 8TB disk, that’s 800GB of free space! Is this really needed?

Honestly… Probably not. But better to leave too large a margin than too small.

The reason is to be able to deal with database sizes increasing, logs, garbage and trash that may not be accounted for. And also anything unexpected. I run one node with a little less than 5% free. I know I’m taking a risk there. It’s not my main node. I’m just curious to see if it can survive it. But that node has logs, db’s etc on another HDD. So this is a case of do as I say, not as I do. What I’m doing there is generally a bad idea.


Are there two different sizes under discussion? One is the amount of physical space the OS reserves. The OS hides this space, in effect reporting total disk space as (physical - reserved). Everything outside the kernel - apps, databases, etc. - only sees the amount the OS reports. The Storj recommendation to leave 10% free should be calculated on the OS-reported space, not the total physical space.

The initial recommendation was to use tune2fs to reduce the amount of physical space the OS reserves. After reducing the reserved space, the OS reports more total space. You then want to limit Storj to 90% of that reported space.
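Putting both steps together, here are worked numbers for an 8000 GB (decimal) drive. This is a sketch; real reported sizes also lose a little space to filesystem metadata:

```shell
physical=8000
usable_before=$((physical * 95 / 100))   # 5% reserved -> 7600 GB usable
usable_after=$physical                   # 0% reserved -> 8000 GB usable

# Apply the 90% allocation rule to the usable space in each case:
echo "Allocate before: $((usable_before * 90 / 100)) GB"   # 6840
echo "Allocate after:  $((usable_after * 90 / 100)) GB"    # 7200
```

So dropping the reservation and keeping the 90% rule still nets about 360 GB of extra allocation on this drive.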


It should always be 90% of the space Storj could have access to.

I only have 3% in reserve. :rofl: Never had any problems so far. However, I also run my databases on an SSD.

My free space always fluctuates between 204GB and 207GB.

This is useful, thank you.
But for Storj it is encumbered by the storage size rounding issue. Takenwith that into account it is helpful if you get enough free space to push you up to where you can increment your storagenode by another 100GB. Forum coverage: Storage size rounding and GitHub issue:

Do you mean to say your storagenode databases are mounted on a separate SSD from your storagenode data? Or are you simply referring to non-Storj-related databases?

The rounding issue was resolved in v1.11.1. I believe it now rounds to the nearest 10GB increment. Looks like the GitHub ticket was never closed @Alexey
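If the allocation really is snapped to a 10GB step, the arithmetic is just integer division. A sketch (whether the node rounds down or to the nearest step is a detail I have not verified against the source):

```shell
alloc_gb=7236   # hypothetical allocation in decimal GB

echo "Rounded down:       $(( (alloc_gb / 10) * 10 )) GB"       # 7230
echo "Rounded to nearest: $(( ((alloc_gb + 5) / 10) * 10 )) GB" # 7240
```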


@storaje & @baker: I can confirm it now rounds up to the nearest 10GB, I did experience this when fiddling with my nodes a couple of weeks back.

See Changelog v1.11.1

