Disk filled, and now what?


I’ve recently filled a 1TB disk, but I don’t know what to do now. Should I just leave it as it is and enjoy? Is it normal to have 0 ingress and 0 repair now?

By the way, I have some questions.

1: My disk is 1TB formatted with ext4, but I set my storagenode space to 930GB. Is that right, or should it be more? Or maybe less?

2: The df -h information is not consistent at all:
The OS recognises 916GB; I now have 869GB used, but it says 100% of the disk is used, with only 511MB free. I’ve read on the internet that the remaining space goes to journaling, as ext4 is a journaling FS.

3: Is my storagenode configuration of 930GB right, if my OS recognises only 916GB?

Thanks in advance.


You can’t have ingress since you don’t have free space, but you still have egress.
That’s actually the best situation: all your HDD resources now go to serving egress and making money.
You don’t get paid for ingress; it is only needed to acquire data so you can make egress from it.

Have you disabled reserved blocks?

If not, then 5% is reserved.
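If you’re not sure, the reserve can be checked with tune2fs. A quick sketch (`/dev/sda1` is a placeholder; point it at your node’s actual partition):

```shell
# Show how many blocks ext4 reserves for root on the data partition.
# /dev/sda1 is a placeholder device name; substitute your own.
sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'

# Compare against the total block count to work out the percentage.
sudo tune2fs -l /dev/sda1 | grep -i '^block count'
```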


Hi xavidpr4,

I do not mean to be alarmist, but your current configuration is potentially dangerous and could lead to database corruption. It looks like you have allocated 930 GB on a drive that holds 980 GB (i.e. you have allocated 95% of capacity). Although it could run just fine like this, it is not recommended. I think there are a few things going on here that have caused confusion. Don’t worry, I went through this exact problem at one point.

Your hard drive capacity is rated by the manufacturer in gigabytes (GB, base 10). When you run the command df -h you are actually seeing the capacity expressed in gibibytes (GiB, base 2). If you run the command df -H (note the capital H) you will see the capacity expressed in terms of GB. If I recall, the size column shows the size of the drive before the ext4 5% reserve is subtracted. So if you want to allocate 90% of the disk that is available, you need to calculate 90% of the 95% that is left after the ext4 reservation.

The other side of this is that the storagenode will allocate space in whatever units you specify. So if you specify the units as GB, it will allocate based on gigabytes. If you specify GiB, it will allocate based on gibibytes.

So run ‘df -H’ and look at the size column. It will probably show 984 GB. Take this and subtract the 5% ext4 reserve space:
984 GB x 0.95 = 934.8 GB
Then take 90% of this value:
934.8 GB x 0.90 = 841.3 GB
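Those two steps can be done in one small shell snippet. The 984 figure here is just the example size from above; substitute whatever your own df -H size column shows:

```shell
#!/bin/sh
# Size column from `df -H`, in GB. 984 is an example figure; use your own.
DISK_GB=984

# Subtract the default 5% ext4 root reserve, then allocate 90% of the rest.
ALLOC_GB=$(awk -v d="$DISK_GB" 'BEGIN { printf "%.1f", d * 0.95 * 0.90 }')
echo "Suggested STORAGE allocation: ${ALLOC_GB} GB"
```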

The reason you see 511MB free space is that the node is designed to stop accepting uploads once it sees only 500 MB of free space left.

You can run the node with less than the 10% space, but you would be doing so against recommendations. This 10% extra space is especially important with small disks like this one to allow for overallocation, databases, moving of trash and garbage collection.

TL;DR The allocated space value for your storage node should be (probably, depending on what you see from df -H) 840 GB. You can change the setting in your docker run command and the node will slowly regain free space as pieces are deleted. This would be my recommendation. If you need instructions on how to change parameters and restart the node, they can be found here.


I really like your comment!!

I have learned lots of things. I have to say that I have multiple nodes, and this one is the smallest and my test node, so in order not to make the same mistakes on the rest of the nodes, I’ll change my configuration as you say.

Just to confirm, I should configure my storagenode with parameter STORAGE=“841GB”, right?

BTW, I’ve heard that the 5% and 10% “rules” only apply to small disks, but when you run big HDDs these percentages are reduced a bit; maybe instead of 10% it’s 7% or 5%, as 10% would represent a lot of storage.

For example for a 4TB disk:
4000GB x 0.95 = 3800 GB
3800 GB x 0.90 = 3420 GB

It’s a lot of space “lost” in my opinion. So can you confirm whether these percentages hold in all scenarios?

Thanks a lot in advance to all!!


Just a reminder: just because the box says 4TB doesn’t mean you have 4TB to play with; you really have 3.6TB to play around with.

Yes, this is correct.

You can allocate more than 90% on bigger disks. I know a lot of SNOs do, especially on 8TB and larger. On my 4TB node I still use 90%, but it could probably be safely increased to 95%. When allocating more than 90%, it comes down to preference. Risk vs reward. For me though, it’s not really worth the extra $0.30 per month I could gain by allocating 95% vs 90%. I agree that when you start getting into 8TB and larger disks, it does leave a lot of extra space that could be used.

Some SNOs will reduce the amount of ext4 reserve space as well (as per thej’s post above). That space is technically available on the drive, but can only be used by root, to prevent users from filling the drive to the point where the system can no longer function. Since you are just using this drive for storj, you could reduce the amount reserved by ext4. I personally wouldn’t bring this down to 0% though; maybe 1% or 2%.
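For reference, a sketch of how that change might look with tune2fs (the device name is a placeholder for your storage partition):

```shell
# Show the current reserved block count, then lower the reserve to 1%.
# /dev/sdb1 is a placeholder; point this at your own data partition.
sudo tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
sudo tune2fs -m 1 /dev/sdb1
```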


You actually do have (almost) 4.0 TB, which is 3.6 TiB.
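A quick way to check that conversion yourself (manufacturer-rated base-10 terabytes to the base-2 tebibytes Linux tools usually report):

```shell
# 4 TB = 4 * 10^12 bytes; 1 TiB = 2^40 bytes.
awk 'BEGIN { printf "4 TB = %.2f TiB\n", 4e12 / (1024 ^ 4) }'
```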


In that case you have 3.7 total; sometimes I forget how Linux handles sizes.

Btw, my big question now is: what can I do to reduce the allocated space of this disk that is filled too much?

Should I change the parameter STORAGE=“841GB” and nothing more, or do I have to use a more graceful way?


That is currently the best way to reduce the amount of space that is being used. Don’t forget that you must remove and recreate the container to do this. The space will continue to be used until pieces are deleted, so it won’t happen right away. Used space will slowly go down over time.
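For reference, the remove-and-recreate cycle might look roughly like this. Everything except the STORAGE value is a placeholder here, so reuse the exact flags from your original run command:

```shell
# Stop and remove the existing container; data on the mounted volume is untouched.
docker stop -t 300 storagenode
docker rm storagenode

# Recreate it with the new allocation. The container name, image, and the
# omitted flags below are placeholders; copy them from your original command.
docker run -d --restart unless-stopped --stop-timeout 300 \
    -e STORAGE="841GB" \
    --name storagenode storjlabs/storagenode:latest
    # ... plus your existing -e, -p and --mount flags
```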

Perfect thanks!!

I was afraid that doing that would be the same as going to the disk and start deleting files hahaha.



Very nice.
I wish we could have a calculator for that in the dashboard.


That can take a very long time. I decreased the size of one of my nodes about a month ago and it has only gone down by about 8GB.


Yes, that’s true. It takes a long time until the space is available again. I limited my node to 2TB a few weeks ago because I increased my other node here. So far, 8GB has been freed.

I wish there was a way to reduce capacity immediately. But of course this could be abused by SNOs who are not satisfied with their egress: get rid of their files, then increase the size again and hope for “better” pieces.
Maybe there are also additional concerns.


Yes, this subject was discussed on the forum some time ago if I recall correctly.

Just delete one of the satellite folders and get disqualified


Why would you delete it? You could just graceful exit one of the smaller satellites.

I wouldn’t recommend either though. Having a small amount of free space is a little risky, but I think in this case there is enough to wait for the deletes to happen naturally.
