Storage space does not increase after changing config file

Hello, my node still has some free space on the drive. I see 133 GB free.
My configuration file had this setting: -e STORAGE="5450GB"
I've changed it to the new -e STORAGE="5500GB" \ but the dashboard does not show an increased available amount. It still says 98 MB available. Why is that?

You should leave approximately 10% free for garbage collection and removal.

I would not expect 100% of my allocated space to be used - files get uploaded and they get deleted.

I think this was needed in earlier versions, when there were problems with overusage. Now I see very stable usage and have never seen overusage.

The size of the databases is not accounted for, so please leave some room for them. You risk not being able to accept orders for payment because there's no room to add a record to the database.
Is it worth having more used space if you are not able to receive payments for it?


The node seems to convert a user's input into whatever unit it feels is appropriate (MB, GB, TB, etc.) and then round the number with limited precision. Storage values above about 666 GB are converted into TB and then rounded to the nearest 0.1 TB. So in many cases you can only effectively set your storage limit to the nearest 0.1 TB, for example: 5.4 TB, 5.5 TB, 5.6 TB.
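
If that is what is happening, a minimal sketch of the effect (hypothetical Python, not the node's actual code; the ~666 GB threshold and 0.1 TB step are taken from the observation above):

def displayed_allocation(gb):
    # assumed behavior: values above roughly 666 GB are converted to TB and rounded to 0.1 TB
    if gb > 666:
        return f"{round(gb / 1000, 1)} TB"
    return f"{gb} GB"

print(displayed_allocation(5460))   # 5.5 TB
print(displayed_allocation(5540))   # 5.5 TB - every setting in this band lands on the same 0.1 TB step

If the allocation itself is rounded the same way, that would also explain why nudging the setting by 50 GB may not change anything visible on the dashboard.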

It is a known issue.


@Alexey I leave about 30-40 GB of free space on each node. Is this too small an amount?

@Mark that's what I thought :smiley: thank you.

That does not sound like enough. The guideline is to keep about 10% free.
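For example, on a 1 TB allocation the guideline works out to roughly 100 GB kept free, so 30-40 GB per node is well below it.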

Mine has always been different in the config file… I assume it's because my docker run command supersedes / overwrites / takes priority over the config.yaml in that regard.

I have changed my node's allocated space a couple of times… every time stopping the node, then using
$ docker rm storagenode

then the docker run command
as stated in https://documentation.storj.io/setup/cli/storage-node

$ docker run -d --restart unless-stopped --stop-timeout 300 -p 28967:28967 -p 127.0.0.1:14002:14002 -e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -e EMAIL="user@example.com" -e ADDRESS="domain.ddns.net:28967" -e STORAGE="2TB" --mount type=bind,source="<identity-dir>",destination=/app/identity --mount type=bind,source="<storage-dir>",destination=/app/config --name storagenode storjlabs/storagenode:beta

As you can see, the rather extensive command line contains the STORAGE designated for the node… the config.yaml seems irrelevant, at least for me, in this case.

Mostly because I don't like to change the config.yaml - it has gone kinda crazy on me in the past when I tried to tinker a bit.

But because of the storage size configuration rounding, I see a problem for small nodes. With huge data storage it is OK to have a 100 GB "step", but for very small ones (like 1 TB or less) a 100 GB "step" is a problem. Now I understand why my 1 TB drives always have around ±30 GB of free space even if I set a maximum of 870 GB: it rounds up to 900 GB, and the 1 TB drive is 930 GB in total.
This means that a 1 TB drive can either be filled up to 900 GB (30 GB free) or to 800 GB (130 GB free), which is a rather large amount to keep unused.


A 1TB drive has 1000GB or 931GiB. You should assign no more than 900GB (838GiB). 870 does not make sense in either unit. It’s either too much if you’re trying to assign GiB (and if you are you should use the appropriate unit) or it’s smaller than it has to be if you’re trying to assign GB. If you set it to 900GB and the disk only has 30GB left after it’s full, you’re seeing the exact reason why that slack is advised to begin with.

Since HDDs use round GB values anyway (like 1000 GB), there is always an appropriate setting that will work just fine without any rounding, like 900 GB.
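
To spell out the unit math (a quick Python check, nothing Storj-specific):

# a "1 TB" drive as sold is 10^12 bytes
drive_bytes = 1 * 1000**4
print(drive_bytes / 1000**3)        # 1000.0 GB (decimal units)
print(drive_bytes / 1024**3)        # ~931.3 GiB (binary units, what most OS tools report)

# the suggested 900 GB allocation
allocation_bytes = 900 * 1000**3
print(allocation_bytes / 1024**3)   # ~838.2 GiB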

That said, I think the rounding is still confusing and totally unnecessary unless you're trying to display sizes. (And even then the unconventional rounding Storj uses is confusing, as you can drop from displaying 4 significant digits to only one, losing a lot of detail.)

The nitty-gritty details of how much space data takes can be quite overwhelming and will most likely require weeks of learning about the subject to truly start to grasp… and that's just the basics of it…

Ask how much space your 6 TB HDD has and you will end up with 10 different answers, and all of them would be right…

But in plain terms… there is overhead with almost all sorts of data, on many different levels… like when you print a text, you always use at least 1 sheet of paper even if it's a single line… this happens on multiple layers and sometimes creates huge inaccuracies in data sizes.

Take an old 512n HDD and write to it with 4k blocks, and the same hard drive will only hold 1/8 the maximum number of files… of course if all of the files are larger than 4k, then it shouldn't matter. This also depends on the type of file system, because not all file systems have a block-to-file relation.

I've got 15.53 TB free, which is 64% according to the web dashboard… so I take 15.53 / 64 and multiply that by 100,
which gives me 24.26 TB, basically within a 1% deviation… and I have 24 TB set for the storagenode… which is currently using 8.45 TB, checked earlier today.

8 TB being 1/3 of the total 24 TB is about 33%, plus roughly 1/16 of that 33% for the extra 0.45 TB, so about 2% more, giving 35% used,
which leaves 65% out of 100%. That is within 1% of the dashboard's figure, and since the percentage isn't shown with anything finer than whole percent, that is a deviation we should expect to see…
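
A quick check of that arithmetic (plain Python, just re-running the figures from the post):

# re-checking the figures quoted above
free_tb, used_tb, allocated_tb = 15.53, 8.45, 24.0
print(f"{free_tb / allocated_tb:.1%}")   # 64.7% free - the dashboard's whole-percent 64% is within 1% of this
print(free_tb / 0.64)                    # ~24.3 TB implied total, against the 24 TB actually allocated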

Like brightsilence said, the overhead on 512n and 512e drives is 10% I think, and on 4Kn drives you only have 3% overhead… if it annoys you that much then get drives that utilize the space better… but you will never get away from having overhead, on many levels…

The numbers look fine to me… actually when I checked today it takes up less space on the drive than I'm being paid for :smiley: though only 5% off, but that's most likely my system managing the overhead correctly through compression and variable block sizes.

Of course, these being 512-based drives, I already paid my 10% overhead on capacity and then my 25% overhead on RAID drive redundancy… so even though I've got 48 TB worth of HDD space, I only end up with something like 30 TB of useful capacity in my present configuration.
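(Roughly: 48 TB × 0.75 after the redundancy ≈ 36 TB, × 0.9 after the ~10% overhead ≈ 32 TB, with filesystem reservations and slack accounting for the rest of the gap down toward 30 TB.)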

Of course, on top of all that, if the system is poorly configured one can lose a ton more to block sizes / file sizes.


Parameters on the command line take precedence over options in the config file.
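
For example, if config.yaml contains a line such as storage.allocated-disk-space: 5.00 TB (the exact key name may differ between versions), but the container was started with -e STORAGE="5.5TB", the node uses the 5.5 TB passed on the command line.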

Is this how you change the allocated storage space? I didn't know. Thanks, I wondered how this could be done.

Usually like this: https://documentation.storj.io/resources/faq/how-do-i-change-my-parameters-such-as-payout-address-allotted-storage-space-and-bandwidth

However, if you do not have free space (don't forget to leave 10% free just in case), then you can either migrate to a bigger drive or start a second node.
