Storage size rounding

I was changing my storage size setting to allow more ingress, and I noticed that once I go past 666GB it seems to internally round up to 0.7TB (700GB), so I can no longer effectively choose the storage size in 1GB increments. Everything from 666GB to about 750GB behaves as though I chose 700GB. If I go past 750GB it rounds to 0.8TB (800GB), and then my node will start downloading 100GB of customer/test data.
Is this intended? I would prefer to have more fine-grained control over the storage size parameter.
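
Here is a rough sketch of the kind of unit rounding that would reproduce what I see (just my guess, not the actual storagenode code): if the value switches to TB with one decimal place once it exceeds about two thirds of a TB (~666.7GB) and then gets parsed back, everything from just above 666GB up to 749GB collapses to 0.7TB, and anything from about 750GB becomes 0.8TB:

for gb in 600 666 667 700 749 751; do
  awk -v g="$gb" 'BEGIN {
    # illustrative only: format in TB with one decimal once past ~2/3 of a TB
    if (g > 1000 * 2 / 3)
      printf "%dGB -> would be treated as %.1fTB\n", g, g / 1000
    else
      printf "%dGB -> kept as %dGB\n", g, g
  }'
done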


What does the log tell you the “Available Space” is?

If running docker on GNU/Linux …

sudo docker logs storagenode | grep -i available | tail -n 2

or for external logging:

cat node.log | grep -i available | tail -n 2

If I set storage to 749GB I get: “Available Space”: 845644640
If I set storage to 751GB I get: “Available Space”: 100838685024
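
The difference between those two numbers is almost exactly the 0.1TB step:

echo $(( 100838685024 - 845644640 ))   # 99993040384 bytes, i.e. ~100GB = 0.8TB - 0.7TB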

That is a bit strange… Add 2GB and the node takes 100GB.

How much total physical space do you have on your HDD? How much have you assigned so far?

It’s a 2TB external drive that I use for Storj and some backup files. At the moment I have assigned STORAGE=“700GB”. In the past I liked to slowly increase the storage over time, but now that I’ve passed the mysterious 666GB threshold it seems I can only increase it in 100GB increments.

Set one fixed value and keep 10% free for overhead, to avoid your node running into unwanted issues.

I have 1.1TB free at the moment, so I’m OK on space, but I’d still like to increase the allocation more gradually so I’m not slammed with 100GB of test data at a time. Hopefully other users with smaller, almost full drives don’t run into issues with the 10% overhead calculation when their node isn’t using the actual storage value they set.

Just curious how you would be able to tell the difference?
Most threads I’ve seen here suggest you will get 200-300GB a day regardless of how large your threshold is.

direktorn: That has not been my experience. I watch the logs, and I see that when the “Available Space” drops below 100MB, ingress (customer uploads) soon stops. The “Available Space” is tied to the storage setting, but it apparently gets rounded by some kind of dynamic unit rounding.
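
In pseudo-shell, the rule I am describing looks roughly like this (the variable names and the cutoff are my reading of the logs, not storagenode source code; the numbers are derived from my 749GB example above):

allocated=700000000000    # what my 749GB setting apparently gets rounded to
used=699154355360         # derived: allocated minus the reported 845644640
available=$(( allocated - used ))
if [ "$available" -lt 100000000 ]; then
  echo "below ~100MB free: the node reports itself full and ingress stops"
else
  echo "$available bytes free: ingress continues"
fi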


It is because you have files in the trash folder. As soon as stored data + trash reaches your set space, you won’t get more data until some files in the trash get deleted.
I don’t get why you increase the size of your storagenode gradually. Just set it to a value you have spare and let it fill.
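
In other words (made-up numbers, just to illustrate the accounting):

allocated=700000000000
stored=650000000000
trash=50000000000
echo "left for new ingress: $(( allocated - stored - trash )) bytes"   # 0, so no more uploads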

donald.m.motsinger: I was describing normal node behavior to direktorn, who seemed to suggest that a node would keep downloading 200-300GB of customer data a day even after exceeding its storage limit, which is not the case. I believe what I described is the standard behavior even with nothing in the trash folder: less than 100MB of available space is the threshold below which the node immediately notifies the satellites that it is low on disk space.

I choose my storage limit based on personal preference. Different strokes for different folks.


I’m sorry I didn’t cover all use cases in my statement; I don’t know what happens when you have 100MB left. I don’t consider that a normal day-to-day use case.

As more nodes become full of test data, it will probably become more normal.

See https://github.com/storj/storj/issues/3764
Already reported that a while ago

Looks like that was about 3 months ago. I guess they don’t consider it a bug, or they’d rather work on other stuff right now, or maybe they don’t read those issue reports. I might have to learn golang and try to fix it myself, along with the dashboard that lists the trash folder size as available space.

Well I’m sure they’d appreciate a Pull Request that fixes both errors :joy:

Otherwise we’ll just have to wait until someone eventually fixes these issues. As they are both rather minor problems that don’t really impact storagenode operations, it’ll probably take some time.

Noticed that just now. I think it should be fixed; I don’t see why this limitation would be useful anyway.

I’m gonna add a note to the github issue.


I was experimenting with a non-docker build of the storage node and did not notice this issue. Perhaps it only affects docker installs, where the storage setting is passed on the command line instead of being read from the config file. I wish I could use the config file to set the storage limit with docker, but if I omit the storage parameter from the docker run command it defaults to 2TB instead of using the value in the config file.
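
For example, this is the kind of run command I mean; only the STORAGE variable matters here, the identity/storage mounts and the other required -e values are left out, and the image tag is just whatever you normally run:

docker run -d --restart unless-stopped --name storagenode \
  -e STORAGE="700GB" \
  storjlabs/storagenode:beta

If I drop the -e STORAGE line, the node falls back to a 2TB default instead of picking up the allocation from config.yaml.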