My node is almost full, but it still has more than 500 GB of free space and I get very little ingress. Less than 100 MB today, compared to my new unvetted node getting more than 1 GB.
- How “full” is your trash?
- In the run command: did you specify the full disk space or a bit less?
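For context, the allocation is set via the `STORAGE` environment variable in the node's `docker run` command. This is an abbreviated sketch (wallet, email, address, and identity mounts omitted; paths and sizes are placeholders) showing the usual advice of allocating a bit less than the physical disk:

```shell
# Abbreviated storagenode run command -- STORAGE is the allocation.
# Set it below the physical disk size to leave headroom for trash
# and the node's databases. Paths here are placeholders.
docker run -d --restart unless-stopped --stop-timeout 300 \
    -e STORAGE="11.8TB" \
    --mount type=bind,source=/mnt/storj/storagenode,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```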
I tried setting the disk size bigger than the actual disk, same result.
I have 559 GB free on the actual disk.
Ok, so there is a discrepancy between the 1.73 TB “free” and the free disk space visible outside of your node (0.546 TB).
AFAIK the 13 TB reflects the number stated in the run command, whether or not that’s the real size of your disk. The rest is calculated from it, I assume.
The questions that come to my mind:
- Does your HDD have some deactivated sectors, so that the full space is not available?
- Are there any other files besides the ones from your SN on the disk using space?
It is a 12 TB WD Gold drive formatted as ext4, dedicated to Storj, nothing else on it. I thought the same; it looks like exactly 6.5% actual free space on the disk.
If it’s a 12 TB disk, why do you have allocated 13 TB to the SN?
So we are talking about “only” 184 GB in the end.
Because I wanted to test why I don’t get ingress. It was set to 11.8 TB, since 100 GB free is recommended. And more than 500 GB is still free.
What version is your node running? If it is more than 2 minor releases behind the current release (1.50.4 at the moment), your node will no longer get ingress.
I get ingress, but it is very low. It almost looks like my node is capped at 11.22 TB, even if I increase the max size.
Your screenshot says “11 T”?
And your orders.db seems to be extremely large. Not sure, but you can probably shrink it? Just an idea.
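As far as I know, orders.db is a SQLite database, so if old rows have been cleaned out, a `VACUUM` rewrites the file compactly. A minimal standalone sketch of the effect on a throwaway database (never run this against a live node’s databases; stop the node first):

```python
import os
import sqlite3
import tempfile

# Hypothetical demo database -- NOT a real orders.db.
path = os.path.join(tempfile.mkdtemp(), "orders.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE orders (blob BLOB)")
con.executemany("INSERT INTO orders VALUES (?)",
                [(b"x" * 4096,) for _ in range(1000)])
con.commit()

con.execute("DELETE FROM orders")  # rows gone, file keeps the free pages
con.commit()
before = os.path.getsize(path)

con.execute("VACUUM")              # rewrite the file without the free pages
con.close()
after = os.path.getsize(path)
print(before, after)               # file shrinks substantially
```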
Looks like Midnight Commander tricked me about the available space.
I simply don’t have more free space.
You really shouldn’t let a storagenode fill the storage media that much.
It will need room for trash and such… the recommendation is 10% free.
Ofc you can most likely do with much less than that on a node of that size… I would say anything less than 100 GB free space is risking damage to your node, but that’s just my rough guess…
You have been lucky the node’s storage fail-safes actually kept it from making a mess of things.
StorjLabs says the default is 10% for a good reason, I suspect.
It is nice to know that nodes might not get wrecked by filling them to capacity,
but not sure I would risk it tho…
and my storage solution is a bit more complex than just a single partition.
There are also a few different unit conventions to take into account: a
13 TB disk will actually be 13,000,000,000,000 bytes,
and some tools calculate in TiB, which is 1024-based… because it’s binary.
So a 13 TB disk will show as something like 11 or 12 TiB formatted.
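The conversion is easy to check with plain arithmetic (nothing Storj-specific here):

```python
# TB (decimal, 10**12 bytes) vs TiB (binary, 2**40 bytes)
tb_bytes = 13 * 10**12   # what a drive label means by "13 TB"
tib = tb_bytes / 2**40   # the same byte count expressed in TiB
print(round(tib, 2))     # -> 11.82
```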
Yeah, I already reduced the max size of the node. I just hate the different units. And sadly I started my 2nd node too late, so it is still vetting.
We had the “free space for trash etc.” discussion a couple of weeks ago and came to the conclusion that around 150 GB should be enough. 10% on a 10+ TB node would mean more than 1 TB free for “trash”, which will never be needed.
Great that you figured out where the problem was.
What will you do now? Add more space with an extra HDD?
Yeah, my gut call would be something like that…
Maybe we should have StorjLabs update their recommendations… ofc they might have, since it’s been years since I read them.
Sure, 10% seems like a lot… beyond a certain point it would “most likely” not matter anymore…
but guesstimating that and being 100% sure are two very different things…
Imagine it would take like 4 years to grow a node to that size today…
Does one really want to risk losing it because one runs outside the recommended standards…
Maybe… surely 10% is too much…
but since the node did seem to stop itself, maybe we don’t need to reserve space.
Only one way to find out…
Haha, my hero. It was a discussion where Alexey was involved; I cannot find the post. Anyway, @daniel1 is already over that threshold.
I’ve set the threshold to 500 GB (for a 10 TB HDD). But I also still have a long way to go (21% filled).
Yes, I already started a new node on another HDD, but it’s still in the vetting phase.