The console error doesn’t even say which node this error refers to.
As far as I can see, the multinode had the correct data. According to the multinode dashboard the node had overusage and therefore no more ingress happened.
So the node dashboard was at fault and showed different values for some reason.
This is something that should never ever happen. How can you rely on any of this data if at one time the MND could be correct and at another time the node dashboard could be correct? Maybe another time neither of them is correct?
I don’t think so, because in my case a simple restart resolved the discrepancy. If it was a general issue, a restart without changing anything would not have resolved it.
For some reason your node believed that it had no free space left in the allocation.
Since you do not have old logs, it’s hard to say why.
But from the issue and pull request above I can assume that the MND is more accurate in displaying the occupied space (and exactly this is what is reported to the satellites). And this PR will bring the same feature to the single-node dashboard as well.
I don’t understand the whole thing: docker inspect gave me an allocated space of 6.5TB.
This means the node was running with that configuration.
So the node dashboard was correct.
But for some reason the multinode values were the ones transmitted to the satellite, and that’s why the node did not receive any more ingress.
I don’t know how the multinode can believe 6.01TB are assigned and the node dashboard believe 6.5TB are assigned when docker clearly says 6.5TB are assigned.
This is how it works: you allocated some space, then started the node; the node detected that its accounted space and the allocation were in conflict, so it allocated the minimum of the configured allocation and the physically used plus physically free space.
I can only speculate, since I do not have logs, but likely one of two things happened during the last start: either it did not fully recognize the used space, so the detected used plus free space was less than the allowed allocation and it allocated that minimum, i.e. much less than you specified as an allocation, or the recognized used space was corrected to a new (but outdated) value and it turned out that you had no free space left in the allocation.
When you restarted the node, it recalculated the allowed minimum (allocation minus used may not be greater than the physically free space) and allocated as you specified.
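To make that rule concrete, here is a minimal sketch of the minimum calculation, assuming the node knows its configured allocation, its accounted used space and the physically free space on the volume (the function and parameter names are made up for illustration, this is not the actual storagenode code):

```go
package main

import "fmt"

// effectiveAllocation sketches the rule described above: the node never
// advertises more space than the configured allocation, and never more
// than what it has accounted as used plus what the filesystem reports
// as physically free.
func effectiveAllocation(configuredBytes, accountedUsedBytes, physicallyFreeBytes int64) int64 {
	usable := accountedUsedBytes + physicallyFreeBytes
	if usable < configuredBytes {
		return usable
	}
	return configuredBytes
}

func main() {
	// If the node under-detects its used space on startup, the effective
	// allocation shrinks far below what the operator configured.
	fmt.Println(effectiveAllocation(6_500_000_000_000, 1_000_000_000_000, 900_000_000_000)) // 1900000000000
}
```

With a rule like this, an under-detected used value (or a stale free value) on startup would produce an effective allocation smaller than the configured one, which matches the behavior described above.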
I can’t tell.
But at least there should be no differences in the values. The node dashboard and the multinode dashboard show different values for total disk space, free space and overused space.
Yes. And this is what the fix is about: to make them equal.
However, I do expect even more confusion, because the single-node dashboard will likely show a different (smaller) value than what you have allocated in some cases.
I have now had more nodes with the same issue:
They show no ingress, only downloads. One has not had any ingress since the start of the month.
This one I just restarted without any config changes, and ingress is flowing in again.
Something is broken.
I am wondering if the trigger could be when the allocated space is higher than the actual disk space and there is no disk space left. That will stop ingress. Now, when disk space gets freed up again, either manually or by trash/GC, could it be that the node does not report this newly available space to the satellite and therefore remains marked as full for the satellite even though space is available again?
This could explain why a full restart would be required, as only then would the updated values get reported to the satellites again.
Nodes report their usage to the satellites on each check-in, which happens every hour by default.
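As a rough sketch of what such an hourly check-in loop could look like (the helper names and the reported value are assumptions for illustration, not the actual storagenode contact service):

```go
package main

import (
	"context"
	"log"
	"time"
)

// availableSpace would return the free space left in the allocation;
// the constant is just the example value quoted later in this thread.
func availableSpace() int64 { return 461144064 }

// reportToSatellite stands in for the check-in RPC that tells the
// satellite how much capacity the node still has.
func reportToSatellite(ctx context.Context, bytes int64) error {
	log.Printf("check-in: reporting %d bytes available", bytes)
	return nil
}

func main() {
	ctx := context.Background()

	ticker := time.NewTicker(time.Hour) // one check-in per hour by default
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := reportToSatellite(ctx, availableSpace()); err != nil {
				log.Printf("check-in failed: %v", err)
			}
		}
	}
}
```

If the node’s own view of its available space is wrong, that wrong value is also what reaches the satellite every hour, which would explain why ingress stays off until the value is recalculated, e.g. by a restart.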
Do you have upload errors related to not enough space in your logs?
I do not have either.
But each upload request states how much data it wants to upload, and every failed one also contains the reason.
The reported available space should be close to zero right before the uploads are stopped.
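Purely as an illustration of the kind of check implied here (hypothetical names, not the actual upload handler):

```go
package main

import "fmt"

// canAcceptUpload rejects an upload whose requested size does not fit
// into the space the node still believes it has available.
func canAcceptUpload(requestedBytes, availableBytes int64) error {
	if requestedBytes > availableBytes {
		return fmt.Errorf("not enough space: requested %d bytes, only %d available", requestedBytes, availableBytes)
	}
	return nil
}

func main() {
	// A 2 MiB piece against roughly 460 MB of believed available space still fits.
	fmt.Println(canAcceptUpload(2<<20, 461144064)) // <nil>
}
```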
The node currently sees egress only. Last ingress according to logs was around 3 hrs ago.
In the logs I see "Available Space": 461144064. If that is bytes, then that would be about 460 MB, right? df says I have more than 900GB free on the disk.
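Quick sanity check on that conversion, using the value from the log line above:

```go
package main

import "fmt"

func main() {
	const availableSpace = 461144064 // bytes, as printed in the log

	fmt.Printf("%.1f MB  (decimal, 1 MB = 10^6 bytes)\n", float64(availableSpace)/1e6)     // ≈ 461.1 MB
	fmt.Printf("%.1f MiB (binary, 1 MiB = 2^20 bytes)\n", float64(availableSpace)/(1<<20)) // ≈ 439.8 MiB
}
```

So yes, roughly 461 MB in decimal units; the 460 MB estimate is about right.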
But here it seems to be a matter of allocated disk space. Still, the display of free space in either the MND or the ND is confusing at best, and the ND does not display the overusage that might be the reason for the node not getting ingress.
Do not forget to leave 10% of the disk free as a safety buffer. (I think there is a band of about 500GB in which deletes/trash and re-activating ingress happen, so it looks pretty normal to me with 418GB of trash.)
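To put the 10% rule of thumb into numbers (the factor and the example disk size are only an illustration of the advice above):

```go
package main

import "fmt"

// recommendedAllocation applies the 10% safety-buffer rule of thumb:
// allocate at most about 90% of the total disk size.
func recommendedAllocation(totalDiskBytes int64) int64 {
	return int64(float64(totalDiskBytes) * 0.9)
}

func main() {
	const disk int64 = 8_000_000_000_000 // e.g. an 8 TB disk
	fmt.Printf("allocate at most ≈ %.1f TB\n", float64(recommendedAllocation(disk))/1e12) // 7.2 TB
}
```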
Maybe this is because of the different data accumulation for the ND and the MND?
But it is confusing: free is not really free. Free is shown as 62.48GB in one place and 0.97TB in another. One dashboard indicates overusage while the relevant dashboard and the API do not.
I believe free should be what is actually available to the node and not some theoretical number without relevance; at least I don’t see any relevance to it. When I see 62.48GB free, I think all is fine and the node still has space left, even more so as no overusage is shown.
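To make explicit the two different notions of "free" that seem to be mixed here (the numbers are loosely based on the values quoted in this thread; the used value is my assumption to make the arithmetic work out):

```go
package main

import "fmt"

func main() {
	const (
		allocated int64 = 6_500_000_000_000 // configured allocation, 6.5TB
		used      int64 = 6_437_520_000_000 // space the node accounts as used (assumed)
		physFree  int64 = 970_000_000_000   // space df reports as free on the volume
	)

	freeInAllocation := allocated - used // "free" relative to the allocation
	fmt.Printf("free in allocation: %.2f GB\n", float64(freeInAllocation)/1e9) // 62.48 GB
	fmt.Printf("physically free:    %.2f TB\n", float64(physFree)/1e12)        // 0.97 TB
}
```

Depending on which of these two a dashboard shows, you get either 62.48GB or 0.97TB, and neither value on its own tells you whether the node is in overusage.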
I have another node where it seems to match better: