ingress has in the past been the same across most nodes… down to an accuracy within about 1%
these are my numbers for this month… yours should be about the same across all the nodes on your IP…
(meaning your nodes’ total bandwidth utilization will equal the totals of nodes on other IPs)
if that makes sense… each /24 subnet gets an amount of ingress allocated, and that is very precise… the number of nodes on the subnet splits the allocated data
if it doesn’t match up, something has gone wrong… which is what it sounds like to me…
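The split described above can be sketched in a few lines. This is only an illustration: the allocation figure and node count below are made up; the rule itself (subnet allocation divided evenly by the nodes on that subnet) is from the post.

```python
# Sketch of the per-/24 ingress split described above.
# The 500 GB allocation and the 4-node count are made-up numbers.

def per_node_ingress(subnet_allocation_gb: float, nodes_on_subnet: int) -> float:
    """Ingress each node behind one /24 can expect if the split is even."""
    return subnet_allocation_gb / nodes_on_subnet

# e.g. 500 GB allocated to the subnet this month, 4 nodes behind the same IP:
print(per_node_ingress(500.0, 4))  # prints 125.0
```

So adding a second node on the same IP doesn’t double your ingress; it halves each node’s share of the same subnet allocation.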
Hi all, I’ve had the same issue with my graph displaying 0 Bytes ingress for a few weeks now. At one point my 1.7 TB was full, then over 300 GB freed up, yet still no ingress; I kept wondering how it knew to free up space.
My node has been updating itself automatically and the egress traffic seems normal. Currently on v1.14.7 for the last 12.5 hrs.
I have run “docker logs storagenode” to see if anything stands out, but nothing jumps out at me.
Any other pointers? Perhaps I can stop the node, remove the containers and restart it? Raspberry Pi 4B.
I’m now in month 9 and this is the only issue I’ve come across. Can I check ingress some other way?
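Eyeballing the full log output is hard; filtering it for upload activity and errors is easier. A minimal sketch, where the sample lines are invented stand-ins for real “docker logs storagenode” output (the real log format may differ):

```python
# Sketch: scan storagenode log output for upload activity and errors.
# The sample lines below are made up; feed in real "docker logs storagenode"
# output instead. No "uploaded" lines over a long window would be
# consistent with zero ingress.

sample_logs = [
    "INFO  piecestore  download started",
    "INFO  piecestore  uploaded",
    "ERROR contact:service  ping satellite failed",
]

def grep(lines, *keywords):
    """Return lines containing any of the keywords, case-insensitively."""
    return [line for line in lines
            if any(k.lower() in line.lower() for k in keywords)]

uploads = grep(sample_logs, "uploaded")
errors = grep(sample_logs, "error")
print(len(uploads), "upload lines;", len(errors), "error lines")
```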
I think ingress is generally very low at the moment. Several possible reasons: low customer activity, no new customers, a change in the number of uploaded pieces, fewer test uploads from Storj Labs, the repair backlog being done.
you should have ingress when the node is running normally…
tho there are a couple of things that can give you zero ingress:
suspension, or running a too-old version… not aware of anything else that can give you zero ingress, and since it’s not the latter, i would guess it may be the former…
the stefan.benten satellite has been taken down… tho in your version of the dashboard it might not show… i haven’t gotten around to updating yet because of the whole version number scheme being off, so i figured i would wait a day or two.
ofc there may be some currently undiscovered problem, but i doubt it… i haven’t seen anyone complaining about version 1.14.3 or the “new” version 1.14.7, which is the same code as 1.14.3.
so yeah, check your suspension score, and if you are suspended, figure out why so you can try to avoid it in the future. a suspension lasts for a month, i think, and basically means your node will not get any ingress during that period.
yeah, the suspension & audit overview still has stefan.benten on it; i just updated my test node to check… so your suspension & audit overview on the dashboard should look pretty much exactly like the screenshot i posted, or very close to 100%… i forget how low one can get before being disqualified or suspended.
I’ve checked and all looks good to me on the dash. I’ve checked the API endpoints and ingress is 0 in there as well.
Good point regarding no space available. Last month (you may not believe me now) I did have 300 GB odd free for a week or so, still with 0 bytes ingress. Today, though, I’m seeing the below… I did keep to the setup advice as well and reserved some space on the HDD.
i should have mentioned that; too obvious… for my mind to comprehend, i guess…
i keep forgetting about that because thus far i’ve just been wanting more ingress… running out of space seems so far away at the moment; this is the second damn time i’ve forgotten people could run out of space… tho in this case he did say he had 300gb free… ofc the node may be set to allocate 300gb more than the disk capacity actually has, i suppose.
I’ve fallen in that trap many times before. But sometimes the obvious solution that seems too obvious is the correct one. Looks like it was actually spot on in this case. (I only thought to mention it because I saw the same thing in another thread where a person overlooked that their node was full)
There was a code change in v1.12.3 that caused the free space to be displayed as disk (volume) free space instead of allocated free space if that node had large amounts of trash or if the free space was negative. This was reverted in v1.14.4 so that the node will now show negative space instead of the disk (volume) free space. So you are probably not crazy.
the free space shown is out of the allocated space you select in the docker run command…
so you could have a 10 TB hdd with only 1 TB allocated; the node might then store 1.01 TB, thus being 0.01 TB over max while still having 8.99 TB free on the disk
it will never be 100% accurate because it cannot keep track of exactly how much space everything takes at all times… or it can, but that costs resources.
it will not take any ingress until it’s below the max, and then it may go over a bit again… but it should stay close to the max, within some percentage offset either over or under.
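The 10 TB / 1 TB scenario above works out like this (a sketch reusing the same made-up numbers):

```python
# Sketch of the allocated-vs-physical free space example above.
disk_capacity_tb = 10.0   # physical drive size
allocated_tb = 1.0        # what the docker run command allocates
stored_tb = 1.01          # the node slightly overshot its allocation

over_allocation_tb = stored_tb - allocated_tb   # ~0.01 TB over max
disk_free_tb = disk_capacity_tb - stored_tb     # ~8.99 TB physically free

# The node stops accepting ingress while stored >= allocated:
accepts_ingress = stored_tb < allocated_tb
print(round(over_allocation_tb, 2), round(disk_free_tb, 2), accepts_ingress)
# prints: 0.01 8.99 False
```

The point is that “free space” on the dashboard is measured against the allocation, not the physical disk, so a node can be “over max” with terabytes still free on the drive.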
If you actually have free space on the disk, it will not be a problem. If you physically do not have free space, then it’s a problem: the storagenode will not be able to update its databases, and you will not be paid for egress and storage, because the storagenode can’t store orders and thus has nothing to submit to the satellite.
This is one of the reasons why we suggest allocating not more than 90% of the free space.
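That guideline is simple arithmetic; a sketch (the 1.7 TB drive size is taken from the poster’s setup above, the helper name is mine):

```python
# Sketch: allocate at most 90% of the drive, per the recommendation above.
def max_allocation_tb(disk_capacity_tb: float, reserve_fraction: float = 0.10) -> float:
    """Largest allocation that still leaves the recommended reserve free."""
    return disk_capacity_tb * (1.0 - reserve_fraction)

# e.g. a 1.7 TB drive should get roughly a 1.53 TB allocation:
print(round(max_allocation_tb(1.7), 2))  # prints 1.53
```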
Thanks again for your time. I haven’t made any changes to the space allocated; I have a doc on my Pi with the run command to make sure it is the same each time.
I kept to the advice on the docs so nothing has changed. I will take a look later at the drive and see how much is showing in the OS.
it’s recommended to keep 10% free capacity on the drive, because data blocks sometimes get sorted and reorganized, which requires more space than the data being reorganized occupies.
a bit like how one should never fill SSDs past 80%, because then they can wear out: the blocks being used will always be the same, and with little free space the drive cannot allocate new blocks when trying to “defrag”/reorganize the data… maybe converting it from SLC to QLC.
but yeah, you should have 10% free space on the drive… even if some do edge closer, i wouldn’t recommend it in 99% of all cases.