Node showing 0 Ingress for a few weeks now, egress fine

My node still isn’t showing any ingress traffic at all this month: 0 bytes.

In September I received 9.41GB of ingress overall, though in the last week or so of the month it dropped to 0 bytes. September still ended with 900GB+ of egress, so I knew the node was performing well.

I’m concerned because it’s now the 19th of October and ingress is still 0 bytes for all satellites. Egress traffic is still flowing, at 321GB for the month so far.

I’ve checked the logs as recommended on here:
and I can’t see anything that sticks out.

Not sure if I should be concerned or just wait it out. I mentioned in a previous post that my node is showing -8.39GB in the “Free” section under “Total Disk Space”. I’ve checked that 10% of the HDD is left unallocated as free space, as recommended when setting up a node, and verified this on the device itself.

I’m not complaining about the lack of ingress; I know there are many legitimate reasons why I might not be receiving traffic, but 0 bytes?

Here are my node specs, if helpful:
Ver: v1.14.7
Device: Raspberry Pi 4B

This is normal. Your node is full, so there is no ingress. Sometimes the node can show negative free space if it goes over its allocation for one reason or another. This shouldn’t happen, but it can, which is why the 10% buffer exists. You will also see a negative number if you reduce your allocated size below what is already stored.

You can check your disk space with `df -H`.
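For reference, a scriptable version of that check (the `/` mount point here is just a placeholder so the snippet runs anywhere; point it at your node’s actual data drive):

```shell
# Placeholder mount point -- substitute your node's data mount, e.g. /mnt/storagenode.
MOUNT=/

# Human-readable sizes (powers of 1000) for just that filesystem:
df -H "$MOUNT"

# Scriptable form: POSIX portable output, grab the "Available" column (1K blocks):
AVAIL_KB=$(df -P "$MOUNT" | awk 'NR==2 {print $4}')
echo "Available on ${MOUNT}: ${AVAIL_KB} KB"
```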


Thanks for the reply and quick tip.

Seems fine: it shows 185GB available on the node’s HDD. Is the 0 bytes exclusively for node ingress, while in the background my node may still be receiving small amounts of traffic from status pings? Or I assume that’s part of the audit checks as well?

Ingress is counted as data uploaded to your node by customers and data uploaded to your node as part of the repair process. Since nothing else contributes to this tally, 0 bytes actually means 0 bytes.


Thanks again, all makes sense, cheers!


There is something else that could explain the negative free space: if you allocated 1.67TB to your node long ago, the node software used to round that up to the nearest 100GB, i.e. 1.7TB.

So your node likely used 1.7TB until one of the recent versions changed that behavior: it now rounds up to the nearest 10GB instead of 100GB, so it correctly registers the allocated 1.67TB. That would leave roughly 30GB of extra data on the node, which gets removed gradually as customers delete files.
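The rounding difference above can be sketched with the allocation expressed in whole GB (illustrative arithmetic only, not the node’s actual code):

```shell
# How a 1.67TB allocation rounds under the old 100GB granularity
# versus the newer 10GB granularity.
ALLOC_GB=1670

OLD_GB=$(( (ALLOC_GB + 99) / 100 * 100 ))  # round up to nearest 100GB
NEW_GB=$(( (ALLOC_GB + 9) / 10 * 10 ))     # round up to nearest 10GB
EXTRA_GB=$(( OLD_GB - NEW_GB ))            # data held beyond the new limit

echo "old limit: ${OLD_GB}GB  new limit: ${NEW_GB}GB  extra: ${EXTRA_GB}GB"
# prints: old limit: 1700GB  new limit: 1670GB  extra: 30GB
```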

Just a guess, but this happened to one of my nodes.


Thanks for that, Pac, handy to know; I’ll keep an eye on it. :eyes: