I’m trying to understand a significant ingress difference between two Storj nodes that appear to run under nearly identical conditions, and I’d appreciate technical input on which parts of Storj’s node-selection logic might explain it.
Setup
Both nodes have:
Equal hardware specifications
Equal CPU performance
Equal RAM allocation
Equal disk type and equal disk speeds
Equal internet speed / bandwidth
Equal network stability
Equal uptime
Both fully vetted
Different /24 subnets
Both subnets currently show 2 neighbours
Observed ingress difference
One node consistently receives more than 3x the ingress of the other.
Example current daily ingress:
Node A: around 9–10 GB/day
Node B: around 30+ GB/day
This has persisted over time and is not just a temporary spike.
Reputation / health
Both nodes show excellent scores:
auditScore = 1 (or extremely close)
suspensionScore = 1
onlineScore ≈ 1
So there does not appear to be a reputation issue.
Main known difference
The main visible difference is node age / stored amount:
Newer node
Joined: January 2026
Average stored data: ~118 GB
Ingress summary: ~269 GB total
Older node
Joined: September 2025
Average stored data: ~1.8 TB
Ingress summary: ~912 GB total
What I’m trying to understand
I understand that Storj’s node selection is probabilistic and not equalized, but I’d like to know why two nodes with otherwise equal technical conditions show a persistent 3x+ ingress difference.
Questions:
Does current stored amount strongly influence future ingress?
Does repair traffic reinforce already larger nodes?
Does node age continue to matter significantly after vetting?
Can historical satellite trust still favor older nodes despite equal current scores?
Is this level of difference considered normal long-term behavior?
Additional observation
The larger node also receives noticeably more repair ingress, which makes me wonder whether existing stored data creates a compounding effect.
One possible answer might be “the other node on the subnet might be full, resulting in more ingress” - but that does not explain an over-3x difference. I also see similar behaviour (more like 2x) on otherwise identical nodes that have only 1 neighbour, so this cannot be the only answer.
Question to experienced operators
Have others observed similar behavior where two technically equal vetted nodes, with equal neighbour count and equal subnet separation, still show one node receiving 3x or more ingress over longer periods?
I’d be interested to understand whether this is expected Storj behavior or whether there are additional hidden selection factors involved.
Neighbors-being-full can easily explain ingress differences: why do you dismiss it?
The larger node may also have lower latency to a customer uploading bulk traffic: it’s simply winning more of the data. We’ve had massive ingress spikes every night for months now; if you’re winning more of it one night, you’re probably winning more every night (since it’s coming from the same places).
You have no control over neighbors who may be competing with you, nor where the uploads are coming from.
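To illustrate the latency point: uploads race, and the slowest transfers are cancelled once enough pieces have landed (long-tail cancellation). Below is a toy Go simulation of that mechanism; the 110-selected/80-kept split, the latency distributions, and the 10 ms edge are all made-up illustrative numbers, not measured values.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// Toy model of long-tail cancellation: the uplink starts uploads to
// `total` selected nodes and keeps the first `needed` pieces that
// finish; the rest are cancelled. All numbers here are illustrative.
func main() {
	const (
		total  = 110    // nodes selected per upload (assumed)
		needed = 80     // pieces kept (assumed)
		races  = 100000 // simulated uploads
	)
	winsA, winsB := 0, 0
	for i := 0; i < races; i++ {
		lat := make([]float64, total)
		for j := range lat {
			lat[j] = 100 + 30*rand.NormFloat64() // background nodes, ms
		}
		lat[0] = 100 + 30*rand.NormFloat64() // node A: average latency
		lat[1] = 90 + 30*rand.NormFloat64()  // node B: 10 ms faster
		a, b := lat[0], lat[1]
		sort.Float64s(lat)
		cutoff := lat[needed-1] // slowest upload that still gets kept
		if a <= cutoff {
			winsA++
		}
		if b <= cutoff {
			winsB++
		}
	}
	fmt.Printf("node A kept: %.1f%%  node B kept: %.1f%%\n",
		100*float64(winsA)/races, 100*float64(winsB)/races)
}
```

The per-race edge looks modest, but it is systematic rather than random, so over many uploads from the same customers it shows up as a stable ingress gap instead of averaging out.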
Re-read my post: I see the same on nodes that have no neighbours. Some nodes with the same config and the same hardware, on subnets with 0 neighbours: one gets 30 GB a day, another gets 60 GB. But how does a larger node get more data? Both have WELL over 1 TB of free space.
This happens even on the nodes with no neighbours, so a full neighbour cannot be the full reason.
I have seen three identical nodes sharing the same IP where one node always had three times the ingress of the other two. I can only guess this is because node selection is not perfectly randomized.
I would agree that this is due to node selection, but I would disagree with the reason: it actually is perfectly randomized. It’s randomized; nobody promised it would be homogeneous.
It’s like how people complained about the early iPod shuffle’s “shuffle” feature, where it would repeat songs or play a few sequentially, so Apple made it less random (removing repeats and sequential plays) in order to appear more random to users.
I think the same happens with the OP’s nodes. And that’s fine. The OP should stop scrutinizing them.
I am not complaining, just trying to understand things. If you ran node selection for those 3 nodes 1 million times, shouldn’t the resulting upload distribution come out almost equal?
So, one neighbor or zero neighbors? Because a 2x difference is consistent with one node having a full neighbor and the other not.
Anecdotally, I don’t see any difference at all in ingress on any of my nodes, except the one that has neighbors.
I asked a robot friend (gpt-5.4 codex on xhigh reasoning mode) to analyze the satellite code around node-selection randomness, and provided your notes. Some things are configuration dependent, and we don’t have access to those, but generally the pure node-selection algorithm alone is likely not the only contributor.
Here is their full analysis and follow-up questions:
----- 8-<-------
I checked the current satellite code.
What the code does:
Default upload target selection is subnet-declumped random.
The selector groups by last_net, then picks a node inside that group randomly.
Repair target selection reuses the same upload selector.
Default selection does not directly use node age, stored amount, or piece count once a node is eligible.
What that means:
A persistent 2x to 3x ingress gap between two truly equivalent fully vetted nodes is not explained by “randomness”.
More repair ingress on the larger node does not prove a separate repair bias. Repair replacement nodes go through the same selector path.
If this difference is real, the missing variable is more likely per-satellite vetting history, actual network identity seen by the satellite, or some other eligibility difference that is not obvious from the node dashboard.
There is also an inconsistency in the description here:
one place says 1 neighbour
another says 0 neighbours
Fix that first.
Questions:
Post ingress split per satellite for 30/60/90 days. Total-only numbers are not useful.
Post the exact date each node became vetted on each satellite. “Fully vetted” is too vague.
State exactly how neighbours is being counted, and whether both nodes are seen as IPv4, IPv6, or dual-stack.
----- >-8 -------
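To make the quoted description concrete, here is a minimal Go sketch of subnet-declumped selection; this is my own toy reconstruction of the behavior described above, not the actual satellite code. It also shows why a full (and therefore ineligible) neighbour roughly doubles your share: you stop splitting your subnet’s picks with it.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Node is a minimal stand-in for a selectable storage node.
type Node struct {
	ID      string
	LastNet string // the node's /24 subnet as seen by the satellite
}

// selectDeclumped sketches the behavior described above: group the
// candidates by last_net, pick a subnet at random, then pick a node
// at random inside that subnet. Each subnet gets equal weight no
// matter how many nodes it contains.
func selectDeclumped(nodes []Node, rng *rand.Rand) Node {
	bySubnet := map[string][]Node{}
	var subnets []string
	for _, n := range nodes {
		if _, seen := bySubnet[n.LastNet]; !seen {
			subnets = append(subnets, n.LastNet)
		}
		bySubnet[n.LastNet] = append(bySubnet[n.LastNet], n)
	}
	group := bySubnet[subnets[rng.Intn(len(subnets))]]
	return group[rng.Intn(len(group))]
}

func main() {
	rng := rand.New(rand.NewSource(1))
	nodes := []Node{
		{ID: "A", LastNet: "203.0.113.0"},   // alone on its subnet
		{ID: "B1", LastNet: "198.51.100.0"}, // shares a subnet with B2
		{ID: "B2", LastNet: "198.51.100.0"},
	}
	counts := map[string]int{}
	for i := 0; i < 1_000_000; i++ {
		counts[selectDeclumped(nodes, rng).ID]++
	}
	// Expect A ~500k picks, B1 and B2 ~250k each; drop B2 from the
	// candidate list (a "full neighbour") and B1 jumps to ~500k.
	fmt.Println(counts)
}
```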
It totally should, unless there are consistent factors that skew the process. It’s still random, but no longer uniform.
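Here is a quick Go sketch of that distinction: over a million draws the selection counts come out nearly equal, but if some downstream factor consistently favors one node (the per-node success rates below are made up purely for illustration), the completed ingress stays skewed no matter how long you run it.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Uniform selection can still produce skewed ingress if a consistent
// downstream factor (here, a hypothetical per-node upload success
// rate) decides which selections turn into completed uploads.
func main() {
	rng := rand.New(rand.NewSource(1))
	successRate := []float64{0.95, 0.95, 0.40} // made up; node 2 loses more races
	selected := make([]int, 3)
	completed := make([]int, 3)
	for i := 0; i < 1_000_000; i++ {
		n := rng.Intn(3) // perfectly uniform: ~333k picks each
		selected[n]++
		if rng.Float64() < successRate[n] {
			completed[n]++ // only completed uploads count as ingress
		}
	}
	fmt.Println("selected: ", selected)
	fmt.Println("completed:", completed)
}
```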
Footnote: I generally strongly despise it when people post unadulterated AI vomit on forums. In this case, I used AI as a tool to analyze the satellite code, run experiments, and summarize findings. I stand by the writing and take full responsibility for it, as if I wrote it myself.
What do you mean by “ingress”?
Do you actually mean the ingress that is displayed in the first graph of the dashboard? Or do you mean the change in stored data from month to month?
The neighbour is a problematic variable; you don’t know what they are doing.
The best way to test this is to have a subnet with no neighbours and run both nodes on it. You control both, the conditions are the same, so you eliminate any unknowns. Like me…
I run 18 like this. I will take a look at the ingress in the first graph for this month.
The underperformers are on a good internet connection, but there is a lot of other activity on it (games, etc.).
Another observation: 2 nodes on a subnet get about 7.5% more total ingress than 1 node does.
I have 1 pair of nodes on the same disk. It used to be predictable. Lately, let’s say this year, they can have very different ingress…
No, neither is full…
Looking at the traffic on my nodes right now: if there were truly no difference between these nodes, daily ingress rates should differ by less than 10%, maybe up to 20% in corner cases (e.g., high variance in file sizes combined with a correlation between file size and success rate; the latter becomes a factor especially if you’re far from customers). The law of large numbers, and pretty much any classical probabilistic bound, shows that the variance of a daily mean is quite small.
Not doing any formal statistical tests here, but a 3× difference is well out of the norm.
It would be even better to count just the number of attempted uploads, not the total size of completed uploads; then the relative standard deviation should be less than 1%.
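For a rough sense of scale (assuming upload counts are roughly Poisson, i.e. independent selections arriving at a steady rate; the daily counts below are made-up examples), the relative standard deviation of a daily count with mean N is 1/sqrt(N):

```go
package main

import (
	"fmt"
	"math"
)

// For a Poisson count with mean n, the standard deviation is sqrt(n),
// so the relative day-to-day noise in the count is 1/sqrt(n).
func main() {
	for _, n := range []float64{1_000, 10_000, 100_000} {
		fmt.Printf("N = %7.0f uploads/day -> relative SD ~ %.2f%%\n",
			n, 100/math.Sqrt(n))
	}
}
```

At 10,000 or more uploads per day that noise is around 1% or less, so a persistent 3x gap is far outside what sampling variation can produce.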