Bandwidth utilization comparison thread

Interesting! Why is it that an unvetted node gets more ingress from us2?


Since us2 joined very late, most nodes got vetted at around the same time. That resulted in low ingress for all nodes, but fast vetting, because there was not much data on the satellite and therefore more audits per node.
All unvetted nodes share 5% of the global ingress while all vetted nodes share 95%. Since there are lots of vetted nodes now but almost no unvetted ones, my unvetted node gets a big slice of that 5%, resulting in a much higher effective rate than the share each of my other 4 vetted nodes gets of the 95%.
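A quick back-of-envelope sketch of why that split favors unvetted nodes when there are few of them (the node counts and daily ingress below are made up for illustration):

```python
# Rough model of the claimed 95/5 global ingress split between pools.
def per_node_share(total_ingress_gb, vetted_nodes, unvetted_nodes,
                   vetted_fraction=0.95):
    """Return (per-vetted-node, per-unvetted-node) daily ingress in GB."""
    vetted = total_ingress_gb * vetted_fraction / vetted_nodes
    unvetted = total_ingress_gb * (1 - vetted_fraction) / unvetted_nodes
    return vetted, unvetted

# e.g. 10,000 vetted nodes but only 50 unvetted ones on a young satellite:
vetted_gb, unvetted_gb = per_node_share(1000, 10_000, 50)
print(vetted_gb, unvetted_gb)  # 0.095 GB vs 1.0 GB -> unvetted gets ~10x more
```

The fewer unvetted nodes there are, the bigger each one's slice of the 5% pool, which is exactly the effect described above.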

Edit: already vetted lol… Took only 6.5 hours to get 100 audits on us2… There’s really not much data on that satellite.


That almost sounds like a vulnerability… lol
Are you sure that’s how it works? I thought each node gets randomly selected for a piece, with higher weight given to higher-reputation nodes, but unvetted nodes get selected 5% as often as vetted ones.
Splitting global piece distribution 95:5 would be bad for exactly the reasons you described.

By the way, it’s probably been discussed before, but how do you generate those graphs with traffic split across the satellites? Or rather where is the data pulled from?


Most folks are using Grafana with Prometheus. There are several threads going back to when the dashboard was first released by @greener; this thread looks to be a more recent one that gives a quick walkthrough and is intended to be stickied (we could probably use a community wiki actually, maybe even hosted by Storj?):


Assuming others are seeing the same uptick in ingress that started ~5-6 days ago, and then the uptick in egress that started ~1 day ago?

7-day time frame

Also, still seeing the roller coaster in the storage I/O?

3-hour time frame


Yes, that is how it works, but it’s not a vulnerability: it ensures unvetted nodes get a decent amount of traffic so they get vetted in a reasonable time. Reputation currently has no effect on node selection at all.
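A hypothetical sketch of the selection described above: roughly 5% of pieces go to the unvetted pool and the rest to the vetted pool, with uniform random choice within each pool (reputation plays no role). This is an illustration, not the actual satellite code:

```python
import random

def select_node(vetted, unvetted, unvetted_fraction=0.05, rng=random):
    """Pick a node for one piece: ~5% chance to use the unvetted pool,
    then a uniform random choice within the chosen pool."""
    if unvetted and rng.random() < unvetted_fraction:
        return rng.choice(unvetted)
    return rng.choice(vetted)
```

Note that with this scheme, one lone unvetted node would receive the whole 5% of pieces by itself, regardless of how many vetted nodes exist.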

Let me get you an official answer (only quoted a small part):

As for the graphs, Doom already pointed to the right topic: How to monitor all nodes in your LAN using prometheus + grafana [linux using docker]


The 95% to 5% could simply be per upload… remember the files are split into many, many pieces; it’s very possible that on average 5% of the pieces from each file are allocated to vetting nodes…

Thus the vetting nodes would always hold real, redundant data, which benefits the network long term, instead of just using random test data or whatever.

Or that’s how I would do it…


last 9 days on my largest node

Thanks! Will need to put some work into upgrading my monitoring!

Ah I see, thanks for the correction. I thought it worked the other way around to avoid this situation. That really is an unbalanced approach, especially with no constant supply of new nodes, just like you described. Great for new nodes, though! :grin:

maybe this post fits in here as well:

SNOs can enter their earnings per TB stored on a monthly basis. So one can compare one’s earnings (= egress) to other nodes.
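The normalization itself is trivial: divide monthly earnings by TB stored so nodes of different sizes become comparable. A minimal sketch (the figures below are made up):

```python
def earnings_per_tb(monthly_earnings_usd, stored_tb):
    """Normalize a node's monthly earnings to $/TB stored."""
    return monthly_earnings_usd / stored_tb

# e.g. a small node earning $1.35 on 0.45 TB stored:
print(earnings_per_tb(1.35, 0.45))  # -> ~3.0 $/TB
```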


SNO log - stardate 98763.45 - local 2021-03-01:
The first few hours of today felt as though it was just going to be another day, and then it hit. At first, it appeared like just another scrub and things would get better, but as they say- things get a lot worse before they get better. …


I am vetted on 4 of 5 satellites (not eu-north) and am seeing 16-17GB ingress per day. Is this about what everyone else is receiving?

Yep, I think my node got 18 on the 3rd, which should be just about the max…
Of course it changes from day to day and week to week, and can be much slower or even much higher, though that’s not been too common lately.

Ingress is basically the same across the board for most nodes, afaik… there may be a little deviation, like 5-10%, but often it’s even less than that.

During stress tests last year we got 300GB a day for extended periods.
So if the traffic is there, we can go pretty damn high…


Thank you, I just have no way to know if it is normal :slight_smile: I have 0.45TB stored (Dec 2020 node, 12.5TB disk, 80/20Mb line, UK) and the egress for me is generally 0.2% per day of data stored. Ha… 300GB a day would fill my 12.1TB of space in just over a month!!! Cheers
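The arithmetic behind those two figures checks out (all inputs are taken from the post above):

```python
def days_to_fill(capacity_gb, ingress_gb_per_day):
    """Days to fill the given capacity at a steady ingress rate."""
    return capacity_gb / ingress_gb_per_day

print(days_to_fill(12_100, 300))  # ~40 days -> "just over a month"

# Daily egress at 0.2% of 0.45 TB stored:
print(0.45 * 1000 * 0.002)  # ~0.9 GB/day
```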


those were some crazy times


It is so disheartening when deletes are higher than ingest. Is there a support group? My name is BadgerStork and my node is getting smaller…


My nodes also lost ~150GB in the last week. But as far as I can see, my smaller nodes are growing while my older nodes are getting smaller, so the deletes seem to affect only old data. I’m sure it will be over some day :slight_smile:

Yes, deletes are bad these days… :confounded:

Hopefully it is not paying customers but test data.

Typical backup scenario: upload new snapshot, delete oldest snapshot → no net ingress (or maybe minimal growth due to growing backup size).


Seeing the same: getting 4GB a day in but 2GB a day in deletes. At this rate it’s around 5 years to fill 8TB lol
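As a rough formula, what matters is the net rate (ingress minus deletes) against the remaining free space. The free-space figure below is made up, since the post doesn’t say how full the node already is:

```python
def days_until_full(remaining_gb, ingress_gb_per_day, deletes_gb_per_day):
    """Days until the remaining space fills at the current net ingress rate."""
    net = ingress_gb_per_day - deletes_gb_per_day
    if net <= 0:
        return float("inf")  # deletes outpace ingress: the node never fills
    return remaining_gb / net

# e.g. ~3.65 TB still free at 4 GB/day in, 2 GB/day deleted:
print(days_until_full(3650, 4, 2) / 365)  # -> 5.0 years
```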
