Bandwidth utilization comparison thread

As promised, the dashboard stats for August 11:


Cacti graph:

I chose the time range from 03:00 to 03:00 because the Cacti graph is in local time (GMT+3), while the storj dashboard IIRC uses GMT.
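Lining up a 03:00–03:00 local window with UTC day boundaries can be sanity-checked in a few lines; a minimal sketch (the dates and the GMT+3 offset are just the ones from this post):

```python
from datetime import datetime, timedelta, timezone

# Cacti window in local time (GMT+3), per the post above
local_tz = timezone(timedelta(hours=3))
start_local = datetime(2020, 8, 11, 3, 0, tzinfo=local_tz)
end_local = datetime(2020, 8, 12, 3, 0, tzinfo=local_tz)

# Convert to UTC to line up with the storj dashboard's day boundary
start_utc = start_local.astimezone(timezone.utc)
end_utc = end_local.astimezone(timezone.utc)

# both endpoints land on UTC midnight, matching the dashboard day
print(start_utc, "->", end_utc)
```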


One of my oldest nodes is not seeing good egress. This node was full during April (the storagenode considered it full even though it had free space), so I guess data is downloaded following a date-range rule (which doesn't seem realistic).
if you want good egress each month you need to keep adding TBs to your nodes :joy:


you don’t really need to do two screenshots, just the one of the dashboard with egress and ingress combined and the date marked so it shows the info bubble.

it’s weird tho…
i think it’s been established how accurate ingress can be, usually within 1% deviation between nodes… so long as nodes are running without limitations…

and yet i get this… i am running other services, but they shouldn’t really affect my node, they might eat some cpu time and some bandwidth… but nothing of note… yet it seems like i do get less ingress… and there is no sign of me lacking potential ingress which got cancelled… in any significant amount.

so why is it you are at 46.3GB ingress and i am at 41.4GB?
that’s just weird… ofc we might be seeing the granularity of the satellite’s data distribution, a tenth of a GB is most likely at the lower end of what that was designed for…

i don’t think so… it’s a gamble… some data will be long term payoff data and other data will be short term payoff… while other again may be stable continuous payoff and yet other might be no payoff…

depending on which period the node’s data is from, you get a certain distribution of the previously mentioned data models.

yes, keeping adding TB will avg the numbers out, but the best egress can come from any period, it should just be a gamble… also you cannot know if your currently stored data has a high egress-to-stored ratio… you just know it hasn’t had it yet… most egress is ‰ per day
lets say 5‰ for easy math… and then lets grossly assume there are 100 days in 3 months… so 50% of the stored data downloaded in a 3-month period…
so really, if the data is dead for 6 months and then all of it is downloaded, that is roughly equal to 5‰ every day for 6 months.
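The ‰-per-day reasoning above can be sketched numerically; the 5‰ rate and the day counts are the rough numbers from the post, and the flat-rate model deliberately ignores data growth and churn:

```python
# Rough sketch of the ‰-per-day egress argument: at a steady egress
# rate of r ‰ of stored data per day, the fraction downloaded over
# d days is simply r/1000 * d (ignoring growth and deletes).
def egress_fraction(rate_permille_per_day: float, days: int) -> float:
    """Fraction of stored data egressed over `days` at a flat daily rate."""
    return rate_permille_per_day / 1000 * days

# 5 permille/day over ~100 days is about 50% of stored data
print(egress_fraction(5, 100))   # ~0.5
# the same rate for ~6 months (~180 days) is about 90%, roughly
# comparable to "dead for 6 months, then all downloaded at once"
print(egress_fraction(5, 180))   # ~0.9
```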

so it can be really difficult to tell which data has the highest egress… because the present view is only a small part of the picture… ofc only things like backups might actually read all the data…

Seeing a nice up-trend in egress over the past couple of days.
Node 1:

Node 2:

Node 1:

Node 2:


i wouldn’t mind trading a bit of egress for some ingress if it wasn’t like a straight up 1 to 1 or worse.

but yeah, egress is looking quite strong atm… here i was thinking it was my node that had finally gotten a bit more age… never really seen egress like this before, been pissing about at like 500 kB/s max for most of my node’s life.


My egress is creeping up a little, ingress rose and now it looks like it has tipped and is heading down again…

| Date       | Ingress [GB] | Egress [GB] | Stored [TB] | Egress [‰ of stored/day] | Egress [kB/s] | Egress [kB/s per TB] |
|------------|--------------|-------------|-------------|--------------------------|---------------|----------------------|
| 02.08.2020 | 13.3  | 4.32 | 1.97 | 2.19 | 50.03  | 25.39 |
| 03.08.2020 | 11.62 | 3.27 | 1.98 | 1.65 | 37.86  | 19.12 |
| 04.08.2020 | 19.41 | 5.96 | 1.99 | 2.99 | 68.98  | 34.66 |
| 05.08.2020 | 32.14 | 7.29 | 2.03 | 3.59 | 84.39  | 41.57 |
| 06.08.2020 | 43.68 | 6.71 | 2.06 | 3.26 | 77.66  | 37.7  |
| 07.08.2020 | 49.5  | 7.8  | 2.1  | 3.71 | 90.23  | 42.97 |
| 08.08.2020 | 47.37 | 4.52 | 2.15 | 2.1  | 52.29  | 24.32 |
| 09.08.2020 | 38.48 | 7.68 | 2.17 | 3.54 | 88.91  | 40.97 |
| 10.08.2020 | 39.98 | 8.38 | 2.21 | 3.79 | 96.98  | 43.88 |
| 11.08.2020 | 40.28 | 8.7  | 2.24 | 3.88 | 100.65 | 44.93 |
| 12.08.2020 | 36.84 | 9.64 | 2.27 | 4.25 | 111.58 | 49.15 |
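For what it's worth, the derived columns in the table can be recomputed from the raw GB/day and TB stored; this is my own reconstruction of the arithmetic, and it reproduces the table's values to within rounding of the inputs:

```python
# Reconstruction of the table's derived columns (assumed units:
# egress in GB per day, stored in TB, 1 GB = 1e6 kB, 1 day = 86400 s).
def derived(egress_gb_day: float, stored_tb: float):
    permille = egress_gb_day / stored_tb    # GB per TB stored = permille/day
    kb_s = egress_gb_day * 1e6 / 86400      # average transfer rate in kB/s
    kb_s_per_tb = kb_s / stored_tb          # rate normalized by stored data
    return round(permille, 2), round(kb_s, 2), round(kb_s_per_tb, 2)

# last row of the table: 9.64 GB egress on 2.27 TB stored
print(derived(9.64, 2.27))  # (4.25, 111.57, 49.15)
```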

and you can copy-paste a chart from excel and it treats it like a picture - never knew that trick! :open_mouth:


So far I like the egress this month

The biggest contributor is saltlake:

However, it seems that there is a lot of GET_REPAIR traffic, so, I guess the payment will be lower than I would expect for that traffic:

I wonder if this is a test or a lot of nodes are actually getting disqualified.


well many people seem to have the horrible idea that 99.9% uptime and 99.9% data reliability can be reached with consumer-grade setups…

also the network got wiped, i suppose it might take some time to get into its stride…

but it kinda looks like a test… maybe they found out it was a cheap way to do traffic testing… just up the spare pieces for each dataset… lol or they are scared they might lose data because they are realizing just how many people have like 8 internet connections and just 1 or 2 servers hosting nodes.

i mean if one does the math on that it gets scary quite quickly… like say if 5% of people had 10 connections each, then they would equal about another 50%… making the total something like 150%
so 1/3 of all data could be on 5% of the SNOs’ servers…
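The back-of-envelope math above in code form; all the percentages are the hypothetical ones from the post, not measured network stats:

```python
# If /24 subnets get equal shares, an operator's share scales with
# how many subnets they have. Hypothetical split from the post:
single = 0.95 * 1    # 95% of operators with one /24 connection each
multi = 0.05 * 10    # 5% of operators with ten connections each

# share of all data sitting on the multi-connection operators
share = multi / (single + multi)
print(round(share, 2))  # ~0.34, i.e. roughly a third of all data
```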

and because of the IP/24 data distribution system there would be no way to avoid or correct the issue… and they cannot shut them down because if they did, they might crash the network…

so the SNOs with lots of internet connections would essentially be holding the tardigrade network for ransom, without it being really official… but that’s just my horrible way of looking at it…

i hope it’s not as bad a problem as i think it might be…

I have to admit, the repair egress/ingress looks suspiciously high… we got more repair than customer+testdata ingress

why do you need 8-10 internet connections?

with the right /24 ip addresses one will get more data…
each address adds another multiple of the base…

so 8 internet connections will or can get 8 times the ingress and 8 times the egress… and thus if one doesn’t care about anything other than being paid… then one can get 8 times the profit on the same server… at least until it’s full…

like say you get 2 TB ingress per month, then with 8 internet connections that could be 16 TB ingress… so it adds up quick…
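The /24 behaviour being discussed can be illustrated by grouping node IPs by their /24 subnet; the addresses below are made-up documentation ranges, and the grouping is a sketch of the selection rule, not the satellite's actual code:

```python
import ipaddress
from collections import defaultdict

# Sketch: node selection treats each /24 subnet as a single "slot",
# so nodes behind the same subnet share one slot, while each distinct
# subnet gets its own slot (and thus its own share of ingress).
def slots_by_subnet(ips):
    groups = defaultdict(list)
    for ip in ips:
        net = ipaddress.ip_network(f"{ip}/24", strict=False)
        groups[net].append(ip)
    return groups

# two nodes in one /24, one node in another -> 2 slots, not 3
nodes = ["203.0.113.5", "203.0.113.77", "198.51.100.9"]
print(len(slots_by_subnet(nodes)))  # 2
```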

i’m confident storj will come after the people doing this, because they are essentially risking the data they are paid to keep safe… so it doesn’t really make sense…
besides, it’s only an advantage so long as nodes aren’t full
and with test data… as with customer data, they would not have much of an advantage i think

that’s not the idea of decentralized storage. If you just wanna trick the /24 rule, go buy a VPN with port forwarding, that’s all. They have more bandwidth than needed for storj.


yeah that’s a good point… but still, people do this kind of stuff…

I am currently focused in this location. 374 nodes.


(map capture extracted from


That map uses an estimation of location based on IP, which is notoriously unreliable. Sometimes you have all IPs of a single ISP show up on one location. That’s likely what you’re seeing here.


Good day of egress again
Node 1:

Node 2:


Thanks for clarifying obvious things for the masses. I selected this location for one reason.
A lookup of these IPs gave me further info :kissing_heart:

if one looks at France one also gets the effect that most of the nodes are all collected in Paris, which seems unlikely; it’s most likely just that their ISPs use certain methods to relay the data in their internal networks, and the location data is lost when it hits the French ISPs, which unsurprisingly are generally located near Paris… :smiley:

i will say tho… aside from that, it seems pretty accurate… and we rule… there are more dots on there than i would have expected… ofc back when i started there were only like 1500 nodes listed… Europe is really well covered… America and Canada could use a bit more… and Asia and Australia… looks like storj totally forgot about them…

What interesting things did you discover? Share your knowledge…