Bandwidth utilization comparison thread

15Aug2020:
Node 1:

Node 2:

1 Like

Still seeing that super-high repair egress compared to June/July on the slightly older node, which is also a good bit fuller. It's still odd that August has had so much repair egress flowing.

Fiber node (4mo)

Coax node (3mo)

1 Like

Looks like egress is increasing significantly while ingress is still very low. I'm wondering when they'll resume testing.

2 Likes

17Aug2020:
Node 1:

Node 2:

3 Likes

Are we still doing this?

18Aug2020:
Node 1:

Node 2:

2 Likes

Seems like the thread is starting to die down.
We did prove that ingress is almost exactly the same for all nodes. Egress, on the other hand, will depend heavily on the age of the node.

2 Likes

Not sure age is the only factor. I think the more data your node holds, the higher the chance of getting egress.
You can have a six-month-old node with 2 TB of capacity that is completely full, but I doubt it gets more egress than a three-month-old 6 TB node. I would hypothesise that it can sometimes have higher egress, but the higher capacity will prevail? :slight_smile:

My data for this month: egress is creeping up and is now at levels that are quite okay-ish on a per-TB basis,
although the small amount of stored data is not helping…
What is interesting is that log monitoring shows a handful of errors stating that some files are not found… (a little unnerving, I would say)
Another observation: as my secondary node gets more and more vetted, ingress on the primary node gets lower (the secondary node is evening out the ingress more and more).
Summing up ingress and egress for both nodes gives results similar to what you guys posted above.
I will try to nerf the secondary node's free space to see if ingress can be rebalanced.

(The blue line is ingress, which should be flat or creeping up, not going down; the red line is egress.)

Date         Ingress [GB]  Egress [GB]  Stored [TB]  Egress [‰ of stored]  Egress [kB/s]  Egress [kB/s per TB]
02.08.2020 13.3 4.32 1.97 2.19 50.03 25.39
03.08.2020 11.62 3.27 1.98 1.65 37.86 19.12
04.08.2020 19.41 5.96 1.99 2.99 68.98 34.66
05.08.2020 32.14 7.29 2.03 3.59 84.39 41.57
06.08.2020 43.68 6.71 2.06 3.26 77.66 37.7
07.08.2020 49.5 7.8 2.1 3.71 90.23 42.97
08.08.2020 47.37 4.52 2.15 2.1 52.29 24.32
09.08.2020 38.48 7.68 2.17 3.54 88.91 40.97
10.08.2020 39.98 8.38 2.21 3.79 96.98 43.88
11.08.2020 40.28 8.7 2.24 3.88 100.65 44.93
12.08.2020 36.84 9.64 2.27 4.25 111.58 49.15
13.08.2020 35.6 11.84 2.3 5.15 137.06 59.59
14.08.2020 20.09 6.77 2.32 2.92 78.39 33.79
15.08.2020 30.71 15.2 2.34 6.5 175.98 75.2
16.08.2020 27.06 16.17 2.37 6.82 187.18 78.98
17.08.2020 22.69 15.24 2.38 6.4 176.41 74.12
18.08.2020 20.82 16.39 2.4 6.83 189.71 79.04
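
For anyone who wants to reproduce the derived columns, here is a minimal sketch (plain Python, nothing node-specific) of how the per-mille and per-TB rates above follow from the raw GB-per-day and TB-stored numbers; small differences from the table come from rounding of the inputs.

```python
# Minimal sketch: deriving the per-mille and per-TB egress columns from the
# raw daily numbers (egress in GB per day, stored data in TB).

SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def derived_metrics(egress_gb_per_day: float, stored_tb: float) -> dict:
    """Express daily egress relative to stored data and as a sustained rate."""
    egress_permille = egress_gb_per_day / stored_tb          # GB out per 1000 GB stored
    egress_kbps = egress_gb_per_day * 1e6 / SECONDS_PER_DAY  # GB/day -> kB/s
    return {
        "egress_permille": round(egress_permille, 2),
        "egress_kBps": round(egress_kbps, 2),
        "egress_kBps_per_TB": round(egress_kbps / stored_tb, 2),
    }

# Example: the 02.08.2020 row (4.32 GB egress, 1.97 TB stored)
print(derived_metrics(4.32, 1.97))
# -> {'egress_permille': 2.19, 'egress_kBps': 50.0, 'egress_kBps_per_TB': 25.38}
```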
1 Like

Yes, of course. I should have clarified that I was talking about nodes with the same amount of data stored.

I started a second node almost two months ago, and now that it's fully vetted (except for stefan-benten) the ingress is almost perfectly split between the two nodes (the most I saw was a 7% difference).
Also, my egress per-mille is almost exactly the same as yours over this period of time.

2 Likes

I think the only path forward is to have some script extract and compile the data for, say, one month at a time, so it's easy for each of us to post, share and compare.

Seeing some pretty good numbers these days, though.

4 Likes

WOW!!! 155 GB of egress in ONE day!!! OMG!!! :star_struck: :partying_face:

1 Like

On Linux it can be done through the storagenode API and curl; there are some curl commands posted on the forum already, but I am not that good at scripting.
On Windows it could probably be done in Python, Java, C#…
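
For anyone who wants to try it, here is a rough sketch in Python of pulling the daily numbers from the node's local dashboard API. The port (14002), the endpoint path, and the field names (bandwidthDaily, ingress, egress, intervalStart) are assumptions based on what has been posted on the forum; check them against your own node and version.

```python
# Rough sketch: pulling daily bandwidth from the storagenode dashboard API.
# ASSUMPTIONS: the dashboard listens on localhost:14002 and the JSON fields
# are named bandwidthDaily / ingress / egress / intervalStart. Verify against
# your node version before relying on this.
import json
import urllib.request

NODE_API = "http://localhost:14002/api/sno/satellites"  # assumed endpoint

def print_daily_bandwidth(api_url: str = NODE_API) -> None:
    with urllib.request.urlopen(api_url, timeout=10) as resp:
        data = json.load(resp)
    for day in data.get("bandwidthDaily") or []:
        # Sum whatever ingress/egress sub-fields (usage, repair, audit, ...) are present.
        ingress = sum((day.get("ingress") or {}).values())
        egress = sum((day.get("egress") or {}).values())
        print(f"{day.get('intervalStart', '?')}: "
              f"ingress {ingress / 1e9:.2f} GB, egress {egress / 1e9:.2f} GB")

if __name__ == "__main__":
    print_daily_bandwidth()
```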

1 Like

Yeah, I might do a Linux version, but like you, I'm also not very good at scripting, so it's kind of taken a backseat for now. Eventually I'll get around to it if nobody else ends up getting it done.
We've been having a bit of a heat wave, so I haven't been around that much…

3 Likes

And yet another record day for my node, in egress obviously…
Seems like we might be very short on ingress this month. Anyone know why?

2 Likes

I have my theories, but I’d rather not speculate out loud. I’m just happy we’re seeing so much egress.

4 Likes

…repairing the network before increasing its size? :wink:

3 Likes

Let us participate :wink:

1 Like

not that it really answers your question…

1 Like

Hi!
I'm back… LOL
I just had to resync my RAID for 30 hours because a UPS test shut down my computer unexpectedly. Fortunately the node was offline; nevertheless it took 30 hours to resync the RAID…
I tried to run the node during the resync, but the system was too unresponsive, resulting in errors that the DBs are locked…
Hope it will get back on track… but I'm noticing quite low traffic right now. Or is that an effect of my computer having been offline?

1 Like

Egress at about 5 Mbps for a 7 TB node, ingress almost non-existent.

2 Likes

Since I'm using my total server bandwidth graph my numbers aren't crazy accurate, but I get around 10 Mbit of egress on 14 TB. As for ingress… well, what does 15-16 GB in a day equal in kB/s? :smiley:
Nada, zero, zip.
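
For scale, the rough conversion of that daily ingress into a sustained rate (just arithmetic, assuming 86,400 seconds per day):

```python
# Back-of-the-envelope: 15-16 GB per day as a sustained transfer rate.
gb_per_day = 15.5
kb_per_s = gb_per_day * 1e6 / 86400   # ~179 kB/s
mbit_per_s = kb_per_s * 8 / 1000      # ~1.4 Mbit/s
print(f"{gb_per_day} GB/day = {kb_per_s:.0f} kB/s = {mbit_per_s:.2f} Mbit/s")
```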

I did start spinning up my VMs a few days ago, which seemed to affect the egress, but it's never easy to tell. I don't seem to get many cancelled transfers, so at least in theory it shouldn't…

I tried to measure my bandwidth but didn't get my usual numbers, and I'm only using a few tens of megabits, not more than 100 Mbit at least. Of course the traffic could be going somewhere else on the network. I should finish taking control of the network; of course, then the first time my server is down I'll never hear the end of it… lol

Well, the egress waning could mean they are getting close to being done reinforcing the data integrity, so we should get to see some good test-data ingress soon, since there wasn't time for it this entire month…

Weird that the town hall banner is gone, in my browser at least…
I asked whether there would be some clarification on whether we are allowed to have multiple IP addresses and how exactly the rules for that will work… :smiley: Because, well, the easiest way to make better profits is simply to get another IP, which is… well, terrifying and cool at the same time. Seems very pro to have two internet connections xD

@shoofar But yeah, you shouldn't have any long-term effects from a little downtime, not now and not in the future. Downtime happens; we just have to keep a fairly good HA level.
Something like a day of downtime a year, but that requirement isn't active yet… and even going past it one year shouldn't be too bad if you can keep it up the next few years, or the previous ones. It's about keeping a stable system; accidents will happen, and some of the good or even the best nodes will most likely end up having a week of downtime without any preventive measure they could have taken…

you can only predict so much…

2 Likes