Opinion on current traffic?

I increased node storage a bit to see what is going on. Egress was OK last month, so it might be worth it.
I am currently getting around 30 GB of ingress per day, about half of it from Saltlake. So it doesn’t look like there is excessive testing going on right now.

I see a lot of concurrent uploads. I’ve got storage2.max-concurrent-requests set to 500 and the limit is still being hit constantly. But… that doesn’t mean there’s a lot of traffic; these connections are very slow.
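For reference, the setting lives in the node’s config.yaml; here is a quick way to check it, sketched with a made-up path (as far as I know the default is 0, i.e. unlimited):

# check the current limit in config.yaml (the path here is made up; adjust to your setup)
grep 'storage2.max-concurrent-requests' /mnt/storagenode/config.yaml
# illustrative output: storage2.max-concurrent-requests: 500
# edit the value and restart the node for a change to take effect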

I wonder how resistant the network is to DoS via slow connections?

my node has no concurrency limit, and I’m not seeing any major issues with it. Averaging 18 Mbit up and 4 Mbit down over the last 24 hrs.

i have had little to no ingress for like a month…
12-18 GB daily ingress for this month…

and no limits on anything really… 99.9% success rates and 400 Mbit bandwidth, unlimited concurrent requests.
maybe it’s to do with geographical location…

now that i think about it and look closer at it… yeah, 18 Mbit egress and 4 Mbit ingress sounds about right… i got them mixed around when i was looking at it… :smiley:

One node (5.1/9TB stored) is seeing >30GB/d egress, but only about 1-2GB/d repair. 19-36GB/d ingress.
Other node (3.1/9TB stored) is seeing ~20GB/d egress, 10% being repair. 20-38GB/d ingress.

you can’t have those numbers though… 20-38 GB per day means you’re either looking at a time frame beyond the current month, or you’re reading something wrong…

ingress is nearly perfectly evenly distributed between the global /24 subnets…
my ingress per day should be within a 1% margin of error of your ingress per day, if your system is running under optimal conditions.
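rough back-of-the-envelope of what i mean, with a made-up total, since as i understand it the satellites treat a /24 as one unit when selecting nodes:

# if one /24 receives ~28 GB/day of ingress in total (made-up figure),
# N nodes behind that /24 each see roughly 28/N GB/day
echo "scale=1; 28/1" | bc    # alone on the subnet      -> 28.0 GB/day
echo "scale=1; 28/2" | bc    # one neighbour on the /24 -> 14.0 GB/day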

what time frame are we looking at? because if those are your numbers for this month, then something in the fundamental mechanics of the data distribution has changed since i investigated it some 2 months ago.

what were your numbers for yesterday?

ingress only graph
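if you’d rather pull the raw numbers than read them off a graph, the web dashboard’s API can dump them; just a sketch, assuming the default dashboard port 14002 and that the bandwidthDaily field names haven’t changed in your version:

# yesterday's ingress via the node's dashboard API (jq required);
# the last bandwidthDaily entry is today's partial day, so take the one before it
curl -s http://localhost:14002/api/sno/satellites |
  jq '.bandwidthDaily[-2] | {day: .intervalStart, ingress_GB: ((.ingress.usage + .ingress.repair) / 1e9)}'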

 ./successrate.sh storagenode-2020-09-08.log
========== AUDIT ==============
Critically failed:     0
Critical Fail Rate:    0.000%
Recoverable failed:    0
Recoverable Fail Rate: 0.000%
Successful:            1085
Success Rate:          100.000%
========== DOWNLOAD ===========
Failed:                0
Fail Rate:             0.000%
Canceled:              16
Cancel Rate:           0.025%
Successful:            65129
Success Rate:          99.975%
========== UPLOAD =============
Rejected:              0
Acceptance Rate:       100.000%
---------- accepted -----------
Failed:                0
Fail Rate:             0.000%
Canceled:              10
Cancel Rate:           0.039%
Successful:            25540
Success Rate:          99.961%
========== REPAIR DOWNLOAD ====
Failed:                0
Fail Rate:             0.000%
Canceled:              0
Cancel Rate:           0.000%
Successful:            25903
Success Rate:          100.000%
========== REPAIR UPLOAD ======
Failed:                0
Fail Rate:             0.000%
Canceled:              0
Cancel Rate:           0.000%
Successful:            6194
Success Rate:          100.000%
========== DELETE =============
Failed:                0
Fail Rate:             0.000%
Successful:            42294
Success Rate:          100.000%

Guess it is not completely evenly distributed at the moment; I got 29 GB of ingress yesterday on 2 nodes on the same host:
(graphs for both nodes)

@SGC give me a bit to build the detailed stats.

T1450-0000 (ingress/egress in GB)
date          ingress std   ingress repair   egress std   egress repair   daily space
2020-09-01    17.53         18.86            34.58        1.4             137.47TBh
2020-09-02    12.36         7.19             35.57        0.814           131.37TBh
2020-09-03    18.22         9.93             33.79        1.06            132.6TBh
2020-09-04    16.08         10.74            31.93        1.31            138.5TBh
2020-09-05    12.49         19               32.46        1.78            138.14TBh
2020-09-06    11.25         14.38            33.71        0.946           126.73TBh
2020-09-07    11.29         25.2             32.13        3.11            120.14TBh
2020-09-08    10.74         18.97            32.92        3.25            145.39TBh
2020-09-09    6.1           10.53            20.9         1.85            39.49TBh
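If you want the month-to-date totals from a table like that without adding them up by hand, something along these lines does it (a sketch; node1_stats.txt is just the table above pasted into a file):

# add up the standard and repair ingress columns of the pasted table
awk '/^2020-/ {std += $2; rep += $3}
     END {printf "standard %.1f GB, repair %.1f GB, total %.1f GB\n", std, rep, std + rep}' node1_stats.txt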

(placeholder node 2)

@kevink It’s actually been oddly imbalanced between the two nodes I have, because they differ not only by about a month in age, but also in amount stored, connection type, and baseline latency.

well fork me sideways… i’m at exactly 50% of your numbers…

meaning there are two nodes on my allocated ISP subnet now…

what am i going to do now… i’ve only got 1 IP and was thinking about getting a few more so i could actually start using some storage capacity…

time to spin up that relay VPS so i can get into tons of subnets…
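in case anyone wonders what that would look like: basically a cheap VPS in a different /24 forwarding the node port back home over a tunnel. very rough sketch, every address and the interface name are made up, and the node’s external address would also have to point at the VPS:

# on the VPS: forward the node's default port 28967 through a wireguard tunnel (wg0)
# to the node at home (10.0.0.2 here; all values are made up)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING  -p tcp --dport 28967 -j DNAT --to-destination 10.0.0.2:28967
iptables -t nat -A POSTROUTING -o wg0 -p tcp --dport 28967 -j MASQUERADE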

The chances of this happening are vanishingly small… Seems like quite a conclusion to jump to.


how else would you explain it…

we know from the bandwidth comparison thread that each subnet gets a very accurately distributed amount of ingress… i don’t see how else one would explain it… also, this is not a huge country, there aren’t too many ISPs, and on top of that most of the fiber companies are basically all the same company…

so it doesn’t really seem that far-fetched… i didn’t think it could happen… but it looks, at least to me, like it did…

Would be nice to have a "nodes in your subnet" counter in the node dashboard, like was already discussed some time ago.


This was during a time of higher and very well distributed ingress. At the moment it is very erratic. Sometimes my home node gets half the ingress of my proxied node, sometimes it gets twice as much. So unless both networks have a constantly changing number of online nodes, the ingress is not predictable anymore.
Edit: at least the ingress rate is erratic. The overall ingress per day is a bit more stable, but even those figures sometimes differ by 25%.


did a proper check of the numbers… i don’t know how else to interpret this other than that i’m sharing the subnet with another node… it’s 1/2 all the way down in 9 out of 10 cases… and like you say… sometimes there will be some adjustments from the satellites to keep the data distribution corrected… so i’m going to call it 50% of regular ingress… sure doesn’t seem random
and i have been pondering why my ingress has been low for a month…

day (ingress, GB)
1st = 17.7
2nd = 9.47
3rd = 13.48
4th = 13.68
5th = 15.5
6th = 12.60
7th = 18.15
8th = 14.4
9th = 11.82
tell me those numbers for each day aren’t pretty much half of what you are seeing

day    node1 (GB)   node2 (GB, proxied)
1st    35
2nd    18
3rd    28
4th    27
5th    32           24
6th    26           26
7th    36           37
8th    22           29
9th    23           24

It’s indeed most of the time about half of what my nodes get… so even though ingress is more random now, it sure looks like you’ve got a 2nd node on your subnet.
Guess you can only proxy, or go to war by spinning up more nodes.
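For what it’s worth, the day-by-day ratios on the overlapping days (5th to 9th, your numbers against node1) back that up; a quick sketch using the figures from the two lists above:

# SGC's ingress divided by node1's ingress, 5th through 9th
for pair in 15.5/32 12.60/26 18.15/36 14.4/22 11.82/23; do
  echo "scale=2; $pair" | bc
done
# prints roughly .48 .48 .50 .65 .51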

random is one thing… but when you go through the list of days and compare, it’s obvious…
yay, more work… it’s just ********* that every SNO affected by stuff like this, or wanting to expand, essentially has to go and steal ingress from the rest of the network…

but since it’s apparently fully allowed, and the project’s own behavior is now actually pushing me towards setting up something similar just to make my node run normally… of course then comes the whole "we are not allowed to run multiple nodes on the same storage media" rule, so for now i’m limited to a few nodes unless i want to scrap my main pool…

at least now i know what is wrong… might just call the ISP and see if they will change it… to sort of patch things until i find a more permanent solution… even though essentially there isn’t an end-all solution… as with most things…

lol I run 3 nodes on the same drive, 2 are on the same network, 1 is proxied. As long as my drives don’t care, I don’t care either.

yeah, it is a problem… there might be only 2 nodes within a 10 km radius, but if they share the same /24 subnet each gets only half the traffic. quite annoying when it hits you.

I don’t explain it, because I don’t know the cause yet. (And neither do you)

But here’s the thing: I’m seeing the same numbers you are, except I run 3 nodes. So according to your conclusion, someone recently started 3 nodes in my subnet that were all vetted in a really short time. This is obviously even more unlikely. So something else is going on.

I can’t explain why we’re seeing half the ingress traffic that others are. And by not jumping to a wrong conclusion, perhaps someone from Storj would actually find this curious enough to look into it.

Edit: Are you looking at pure normal ingress or ingress + repair ingress?

@SGC I’m sorry, I forgot about the other node. Get home, go absent-minded (check).

5mo fiber node 5.78/9TB

T0115-0000 (ingress/egress in GB)
date          ingress std   ingress repair   egress std   egress repair   daily space
2020-09-01    17.53         18.86            34.58        1.4             137.47TBh
2020-09-02    12.36         7.19             35.57        0.814           131.37TBh
2020-09-03    18.22         9.93             33.79        1.06            132.6TBh
2020-09-04    16.08         10.74            31.93        1.31            138.5TBh
2020-09-05    12.49         19               32.46        1.78            138.14TBh
2020-09-06    11.25         14.38            33.71        0.946           126.73TBh
2020-09-07    11.29         25.2             32.13        3.11            120.14TBh
2020-09-08    10.74         18.97            32.92        3.25            145.39TBh
2020-09-09    9.52          14.64            30.18        3.35            153.45TBh
2020-09-10    7.68          11.96            15.34        5.11            86.91TBh
2020-09-11    0.5           0.4              0.7          0.3             0TBh

4mo coax node 3.2/9TB

T0122-0000 (ingress/egress in GB)
date          ingress std   ingress repair   egress std   egress repair   daily space
2020-09-01    17.64         18.86            20.34        0.1             72.64TBh
2020-09-02    12.51         7.02             20.33        0.06            69.74TBh
2020-09-03    18.35         10.18            19.32        0.1             71.03TBh
2020-09-04    16.88         11.14            17.53        0.1             73.47TBh
2020-09-05    13.39         18.88            19.3         0.1             74.27TBh
2020-09-06    12.11         14.08            19.25        0.1             69.38TBh
2020-09-07    12.05         25.13            19.81        0.3             66.03TBh
2020-09-08    10.6          19.29            19.69        0.2             78.63TBh
2020-09-09    9.69          14.7             18.95        0.2             83.07TBh
2020-09-10    7.64          12.03            10.13        0.3             44.54TBh
2020-09-11    0.5           0.4              0.6          0.01            0TBh