Bandwidth utilization comparison thread

Yeah, I got 223 GB of total ingress for the month thus far… not much downtime aside from a few hours,
and a lot of reboots because of new hardware trouble.
So that's roughly 9 GB or so a day, and thus not far from half of what your average is…
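
(Back-of-the-envelope check; the ~25 elapsed days is my assumption based on the dates later in this thread:)

```python
# Rough daily-average check for the month-to-date ingress figure above.
month_to_date_gb = 223   # total ingress so far this month
days_elapsed = 25        # assumed number of elapsed days
print(f"~{month_to_date_gb / days_elapsed:.1f} GB/day")  # ~8.9 GB/day
```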

I got a 14.3 GB average for the 21st, 22nd and 23rd.

Hmmm, maybe I need to check if the other node is gone… my ingress has gone up these last few days.

Actually, running many nodes behind one IP doesn't increase the total profit for that one IP, but the same nodes spread across different IPs should make much more profit.
These are the results of one of the last nodes I started (it's in its 5th month):


And this is my first node's result (it's in its 13th month):

So feel the difference: a little more than $1 per month versus about $30 per month from one node. The most profitable months on the first node were months 4-6; after that I started adding more nodes.
Actual profit is near $60-70 per month, and it's still profitable. My connection is 300/300 Mbps, and together with electricity it costs me $14 per month. The whole system cost me around $230, so it will pay off soon (this month).

Yeah, the only good reason to run multiple nodes would be to distribute load across multiple disks or to make the best use of HDD capacity.

The subnet thing is because I'm not the one running both nodes on this subnet… so I'm currently splitting ingress with whoever the other guy is, because ingress is allocated by subnet. I'll be switching my IP in the near future, but I want to wait and give myself a bit more headroom to migrate my storage node to a different ZFS setup. I only have 4 TB of spare space to actually do the migration, so half the ingress isn't too bad until I've finished migrating.

But yeah, the number of nodes wouldn't matter within a subnet, as long as you're the only operator on that subnet… I just got unlucky.
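
Just to illustrate the "allocated by subnet" point: as far as I understand it, node selection effectively treats a /24 as one unit, so however many nodes sit in it, they share roughly one node's worth of ingress. A minimal sketch of that idea (the grouping logic and all numbers are illustrative, not Storj's actual selection code):

```python
# Illustrative only: ingress is shared per /24 subnet, not per node.
import ipaddress
from collections import defaultdict

node_ips = ["203.0.113.10", "203.0.113.20", "198.51.100.5"]  # example node IPs
subnet_ingress_gb_per_day = 14.0                             # example per-subnet share

by_subnet = defaultdict(list)
for ip in node_ips:
    subnet = ipaddress.ip_network(f"{ip}/24", strict=False)
    by_subnet[subnet].append(ip)

for subnet, members in by_subnet.items():
    per_node = subnet_ingress_gb_per_day / len(members)
    print(f"{subnet}: {len(members)} node(s), ~{per_node:.1f} GB/day each")
```

So two nodes in the same /24 each see about half the ingress a lone node would, which is exactly the split described above.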

I'm at 471 GB of ingress month-to-date with 588 GB of egress, and a slow-as-dirt increase in stored data, finally up to 5.91 TB today due to the low ingress.

I suspect ingress will be increasing a lot in the near future. Storj got scared of losing data and set the network's repair to a super high level, which has most likely taken up all the bandwidth that would otherwise go to ingress…

In the past, for a good while, we were seeing 100-250 GB of ingress a day for extended periods, so hopefully we'll be seeing numbers like that again soon.

And yeah, egress seems to be around 1.something times the daily ingress, but egress is a bit random… so it's not easy to give exact numbers for it. Hopefully we'll see some big customers set up business on the network soon…

The back half of July and all of August were a bit high for an extended period; that helped.

I suspect this might play a bit of a part too:

Just riffing here, but I wonder if one of the other satellites is used for this, specifically europe-north, as we're talking about a 10x decrease in traffic on the 10th.

I think europe-north is the new test data satellite, which replaces the now decommissioned / being refurbished stefan-benten satellite…

Or that's how I remember it… the developers even talked about doing this; it just sadly impacted ingress for a time. This is a big network and still very much in the start-up phase…

I doubt we can really feel any real-world traffic at the moment… simply too many nodes. I mean, there are like 6k nodes, if not many, many more, so just giving each 1 MB/s would take a 50 Gbit internet connection at max utilization… more like 60 Gbit when overhead and a bit of headroom are included, and that's just the minimum required.
That's a SBEEP-load of traffic; that's like what comes out of top-tier telescopes.
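
Just to show where that 50 Gbit figure comes from, a quick sketch (the 6,000-node count, the 1 MB/s per node, and the 20% overhead are all assumptions from the paragraph above, not measured values):

```python
# Back-of-the-envelope: uplink needed to feed every node 1 MB/s of ingress.
NODE_COUNT = 6_000          # assumed number of active nodes
PER_NODE_MBIT = 8           # 1 MB/s per node = 8 Mbit/s
OVERHEAD = 1.2              # assumed ~20% protocol overhead and headroom

raw_gbit = NODE_COUNT * PER_NODE_MBIT / 1_000
print(f"raw:           {raw_gbit:.0f} Gbit/s")              # ~48 Gbit/s
print(f"with overhead: {raw_gbit * OVERHEAD:.0f} Gbit/s")   # ~58 Gbit/s
```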

So before Storj lands some whale of a client like that, we won't see anything of note aside from test data… but the enterprise customers should soon be done testing, and then hopefully some of them will move in…

What script do you use for this?

Do you mean the dashboard? Or something else?

The green and black report on multiple nodes. Would love that for my nodes.

That's part of a bigger dashboard; you can read more here

It looks like this:

There are 2 dashboards available, so I recommend using the Boom Table one from the pull request.


Ahhh okay, I've been meaning to set that up for a bit now. I'll have to get on it. Thanks, and happy Storj'ing.

I got curious about

… So I remoted into the larger node and ran the Storj3Monitor PowerShell script to see what it had to say about repair per day and what the ingress vs. egress per satellite was, and came up with this for this afternoon:

S A T E L L I T E S   B A N D W I D T H
Legenda:
        i       -ingress
        e       -egress
        =       -pips from all bandwidth
        -       -pips from bandwidth of maximum node, or simple percent line
        * n     -down line supressed n times

satellite.stefan-benten.de:7777 (118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW)
Y-axis from 0 to 56.3 MiB; cell = 5.63 MiB; 1 nodes
│i
│i
│i
│i  i
│ie i
│ie i  i
│ie i  i
│ie i  i
│ie ie ie
└───────── * 1
 01 02 03
 - egress max 34.4 MiB (2020-09-01), average 22.1 MiB
 - ingress max 56.3 MiB (2020-09-01), average 42.3 MiB
 - bandwidth max 90.6 MiB (2020-09-01), average 64.4 MiB
 - bandwidth total 66.3 MiB egress, 127 MiB ingress


Disk space used this month
Y-axis from 0 to 18.3 GBh; cell = 1.83 GBh;  nodes
│                                                                                          --
│                                                                                          --
│                                                                                          --
│                                                                                          --
│                                                                                          --
│                                                                                          --
│--                                                                                        --
│--                                                                                        --
│--                                                                                        --
│--                                                                                        --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 25.7 GBh

asia-east-1.tardigrade.io:7777 (121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6)
Y-axis from 0 to 5.46 GiB; cell = 559 MiB; 1 nodes
│i
│i
│i     i  i
│i     i  i
│i  i  i  i  i  i
│i  i  i  i  i  i  i  i
│i  i  i  i  i  i  i  i  i
│i  ie i  i  ie ie ie ie i  i  i  ie     e     e        e  e  e
│ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie  e i
│ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie i  i
└───────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 1.9 GiB (2020-09-06), average 1.51 GiB
 - ingress max 5.46 GiB (2020-09-01), average 2.27 GiB
 - bandwidth max 7.09 GiB (2020-09-01), average 3.77 GiB
 - bandwidth total 37.6 GiB egress, 56.7 GiB ingress


Disk space used this month
Y-axis from 0 to 9.05 TBh; cell = 905 GBh;  nodes
│                                          --
│   --    -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 209 TBh

us-central-1.tardigrade.io:7777 (12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S)
Y-axis from 0 to 5.86 GiB; cell = 600 MiB; 1 nodes
│      i
│      i
│i     i  i
│i  i  i  i
│i  i  i  i  i     i
│i  i  i  i  i  i  i  i  i
│i  i  i  i  i  i  i  i  i  i
│ie i  ie i  i  i  i  i  i  ie ie ie ie ie ie ie ie ie  e ie  e ie
│ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie i  i
└─────────────────────────────────────────────────────────────────────────── * 1
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 2.03 GiB (2020-09-17), average 1.67 GiB
 - ingress max 5.86 GiB (2020-09-03), average 2.61 GiB
 - bandwidth max 7.78 GiB (2020-09-03), average 4.28 GiB
 - bandwidth total 41.7 GiB egress, 65.3 GiB ingress


Disk space used this month
Y-axis from 0 to 10.1 TBh; cell = 1.01 TBh;  nodes
│                                                         --
│      --       -- --       -- -- -- --    -- -- -- -- -- -- -- -- --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 229 TBh

europe-west-1.tardigrade.io:7777 (12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs)
Y-axis from 0 to 6.11 GiB; cell = 626 MiB; 1 nodes
│      i
│i     i
│i     i  i
│i     i  i
│i  i  i  i  i                        e  e  e  e  e
│ie ie i  i  ie ie ie ie i      e  e  e  e  e  e  e  e  e  e  e  e
│ie ie ie i  ie ie ie ie ie i   e  e  e  e  e  e  e  e  e  e  e  e  e
│ie ie ie i  ie ie ie ie ie i   e ie ie ie  e  e  e ie  e  e  e  e  e
│ie ie ie ie ie ie ie ie ie i  ie ie ie ie ie ie ie ie ie ie ie ie  e    i
│ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie i  i
└───────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 3.97 GiB (2020-09-15), average 2.97 GiB
 - ingress max 6.11 GiB (2020-09-03), average 2.64 GiB
 - bandwidth max 9.01 GiB (2020-09-03), average 5.61 GiB
 - bandwidth total 74.2 GiB egress, 66 GiB ingress


Disk space used this month
Y-axis from 0 to 12.3 TBh; cell = 1.23 TBh;  nodes
│                                    --
│                  --    --    --    --    -- --    -- -- --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 271 TBh

europe-north-1.tardigrade.io:7777 (12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB)
Y-axis from 0 to 16.5 GiB; cell = 1.65 GiB; 1 nodes
│    e
│ e  e  e  e     e  e  e
│ e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e                                            e
│ e  e  e  e  e  e  e  e  e                                            e  e
│ e  e  e  e  e  e  e  e  e                                            e  e
│ e  e  e  e  e  e  e  e  e                                            e  e
│ e  e  e  e  e  e  e  e  e                                            e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e  e  e  e                 e  e  e
└───────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 16.5 GiB (2020-09-02), average 7.33 GiB
 - ingress max 71.8 MiB (2020-09-01), average 44.9 MiB
 - bandwidth max 16.5 GiB (2020-09-02), average 7.37 GiB
 - bandwidth total 183 GiB egress, 1.1 GiB ingress


Disk space used this month
Y-axis from 0 to 106 TBh; cell = 10.6 TBh;  nodes
│                     --
│                  -- --    --       --    --    --
│-- -- -- --       -- --    --    -- --    -- -- --    -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 2.22 PBh

saltlake.tardigrade.io:7777 (1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE)
Y-axis from 0 to 10 GiB; cell = 1 GiB; 1 nodes
│    e
│ e  e  e        e     e
│ e  e  e  e  e  e  e  e  e        e
│ e  e  e  e  e  e  e  e  e  e     e  e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e        e     e  e  e     e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e        e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e        e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e        e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e  e  e  e  e  e     e  e  e  e  e  e  e  e  e  e
└───────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 10 GiB (2020-09-02), average 6.59 GiB
 - ingress max 41 MiB (2020-09-18), average 24 MiB
 - bandwidth max 10 GiB (2020-09-02), average 6.62 GiB
 - bandwidth total 165 GiB egress, 600 MiB ingress


Disk space used this month
Y-axis from 0 to 21.1 TBh; cell = 2.11 TBh;  nodes
│                                                         --
│                     -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
│   --    -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 469 TBh


All Satellites
Disk space used this month
Y-axis from 0 to 153 TBh; cell = 15.3 TBh; 1 nodes
│                     --
│      -- --       -- --    --    -- --    --    --    -- --    --
│-- -- -- -- --    -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                      --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
│-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --                   --
└─────────────────────────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 - total 3.4 PBh


S A T E L L I T E S    D E T A I L S

Satellite                Node    Ingress                Egress                 Audit UptimeF Comment
---------                ----    -------                ------                 ----- ------- -------
stefan-benten (118U-AW)  1ML4-dG [          ]  127 MiB  [          ]  66.3 MiB   100       0 -
asia-east-1 (121R-A6)    1ML4-dG [===       ]  56.7 GiB [=         ]  37.6 GiB   100       0 -
us-central-1 (12Ea-3S)   1ML4-dG [===       ]  65.3 GiB [=         ]  41.7 GiB   100       0 -
europe-west-1 (12L9-Ds)  1ML4-dG [===       ]  66 GiB   [=         ]  74.2 GiB   100       0 -
europe-north-1 (12rf-mB) 1ML4-dG [          ]  1.1 GiB  [====      ]  183 GiB    100       0 -
saltlake (1wFT-GE)       1ML4-dG [          ]  600 MiB  [===       ]  165 GiB    100       0 -



Repair by days
Y-axis from 0 to 23.5 GiB; cell = 1.56 GiB; 1 nodes
│                  i
│                  i
│                  i
│                  i
│i           i     i  i
│i           i     i  i
│i           i     i  i                    i
│i           i  i  i  i  i                 i
│i           i  i  i  i  i  i              i
│i        i  i  i  i  i  i  i  i           i  i
│i     i  i  i  i  i  i  i  i  i  i  i  i  i  i  i
│i  i  i  i  i  i  i  i  i  i  i  i  i  i  i  i  i        i  i  i  i
│i  i  i  i  i  i  i  i  i  ie i  i  i  i  i  i  i  i  i  i  i  ie i  i  i
│i  i  i  i  i  i  i  i  i  ie ie i  ie i  i  ie i  i  i  i  i  ie ie i  i
│i  i  i  i  ie i  ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie
└───────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 4.85 GiB (2020-09-22), average 2.66 GiB
 - ingress max 23.5 GiB (2020-09-07), average 10.4 GiB
 - bandwidth max 26.4 GiB (2020-09-07), average 13 GiB
 - bandwidth total 66.4 GiB egress, 259 GiB ingress


Traffic by days
Y-axis from 0 to 33.1 GiB; cell = 2.21 GiB; 1 nodes
│
│ e  e  e        e
│ e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e
│ e  e  e  e  e  e  e  e  e        e
│ie  e ie  e  e  e  e  e  e     e  e  e  e                             e
│ie  e ie ie  e  e  e  e  e  e  e  e  e  e        e                    e  e
│ie ie ie ie ie  e  e  e  e  e  e  e  e  e        e  e  e  e  e  e  e  e  e
│ie ie ie ie ie ie ie ie ie  e  e  e  e  e  e  e  e  e  e  e  e  e  e  e  e
│ie ie ie ie ie ie ie ie ie ie  e  e  e  e  e  e  e  e  e  e  e  e  e  e  e
│ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie  e  e  e
│ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie ie
└───────────────────────────────────────────────────────────────────────────
 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
 - egress max 33.1 GiB (2020-09-02), average 20.1 GiB
 - ingress max 17 GiB (2020-09-03), average 7.59 GiB
 - bandwidth max 48.5 GiB (2020-09-01), average 27.7 GiB
 - bandwidth total 502 GiB egress, 190 GiB ingress

Well, it might not look like it… but it's mentioned here: apparently the repair threshold went from 35 to 52.
(Figuring out that "repair threshold" was what it's called took a bit, lol.)

Basically it's just maintenance to guard against long-term issues, so the network will most likely return to normal operation afterwards, which usually means much higher ingress.
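
For context, here's a minimal sketch of what a repair threshold means in an erasure-coded network like this; the logic is illustrative, and 52 is just the value quoted above, not anything pulled from Storj's code:

```python
# Illustrative: each segment is stored as many erasure-coded pieces on
# different nodes. When the count of healthy pieces falls to the repair
# threshold, the satellite rebuilds the segment and uploads fresh pieces
# to other nodes -- that upload shows up on nodes as repair ingress.
REPAIR_THRESHOLD = 52  # value mentioned in the thread (raised from 35)

def needs_repair(healthy_pieces: int, threshold: int = REPAIR_THRESHOLD) -> bool:
    """True when a segment has lost enough pieces to be queued for repair."""
    return healthy_pieces <= threshold

print(needs_repair(60))  # False: still comfortably above the threshold
print(needs_repair(50))  # True at 52; would not have triggered repair at 35
```

Raising the threshold from 35 to 52 means segments get queued for repair while they are still much healthier, which is why repair traffic jumped.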

Hopefully it will last another week or so, which gives me time to migrate to a new ZFS setup. I just ordered 3 additional HDDs so I can get my pool into 3x raidz1 vdevs of 4 drives each, which trims my redundancy losses quite a bit. I'm running raidz1 with 3 drives per vdev now, so going to 4 will give me a 50% boost in usable capacity (actually a bit more, because I have a couple of 6 TB drives paired with a 3 TB drive, so I don't even get full utilization out of those).
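
Roughly, the capacity math looks like this (a sketch assuming equal-size drives, which, as noted above, isn't quite true for the mixed 6 TB / 3 TB pairs):

```python
# raidz1 keeps one drive's worth of parity per vdev.
def raidz1_usable_tb(drives_per_vdev: int, vdevs: int, drive_tb: float) -> float:
    return vdevs * (drives_per_vdev - 1) * drive_tb

before = raidz1_usable_tb(drives_per_vdev=3, vdevs=3, drive_tb=6)  # 36 TB usable
after  = raidz1_usable_tb(drives_per_vdev=4, vdevs=3, drive_tb=6)  # 54 TB usable
print(before, after, f"+{after / before - 1:.0%} usable space")    # +50%
```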

But it was an easy and good way to ensure I have plenty of raw IOPS, which lets my system run great.

I think yesterday (2020-09-25) specifically they really pumped some bytes for that repair change; today (2020-09-26) has been high for repair so far too:




I think this could have also had something to do with the uptick in repair traffic:

The graph shows a total loss of 588 nodes since 24 Sep… although that may not truly be the case: per the note on the page, a node is counted as active if it was reachable within the last 24 hours, so some nodes may have just been offline for a day or two…

There are also people gaming the system and exploiting it wherever they can,
so maybe a few of those got weeded out.

On a positive note, I managed to get the other guy off "my" subnet.
It took a week or two from when I discovered he was there, but adding a 2nd node put him over the edge… I had been working on doing that for a while.

I finally got Docker working inside a container with the ability to directly access the host filesystem.
Of course, when I added a 3rd node to the shared subnet, he noticed the significant drop in his new node's ingress… and came to the forum :smiley: where I had been keeping an eye out.

So back to full bandwidth here, I hope, for a good long time… though I will say that's a good reason for using the default port and the range around it, say 2896x with x being 1-9:
it avoids unseen collisions between nodes, because people can just scan the subnet for it. It's only 255 IPs, so it's pretty easy.
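
For illustration, finding nodes on a /24 really is just a port sweep; a minimal standard-library sketch (203.0.113.0/24 is an example range, and 28967 is the commonly used default storagenode port):

```python
# Sweep a /24 for hosts answering on the default storagenode port.
# Sequential and slow (worst case a couple of minutes); purely illustrative.
import ipaddress
import socket

NETWORK = ipaddress.ip_network("203.0.113.0/24")  # example subnet
PORT = 28967                                      # default storagenode port

for host in NETWORK.hosts():
    try:
        with socket.create_connection((str(host), PORT), timeout=0.5):
            print(f"{host}:{PORT} is open")
    except OSError:
        pass  # closed, filtered, or unreachable
```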

Hopefully the subnet I'm on will be filled soon, and since it's all static IP addresses, few new systems/users will enter it once it's full. But right now I think it's one of the subnets, if not the subnet, that every new static IP lease from the ISP I'm with ends up in.

Which is also kind of why I didn't want to move… imagine I got into a new subnet that was all empty: that's 250-odd systems, each one with a chance of becoming a Storj node, while right now I'm guessing there are maybe 30 spots left in this subnet I've been in for, like… years.
So when it's full, everybody new will be bundled into other subnets, and I should be almost 100% immune to getting new nodes on my subnet, unless one of the existing 250 users wants to become a Storj host. And let's be fair…

Maybe 10% are IP cams, then at least 10-20% web hosting (which may potentially include future Storj users), and then there are some users who want remote access to their network due to travel or such… though I don't see why one would use a static IP rather than something like DDNS for that, but let's call that 10% as well. Put another 20-30% down for random stuff people want to access directly over the internet:
greenhouse control systems, home alarms, cars, home lighting, garage doors, you get the idea…

When we add all those up, maybe 10-30% of a subnet is users who might even consider hosting Storj.
Out of those, let's say 1/5 end up doing that rather than other projects, which IMO is pretty high (should be more like 1/20 or 1/10), so that would be a 2-6% chance of somebody else starting a node… out of 250 or so.
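
Putting those guesses into numbers (every fraction here is just the rough assumption from the paragraphs above, nothing measured):

```python
# Rough odds that any given remaining IP on the /24 turns into a node.
candidate_share = (0.10, 0.30)  # assumed share of IPs that might consider hosting
takes_the_leap = 0.20           # assumed 1 in 5 of those actually starts a node

low, high = (share * takes_the_leap for share in candidate_share)
print(f"{low:.0%} to {high:.0%} chance per remaining IP")  # 2% to 6%
```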

Of course the math is completely different if we imagine a new Storj user on the same ISP requesting an IP address, with this subnet being the default one people are put on until it's full…

That gives more like a 100% chance for every remaining IP address, if they happen to be a Storj user xD

So, well, long rant, but I'm quite pleased that I managed to avoid moving subnet, and I hope that soon the odds of me having to share a subnet will be reduced to, well, like we did the math: maybe 2-6% yearly, plus the odds from systems leaving and possible new systems entering the subnet…

But stuff like IP cams, home alarms, sprinklers, cars and garage-door IPs doesn't really change that much…

I should go do something productive instead of sitting here ranting…


@kalloritis

Yeah, it sure did make a good spike.

For a second I just thought, OMG, it was because the other guy left my subnet… but yeah, not so much.
At least he only affected ingress, and that's been pretty quiet, so not much harm done.

Somebody should really fix the image viewer… or maybe it's just my crappy browser that will soon get the final X of death, but I cannot zoom enough to see the image. I can enlarge it, but it's not enough… of course it's also a crappy screen; I'm sure if I had state-of-the-art gear it might not be a problem.

And I can of course download it to view it correctly… but why should I have to do that round trip every time I want to view an image that's a bit out of proportion… it just cooks my bacon.

I thought of something similar a few weeks ago: to minimize the chances of another Storj node landing in the same subnet, one could simply spin up multiple 500 GB nodes on the server instead of one big node. That way a newcomer would get such a small slice of the traffic that he would have to change subnet if he wants his node to fill up at a reasonable speed.
Not a very nice strategy, and it's kind of gaming the system, but it should work pretty well.
At least until Storj decides to change how they handle multiple nodes on the same subnet.


where did you get this graph from?