Bandwidth utilization comparison thread

I just don’t get this whole IP subnet thing… how many are we allowed to have? Because that’s really the main way to make better profits… twice the data for little extra expense… or 3 times… or 4 times… or 8 times… If we can have multiple subnets, that’s really the only way to go… then the same hardware can earn many multiples for the same money… and yeah, it may only work for test data and until the nodes are full… but why would one ever want to run out of space, if the space earns itself back…

And like you are saying now… it will take you months to get back to 100%, but with 4 connections you would get 400GB a day… so something like 12TB a month… I still have 4 months left for mine to fill, and by then I expect I’ll just upgrade it again…
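The projection above can be checked with some back-of-the-envelope arithmetic. A minimal sketch, assuming each connection (subnet) receives roughly 100GB of ingress per day, as the post implies:

```python
# Rough projection for the "4 connections" scenario above.
# Assumption: ~100 GB/day of ingress per subnet (figure implied by the post).
connections = 4
ingress_per_connection_gb = 100          # assumed daily ingress per subnet
daily_gb = connections * ingress_per_connection_gb   # 400 GB/day
monthly_tb = daily_gb * 30 / 1000                    # ~12 TB/month
print(f"{daily_gb} GB/day ≈ {monthly_tb:.0f} TB/month")  # 400 GB/day ≈ 12 TB/month
```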

Well, questions for Monday :smiley:


Here are my numbers for the 17th; egress went up a bit but not much.
Disk space used still does some weird stuff: I don’t get how it can decrease from one day to the next even though I got 115GB of ingress…
That means I had more than 100GB worth of deletes, which seems odd.
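The inference above can be made explicit: the change in stored space equals ingress minus deletes, so a decrease despite 115GB of ingress implies deletes larger than the ingress itself. A sketch with a hypothetical decrease (only the 115GB figure comes from the post):

```python
# Implied deletes when stored space drops despite heavy ingress.
# deletes = ingress - change_in_stored; a negative change makes deletes > ingress.
ingress_gb = 115.0            # from the post
stored_change_gb = -5.0       # hypothetical day-over-day decrease
deletes_gb = ingress_gb - stored_change_gb
print(f"implied deletes: {deletes_gb:.0f} GB")  # 120 GB, i.e. well over 100 GB
```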

17 Jul 2020:
Node 1: [dashboard screenshot]
Node 2: [dashboard screenshot]

Yeah, but that goes against the reason they implemented subnet splitting in the first place, if you just use different ISPs (or VPN tunnels) so you have different subnets at the same physical location. It is effectively cheating the system, but funnily enough, I couldn’t find anything in the ToS that actually prohibits having/using multiple subnets at the same physical location (unless I missed that part somewhere).



Yesterday was a good day, but I got 1.5GB egress from Asia, 1.5GB from us-central, and 2.7GB from europe-west, so around 6GB of customer traffic. Out of 33GB total egress, that’s at least 18% customer traffic. Customer ingress is 13GB, so 11%.
So still a long way to go for Storj Labs.
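The egress percentage above can be reproduced directly from the per-satellite figures in the post (the post rounds the 5.7GB sum up to roughly 6GB before taking the percentage):

```python
# Customer-traffic share of egress, using the figures from the post above.
customer_egress_gb = 1.5 + 1.5 + 2.7   # Asia + us-central + europe-west = 5.7 GB
total_egress_gb = 33.0
share = customer_egress_gb / total_egress_gb
print(f"customer egress share: {share:.1%}")  # 17.3% (~18% with the sum rounded to 6 GB)
```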


They have around 245M STORJ tokens from the 2017 token sale, so they could keep doing this for years. But there are other scenarios and risks summarized in the 2017 token sale terms (see Exhibit C).


Sorry, been a bit busy… I think we will move over to a weekly summary, and I’ll figure out a script so the data is easily captured and transmitted for the next stage, so that we don’t burn out on all of this voodoo to collect information.

So far though, it’s been highly useful to have all this information in a fairly easy-to-access way…

I’m throwing together a Google Sheet so that everyone can enter their numbers.
I’ll have to give edit rights to everyone, though, so I hope the spreadsheet doesn’t turn into a mess haha.
I’ll make a post explaining the thing and how to enter the results.

Alright, so here is the link to the spreadsheet:


I put my numbers in for yesterday as an example. If there aren’t any empty lines under everyone else’s, just right-click on one of the cells and click “Insert a line”. If enough people put in their numbers, I might try to make some graphs, but that might be challenging.
Anyway, let me know what you think and whether the link works!

I can see nobody has replied to you, but I’m noticing a similar issue this month. The “Disk Space Used This Month” graph is fluctuating daily, quite rapidly now, without the corresponding ingress or data deletions.

That’s because the source data used for that graph is updated infrequently. So some days are missing storage-used reports, which are then compensated for the next day.


Thanks. That makes sense.

Put my numbers in.
But today my secondary node got vetted on europe-north, so it started hogging the network lol… so tomorrow my ingress will fall; we’ll see how it behaves with egress.

Just so I understand: you calculate “Average Disk Space Used [TB]” by taking the daily “Disk Space Used [TBh]” figure, which is TB*h, and dividing by 24?

Exactly; it’s not very accurate, but at least it doesn’t change depending on when you upload your numbers.
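The conversion described above is just a unit change: a daily TB*h total divided by 24 hours gives the average amount stored over that day. A minimal sketch with a hypothetical daily figure:

```python
# Converting the dashboard's daily "Disk Space Used" figure (TB*h)
# into an average stored amount, as described above.
disk_space_used_tbh = 52.8                        # hypothetical daily TB*h figure
average_disk_used_tb = disk_space_used_tbh / 24   # TB*h / h = TB
print(f"average stored: {average_disk_used_tb:.2f} TB")  # average stored: 2.20 TB
```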

Could you add the disk space used?

Again, those graphs are wildly unreliable, as demonstrated here…
Basically useless filler on the dashboard for now…

I don’t quite understand… you convert the daily TB*h figure to an hourly amount (dividing by 24), but then you use the daily egress to calculate a daily percentage based on the hourly TB*h?

Wow, yours varies quite a bit! I can change it to “total disk space used”.