Bandwidth utilization comparison thread

Hi, I joined Storj as a node operator yesterday, so let me post what I got. Obviously I am still in the vetting stage, but from what I saw in other posts I got more traffic than is normal for a vetting node (luckily :slight_smile: ).

I’ll post updates in the upcoming days.

I like your optimism with the 513 TB :smile:

It’s ingress alright, and it goes like this: a customer uploads data and it’s split into 80 pieces or some such number, spread across different nodes… I forget the exact figure, but it’s not important for the point I’m making.
Then when people download their data, you are paid for the downloaded data, which is reconstructed from those pieces…

Then time passes and some of the pieces that were initially generated and sent out to those nodes are lost (nodes leave or go offline). This triggers the repair function: the satellite then downloads the remaining pieces, or however many are required (repair egress).

These pieces are then run through the Reed-Solomon encoding, or whatever it’s called, to generate new pieces which contain the customer’s data.

These newly generated pieces are then uploaded to nodes with free capacity; this is repair ingress…

So really there isn’t any practical difference between regular ingress and repair ingress.
It’s still customer or test data that can be downloaded, and it gets added to your node…

One could even argue that repair ingress, since it is most likely older data, could be more valuable, because older test data is often downloaded more, so repair ingress could potentially mean higher egress.

Also, the odds of customers who use Tardigrade as a backup actually needing their data go up with age, which again gives about the same result… so I actually think I would prefer repair ingress.
But I doubt it really matters much.
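
For anyone who wants to poke at the mechanics, here is a minimal sketch of that piece lifecycle, assuming k = 29 pieces needed to rebuild a segment, n = 80 pieces stored, and a repair threshold of 52 remaining pieces. All three numbers and the churn model are illustrative assumptions, not the network’s exact settings:

```python
import random

K_NEEDED = 29          # pieces required to rebuild a segment (assumed)
N_STORED = 80          # pieces originally uploaded to distinct nodes (assumed)
REPAIR_THRESHOLD = 52  # repair kicks in when this few pieces remain (assumed)

def simulate_segment(monthly_node_churn=0.05, months=24, seed=1):
    """Track one segment's pieces and count repair egress/ingress in pieces."""
    rng = random.Random(seed)
    pieces = N_STORED
    repair_egress = repair_ingress = 0
    for _ in range(months):
        # Some of the nodes holding pieces leave the network this month.
        pieces -= sum(rng.random() < monthly_node_churn for _ in range(pieces))
        if pieces <= REPAIR_THRESHOLD:
            # Satellite downloads k pieces from surviving nodes (repair egress)...
            repair_egress += K_NEEDED
            # ...re-encodes the segment and uploads replacement pieces to
            # new nodes (repair ingress), restoring the full count.
            repair_ingress += N_STORED - pieces
            pieces = N_STORED
    return repair_egress, repair_ingress

egress, ingress = simulate_segment()
print(f"repair egress: {egress} pieces, repair ingress: {ingress} pieces")
```

Multiply the piece counts by the piece size and you get the repair egress/ingress bandwidth being described above.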

I understand, but it is not network ingress/egress; like I’ve said, it is internal, or node ingress/egress if you’d like. No customer is uploading or downloading data when repair traffic occurs.

It is like economics, I guess: looking at the world economy and looking at your own bank account are two different things.

As it stands now, the network is experiencing almost no traffic compared to previous months, test or real. I assume either more people are leaving than usual, or more pieces are reaching the repair threshold, hence the increased repair traffic. Holidays, vacations, and time off work are my guess for the cause on the real-traffic side.

It’s spring, and people are itching to get out…
I don’t really think it’s a real problem; there are always ebbs and flows in stuff like this…
Also Storj Labs seems to be doing some upkeep / updates / migrations and such… most likely because they expected this period to be a bit below normal.

Also we have a ton of nodes on the network these days… so it takes a lot of traffic to make an impact…
I mean we passed something like 10k nodes around Christmas… so those nodes are well integrated now.

So a 1 GB upload would be about 3.3 GB uploaded to the network when accounting for the data expansion in the Reed-Solomon encoding.
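
As a rough illustration of where a figure like that comes from, here is a small sketch. The (k, n) values are assumptions: k = 29 / n = 80 is the pair usually quoted for the network, which gives about 2.76x, so an effective ~3.3x as mentioned above would imply extra long-tail uploads or different parameters:

```python
def expansion_factor(k: int, n: int) -> float:
    """Reed-Solomon expansion: n pieces are stored but only k are needed,
    so every customer byte turns into n/k bytes of node ingress."""
    return n / k

def network_ingress_gb(customer_gb: float, k: int, n: int) -> float:
    """Total GB uploaded to nodes for a given customer upload."""
    return customer_gb * expansion_factor(k, n)

# Assumed parameters, for illustration only.
print(network_ingress_gb(1.0, 29, 80))  # ~2.76 GB on the network per 1 GB uploaded
print(network_ingress_gb(1.0, 29, 96))  # ~3.31 GB if the effective piece count were higher
```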

Not the same as YouTube… but the network is still only a year old… it will take some time before people trust it and test it… and the internet is growing at a phenomenal pace.

I forget when the Square Kilometre Array comes online… but they expect that telescope to generate more data than the entire internet holds today, every year or so… lol

The ingress will pick up, and soon I would also suspect Tardigrade will see a ton more customers, as people start to learn about the advantages and as Storj Labs gets everything streamlined.

Just look at all the hype trains going on around stuff like Filecoin… it’s hilarious, and they’ve barely got anything… lol
Storj actually has a fully operational network in its working form… but Filecoin has friends at JPL, or one of the co-creators worked there or something like that… so it’s gotten all that old-school Silicon Valley startup hype… lol

I’ve been trying to figure out if it’s actually a viable project… but it’s just so convoluted and overcomplicated…
It’s also expensive to join, and their storage miners actually went on strike, lol, because of poor payment…

Now you can see people from other projects starting to flood in, looking for success with running Storj nodes…
Most likely an indication that it might just get crazy soon…

@f14stelt I don’t want to dash your hopes, but realistically 500+ TB will never fill up :sweat:

Have a look at the following estimator: copy it into your own Google account and fill in your numbers to get an idea of how long it takes to accumulate data:

Just so you know :slight_smile:

Oh I know @Pac, but that’s not a problem. It’s some spare storage I can use, so it’s just to make something out of it.

I don’t know how v2 stored 150 PB of data at a time when it was nothing close to the product that v3 is now, and when it was much less known. Was that a gross amount? Even if it was, that would still be something like 25 PB of actual data in 2 years; v3 feels like it’s gaining adoption much more slowly judging by those numbers alone.

Just some “spare storage” huh? That amount seems just ludicrous to me…

Anyways, good luck with that.

Well, if one wants to do exabyte storage, then one needs to know why v2 didn’t work, and the only way to do that, if one doesn’t completely understand it, is to run an experiment… so it was most likely test data, and I think they also offered free storage for a while.

So I’m not sure it’s a real problem, but it sure would be nice if the stored data were 6x what it is… of course at present that would mean my storage was full… and then some…

And really, thinking back to when I was running a v2 node just before v3 was announced, I think there were around 1,600 nodes…
You’re trying to tell me that 1,600 nodes stored 150 PB? Seems like something is off in those numbers…
Of course I was away a long time, and apparently v2 didn’t get shut down like I thought it did…

Even now we only have 12,000 nodes; for them to store 150 PB it would have to be something like 12.5 TB each, which again seems unlikely, since most nodes will be running off a single dedicated HDD and I doubt they are all top tier.

On another note, I continued my “vetting node on a vetted node’s IP” experiment, with some slight modifications.
To summarize: from the 1st of the month to the 5th I ran two nodes which are nearly mirrors of each other, with a normal deviation in ingress of less than 1%
(more like 0.25% when looking at the full-month graph).

Step 2: now that I had a baseline, I removed a node that was being vetted on node 1’s IP.
After 3 days their deviation was 4% on the monthly total, in favor of the node that was alone on its IP.
Checking up on it, I found my vetting node was already vetted on US2, a satellite with basically no data.

But it would still take a significant share of the vetted node’s ingress, so this wasn’t clearly proving the point I was trying to make, even though for all practical purposes it does show that vetting a node on an existing vetted node’s IP simply isn’t worth it…

Alas, realizing my experiment was mildly flawed, I reset it…
Kinda… I put a brand new node on node 1’s IP address

and removed the node that was vetted on US2 from node 2’s IP address.
This was done on the 8th, so about 72 hours ago.
The starting deviation between the nodes, in favor of node 1, was 2.9 GB.

Node 1: unvetted + vetted

Node 2: vetted, alone on its IP

Their deviation is now down to 2.13 GB and has been decreasing…
because I switched the unvetted node from node 2’s IP to node 1’s IP.

The vetting node does seem to gain more ingress than the vetted node on the same IP loses.
But we will see how long that lasts… that’s pretty good ingress from day 1…
Not sure what that is about.

Unvetted node on node 1’s IP, created 73 hours ago

For now the numbers seem to say it might be worth it… but still, even at this 7.35 GB of ingress, the 15% is basically just stolen from the older node… I know it’s not super accurate yet… there should be clearer numbers in another 3 days… ingress is so dead right now.
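
A quick sketch of the bookkeeping behind this comparison, under the assumption (which the experiment above is testing) that the satellites treat all nodes behind one /24 subnet as a single destination, so nodes sharing an IP roughly split the ingress a lone node would receive. The numbers below are made up for illustration:

```python
def deviation_pct(a_gb: float, b_gb: float) -> float:
    """How far node a's ingress is from node b's, as a percentage of b's."""
    return (a_gb - b_gb) / b_gb * 100.0

def split_subnet_ingress(subnet_ingress_gb: float, nodes_on_subnet: int) -> float:
    """Expected ingress per node if the satellite effectively picks one node
    per /24 subnet, so nodes sharing a subnet split what a lone node gets."""
    return subnet_ingress_gb / nodes_on_subnet

# Illustrative only: if a lone vetted node would get ~25 GB in a period,
# two nodes sharing that subnet would get roughly ~12.5 GB each.
lone = 25.0
shared = split_subnet_ingress(lone, 2)
print(f"shared-subnet node: {shared:.1f} GB, "
      f"deviation vs lone node: {deviation_pct(shared, lone):.0f}%")
```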

Dude, your numbers look great. I have a few nodes that go like this: all of them finished vetting, some are older than others, and all are on different IPs and different subnets. The numbers below cover the beginning of this month to today.

Slovakia-1
Ingress: 53.4 GB
Egress: 7.22 GB

Czechrepublic-1
Ingress: 31.6 GB
Egress: 4.39 GB

Czechrepublic-2
Ingress: 25.4 GB
Egress: 2.6 GB

Czechrepublic-3
Ingress: 110.7 GB
Egress: 12.0 GB

The only issue with “Czechrepublic-3” is that the ingress is roughly equal to the deletion rate, hence it isn’t growing and is stuck at 1 TB. I’m currently troubleshooting it, but I have no idea why it is happening, because no audits fail and uptime is nearly 100%.

Nonetheless, your numbers look great.

I would say only Czechrepublic-3 has finished vetting.
Remember, your nodes need to be vetted on each individual satellite to get full ingress from that satellite when it is pushing data.

But there isn’t anything unusual about vetting taking longer on satellites that aren’t pushing data… so you can be at near 100% of the possible ingress for a vetted node, and then when the data switches and comes from a new satellite you drop down to 5% or whatever…

I’ve got 4-month-old nodes that still haven’t been vetted on all satellites, but I think I’ve been making a mistake in leaving them on the same IP…
From what I can see it’s simply not worth it: they vet slower, they take ingress from the older nodes, and all in all it’s just a terrible idea.
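
A rough sketch of why per-satellite vetting status matters so much for these totals, assuming (as quoted above) that a node only gets on the order of 5% of a satellite’s ingress until it finishes vetting on that satellite. The 5% figure, the satellite names, and the traffic split are all assumptions for illustration:

```python
UNVETTED_SHARE = 0.05  # assumed share of a satellite's ingress an unvetted node receives

def expected_ingress(satellite_ingress_gb: dict[str, float],
                     vetted_on: set[str]) -> float:
    """Sum ingress across satellites, discounting satellites still vetting the node."""
    return sum(gb if sat in vetted_on else gb * UNVETTED_SHARE
               for sat, gb in satellite_ingress_gb.items())

# Hypothetical monthly ingress pushed by each satellite, in GB.
traffic = {"us1": 80.0, "eu1": 30.0, "europe-north": 100.0, "us2": 0.5}

print(expected_ingress(traffic, vetted_on=set(traffic)))           # vetted everywhere
print(expected_ingress(traffic, vetted_on={"us1", "eu1", "us2"}))  # still vetting on europe-north
```

If the satellite currently pushing most of the data is the one that hasn’t vetted the node yet, the monthly total drops sharply even though the node looks “mostly vetted.”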

Well, I’m lucky, as I have access to multiple IP addresses on different subnets. Yes, CZ1 and CZ2 are new, but tomorrow they will be 2 months old. It’s 2 satellites that haven’t fully vetted them, and I suspect those aren’t pushing as much data.

To be honest I don’t mind that much; what I need is for CZ1 and CZ2 to reach 5 TB each within 9 months, or close to that, so that they will be fully self-sufficient, because then they will be able to cover the costs of running them.

Yeah, ingress does seem to be getting more and more scarce as the months go on… I think all the crypto hype might have created an influx of new SNOs, and then maybe a few other factors on top of that.

Hopefully it will at least keep up the 500 GB of ingress per month… that’s not too bad…
And of course the deletions are a bit less… but I doubt we will be at a 100% deletion-to-ingress ratio forever… it has to be a phase,
or an error in the Storj Labs test data… I mean, a node with 1 TB of data getting 500 GB of ingress and 500 GB of deletions… well, that’s basically just uploading data to delete it again…

But that’s what tests are for… maybe they are optimizing the deletion methods.

I know there have been some issues with that in the past…

Also take into account the newly planned Reed-Solomon scheme that is supposed to reduce the redundancy requirements. I read that this will lead to less ingress and less required storage capacity.
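
To see why lower redundancy means less ingress and less stored data per customer byte, here is a tiny comparison. Both (k, n) pairs are hypothetical examples, not the actual current or planned network parameters:

```python
def expansion(k: int, n: int) -> float:
    # n pieces stored for every k needed, so n/k bytes land on nodes per customer byte
    return n / k

customer_tb = 100.0  # TB uploaded by customers in some period (made up)

for label, k, n in [("higher-redundancy scheme", 29, 80),
                    ("lower-redundancy scheme", 29, 64)]:
    print(f"{label}: {customer_tb * expansion(k, n):.0f} TB of node ingress/storage")
```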

I see satellite dishes in our future… or wait, is it… extraterrestrial?

Anyways, sounds kinda cool… it will be interesting to see if the performance picks up…
It sucks that we get less data, but that’s all relative… less waste means more client data, which means the current pay scheme can last longer…

So most likely it’s not all bad.

The usage/repair ratio appears to be returning to normal… It was 1:5 at points, but it’s now less than 1:2.

For those that haven’t gotten updated to v1.25.x and are still on v1.24.x:
it seems that the ingress limit might have been turned on last night…

I updated to v1.25.x, and it can be seen pretty clearly that ingress picks up again after the update.
I don’t know for sure that this was the cause, but I was checking the allowed versions a few days ago, and now that my ingress has dropped to basically zero, I rechecked the minimum version and it’s v1.24…
So I’m not sure what else to think…

I was kinda hoping to jump directly to v1.26.x,
which is like a day or whatever from being released on Docker; it’s been 16 days since the last one was released…

So maybe it’s just late and all of this is automatic… but it’s still kinda lame that one cannot skip a single version without getting punished.

Anyways, I wanted to inform those of us who can only do manual updates.

Given that I have 11 TB of space available for data, my recent ingress isn’t as good as @SGC’s.

I think the fact that you are vetted on Europe North is making all the 20x difference.