Bandwidth utilization comparison thread

Oh I know @Pac, but that's not a problem, that's some spare storage I can use, so it's just to make something with it.


I don't know how v2 stored 150 PB of data at a time when it was nothing close to the product v3 is now, and when it was much less known. Was that a gross amount? Even if it was, that's still 25 PB of real data in 2 years (assuming ~6x redundancy); v3 feels like it's gaining adoption much more slowly, judging by those numbers alone.


Just some “spare storage” huh? That amount seems just ludicrous to me…

Anyways, good luck with that.


well if one wants to do exabyte storage, then one needs to know why v2 didn't work, and the only way to do that, if one doesn't completely understand it, is to run an experiment… so it was most likely test data, and I think they also offered free storage for a while.

so I'm not sure it's a real problem, but it sure would be nice if that stored-data figure was the 6x gross amount… ofc at present that would mean my storage was full… and then some…

and really, thinking back to when I was running a v2 node just before v3 was announced, I think there were 1600 nodes…
you're trying to tell me that 1600 nodes stored 150 PB? seems like something is off in those numbers…
ofc I was away a long time, and apparently v2 didn't get shut down like I thought it did…

even now we only have 12000 nodes; for those to store 150 PB it would take about 12.5 TB each, which again seems unlikely, since most nodes will be running off a single dedicated hdd and I doubt they are all top tier.

On another note, I continued my vetting-node-on-a-vetted-node's-IP experiment, with some slight modifications.
to summarize: from the 1st of the month to the 5th I ran two nodes which are near mirrors of each other, with a normal deviation in ingress of less than 1%
(more like 0.25% when looking at the full-month graph)
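(for anyone who wants to replicate the comparison, this is roughly the math; a minimal sketch with made-up daily totals, you'd substitute your own dashboard numbers)

```python
# minimal sketch: percent deviation in ingress between two near-mirror nodes.
# the daily totals below are made-up placeholders; substitute your own
# numbers from each node's dashboard.

node1_daily_gb = [5.1, 4.9, 5.3, 5.0, 5.2]  # 1st through 5th of the month
node2_daily_gb = [5.0, 5.0, 5.2, 5.1, 5.1]

total1 = sum(node1_daily_gb)
total2 = sum(node2_daily_gb)

# deviation expressed as a percentage of the mean of the two totals
mean = (total1 + total2) / 2
deviation_pct = abs(total1 - total2) / mean * 100

print(f"node1: {total1:.2f} GB, node2: {total2:.2f} GB")
print(f"deviation: {deviation_pct:.2f}%")
```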

step 2: now that I had a baseline, I removed the node being vetted from node1's IP.
after 3 days their deviation was 4% on the monthly total, in favor of the node being alone on its IP.
checking up, I found my vetting node was already vetted on US2, a satellite with basically no data.

but it would still take a significant part of the vetted node's ingress; this wasn't cleanly proving the point I was trying to make, even though for all practical purposes it does prove that vetting nodes on existing vetted nodes' IPs simply isn't worth it…

alas, realizing my experiment was mildly flawed, I reset…
kinda… I put a brand-new node on node1's IP address

and removed the node that was vetted on US2 from node2's IP address.
this was done on the 8th, so about 72 hours ago.
the starting deviation between the nodes, in favor of node1, was 2.9 GB

node1 - unvetted + vetted

node2 - vetted, alone on its IP

their deviation is now down to 2.13 GB and has been decreasing…
because I moved the unvetted node from node2's IP to node1's IP

the vetting node does seem to get more ingress than the vetted node on the same IP loses.
but we will see how long that lasts… that's pretty good ingress for day 1…
not sure what that is about.

unvetted node on node1’s ip, created 73 hours ago

for now the numbers seem to say it might be worth it… but still, even at this 7.35 GB of ingress, that 15% is basically just stolen from the older node… I know it's not super accurate yet… should be clearer numbers in another 3 days… ingress is so dead right now.

Dude, your numbers look great. I have a few nodes that go like this; all of them finished vetting, some are older than others, and all are on different IPs and different subnets. The numbers are valid to this day, since the beginning of this month:

| Node | Ingress | Egress |
| --- | --- | --- |
| Slovakia-1 | 53.4 GB | 7.22 GB |
| Czech Republic-1 | 31.6 GB | 4.39 GB |
| Czech Republic-2 | 25.4 GB | 2.6 GB |
| Czech Republic-3 | 110.7 GB | 12.0 GB |

The only issue with "Czech Republic-3" is that the ingress is approximately equal to the deletion rate, hence it isn't growing and is stuck at 1 TB. I'm troubleshooting it currently, but I have no idea why it's happening, because no audits fail and uptime is nearly 100%.
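(rough sanity check of that, with placeholder numbers; the point being that if ingress ≈ deletions, net growth is ~0 and the stored total just sits there)

```python
# rough sanity check: a node only grows by ingress minus deletions.
# placeholder numbers; plug in your own monthly figures.

ingress_gb = 110.7   # what the node received this month
deleted_gb = 108.0   # hypothetical deletion total for the same period

net_growth_gb = ingress_gb - deleted_gb
print(f"net growth this month: {net_growth_gb:.1f} GB")
# if this hovers around zero, the node sits at the same stored total
# (e.g. stuck at 1 TB) no matter how healthy audits and uptime look.
```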

Nonetheless, your numbers look great.


I would say only Czech Republic-3 has finished vetting.
remember, your nodes need to vet on each individual satellite to get full ingress from the ones that are pushing data.

but there isn't anything unusual in vetting taking longer on satellites not pushing data… so you can have near 100% of possible ingress for vetted nodes, and then when the data switches and comes from a new satellite, you drop down to 5% or whatever…
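(back-of-the-envelope version of that; the 100-audit threshold is my understanding of what the satellites require, and the audit rates are made-up placeholders)

```python
# rough estimate of vetting time per satellite.
# assumption: a node is vetted on a satellite after ~100 successful audits
# (my understanding of the threshold; treat it as illustrative).
# the audit rates are placeholders; they scale with how much of that
# satellite's data the node holds, which is why idle satellites vet slowly.

AUDITS_TO_VET = 100

def days_to_vet(audits_per_day: float) -> float:
    return AUDITS_TO_VET / audits_per_day

print(f"busy satellite (~10 audits/day): {days_to_vet(10):.0f} days")
print(f"quiet satellite (~0.5 audits/day): {days_to_vet(0.5):.0f} days")
```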

I've got 4-month-old nodes that haven't yet been vetted on all satellites, but I think I've been making a mistake in leaving them on the same IP…
from what I can see it's simply not worth it: they vet slower, they take ingress from the older nodes, and all in all it's just a terrible idea.

Well, I'm lucky, as I have access to multiple IP addresses on different subnets. Yes, CZ1 and CZ2 are new, but tomorrow they will celebrate being 2 months old. Two satellites haven't fully vetted them; I suspect those aren't pushing as much data.

To be honest, I don't mind that much. What I need is for CZ1 and CZ2 to reach 5 TB each within 9 months, or close to that, so that they will be fully self-sufficient, because then they will be able to cover the costs of running them.

yeah, ingress does seem to be getting more and more scarce as the months go on… I think all the crypto hype might have created an influx of new SNOs, and then maybe a few other factors on top.

hopefully it will at least keep at the 500 GB of ingress per month… that's not too bad…
and ofc then the deletions are a bit less… but I doubt we will be at 100% deletion-to-ingress amounts forever… it has to be a phase,
or an artifact of the storjlabs test data… I mean, a node with 1 TB of data getting 500 GB of ingress and 500 GB of deletions… that's basically just uploading data to delete it again…

but that's what tests are for… maybe they are optimizing the deletion methods.

I know there have been some issues with that in the past…


Also take into account the new planned Reed-Solomon scheme that is supposed to reduce the redundancy requirements. As I read it, this will lead to less ingress and less required storage capacity.
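(for a feel of the numbers: with Reed-Solomon a file is split into k pieces and expanded to n, so the network stores n/k times the raw data; shrink that ratio and every client upload produces less ingress and stored bytes for SNOs. the (k, n) values below are made up for illustration, not the actual current or planned parameters)

```python
# illustrative Reed-Solomon expansion-factor comparison.
# the (k, n) pairs below are invented for the example; the real current
# and planned parameters may differ.

def expansion_factor(k: int, n: int) -> float:
    """a file split into k data pieces and expanded to n total pieces
    occupies n/k times its raw size across the network."""
    return n / k

current = expansion_factor(29, 80)   # hypothetical "current" scheme
planned = expansion_factor(29, 64)   # hypothetical reduced-redundancy scheme

print(f"current: x{current:.2f} stored per byte uploaded")
print(f"planned: x{planned:.2f} stored per byte uploaded")
print(f"ingress/storage drop: {(1 - planned / current) * 100:.0f}%")
```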


I see satellite dishes in our future… or wait, is it… extraterrestrial?

anyways, sounds kinda cool… will be interesting to see if the performance picks up…
sucks that we get less data, but that's all relative… less waste means more client data, which means the current pay scheme can last longer…

so most likely not all bad.


The usage/repair ratio appears to be returning to normal… It was 1:5 at points, but now it's less than 1:2.

For those that haven't gotten updated to … v1.25.x and are still on v1.24.x:
it seems that the ingress limit might have been turned on last night…

after updating to v1.25.x, it can be seen pretty clearly that ingress picks up again after the update.
dunno exactly that that was the case, but I was checking the allowed versions a few days ago, and now that my ingress has dropped to basically zero, I rechecked the version minimum and it's v1.24…
so I'm not sure what else to think…
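(for anyone who wants to check this themselves: the allowed versions can be read from storj's public version server. the sketch below assumes https://version.storj.io is still the right endpoint, and just dumps whatever JSON it returns, since the exact layout has changed over time)

```python
# sketch: fetch the currently allowed/suggested versions from storj's
# public version server. assumes https://version.storj.io is still the
# right endpoint; the exact JSON layout has changed over time, so this
# just pretty-prints whatever comes back.

import json
import urllib.request

with urllib.request.urlopen("https://version.storj.io") as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))
```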

I was kinda hoping to jump directly to v1.26.x,
which is like a day or whatever from releasing on docker; it's been 16 days since the last one was released…

so maybe it's just late and all this is automatic… but it's still kinda lame that one cannot skip a single version without getting punished.

anyways, wanted to inform those of us who can only do manual updates.

Given I have 11 TB of space ready to take data, my recent ingress isn't as good as @SGC's.

I think the fact you are vetted on Europe north is making all the 20x difference.

most likely your node isn't fully vetted yet on the satellites currently pushing data.
this node is about 13-14 months old; I think any node younger than 5-6 months doesn't seem to be vetted on the active satellites.

but with the current ingress being pushed from those until-recently inactive or nearly inactive satellites, nodes will actually get vetted for them in a much shorter time.
and like before, it only needs to be done once.

and my node is also running on 1 gbit synchronous fiber, with a 1 TB SSD read cache and write cache and multiple HDDs, running on a server, so it's near the peak of what one would expect. ofc with the low traffic these days almost no systems are stressed, so there's not really any advantage presently.

There is still hope…


Yes, I am still waiting for vetting on Europe north… @SGC, can you share your Europe north ingress? Cheers

the end of the 12th and the beginning of the 13th are basically a bust, because I updated to v1.25 just 2h 53m ago.
on top of that, I was about 10% behind for the first 5 days of the month due to my experimenting with vetting nodes on the same IP, but that was also a bust / bad idea, so the node is now alone on the IP.
so some can have 5-10% more than this…
[screenshot: Europe north ingress graph]

LOL… given all your testing, you're still 35x more… I thought a vetted node was only 20x more than an unvetted node, not 35x!! Thanks

it doesn't exactly work like that…
unvetted nodes get 5% of the total ingress to the network, which is then split between all the unvetted nodes.

so the ratio will differ depending on how many nodes are being vetted at the time.
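(a toy model of that, with placeholder counts, just to show how much the ratio moves with the size of the unvetted pool)

```python
# toy model: ~5% of network ingress goes to the unvetted pool (per the
# above), the rest to vetted nodes. all counts here are placeholders.

total_ingress_gb = 10_000   # hypothetical network ingress for some period
unvetted_share = 0.05
vetted_nodes = 10_000       # placeholder

per_vetted = total_ingress_gb * (1 - unvetted_share) / vetted_nodes

for unvetted_nodes in (500, 2_000, 10_000):
    per_unvetted = total_ingress_gb * unvetted_share / unvetted_nodes
    print(f"{unvetted_nodes:>6} nodes vetting -> "
          f"a vetted node gets {per_vetted / per_unvetted:.1f}x an unvetted one")
```

so a 20x or 35x gap says as much about how many new nodes are vetting as it does about your own node.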

and since Europe north has been idle for all of this year, all nodes created since then need to get vetted, which will slow everything down a bit.

usually vetting for a satellite takes from a week to 2 months, if it's pushing data…
Europe north seems to be pushing a good bit atm… so you should be vetted in a month or so, give or take a week… I would suspect.
ofc that would also depend on your current state of vetting for Europe north.