Bandwidth utilization comparison thread

Well, if one wants to do exabyte storage, then one needs to know why v2 didn't work, and the only way to do that, if one doesn't completely understand it, is to run an experiment… so it was most likely test data, and I think they also offered free storage for a while.

So I'm not sure it's a real problem, but it sure would be nice if the stored data were 6x… of course at present that would mean my storage was full… and then some…

And really, thinking back to when I was running a v2 node just before v3 was announced, I think there were about 1,600 nodes…
You're trying to tell me that 1,600 nodes stored 150 PB? Something seems off in those numbers…
Of course I was away a long time, and apparently v2 didn't get shut down like I thought it did…

Even now we only have 12,000 nodes. Is that reasonable to store 150 PB? It would be about 12.5 TB each, which again seems unlikely, since most nodes will be running off a single dedicated HDD, and I doubt they are all top tier.
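A quick sanity check of the arithmetic above, a rough sketch using the thread's round numbers rather than measured values:

```python
# Back-of-the-envelope: if 150 PB were spread evenly across 12,000 nodes,
# how much would each node have to hold?  Both figures are the thread's
# rough estimates, not measurements.
stored_pb = 150        # claimed network-wide stored data, in PB
node_count = 12_000    # rough current node count

per_node_tb = stored_pb * 1000 / node_count  # PB -> TB, split evenly
print(f"{per_node_tb:.1f} TB per node")      # prints: 12.5 TB per node
```

About 12.5 TB per node, which is indeed a lot to ask of the average single-HDD setup.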

On another note, I continued my "vetting node on a vetted node's IP" experiment, with a slight modification.
To summarize: from the 1st of the month to the 5th, I ran two nodes which are near mirrors of each other, with a normal deviation in ingress of less than 1%
(more like 0.25% when looking at the full-month graph).
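For reference, one way the deviation figures in this experiment could be computed; the exact formula is my assumption, since the post doesn't say how it was measured:

```python
# Relative deviation of two nodes' ingress totals, expressed as a fraction
# of the larger total.  Assumed formula; mean-based variants also work.
def deviation(a_gb: float, b_gb: float) -> float:
    return abs(a_gb - b_gb) / max(a_gb, b_gb)

# e.g. two near-mirror nodes (illustrative numbers, not the actual totals):
print(f"{deviation(100.0, 99.5):.2%}")  # prints: 0.50%
```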

Step 2: now that I had a baseline, I removed the node being vetted on node 1.
After 3 days their deviation was 4% on the monthly total, in favor of the node being alone on its IP.
Checking up, I found my vetting node was already vetted on US2, a satellite with basically no data.

But it would still take a significant share of the vetted node's ingress. So this wasn't clearly proving the point I was trying to make, even though for all practical purposes it does show that vetting nodes on existing vetted nodes' IPs simply isn't worth it…

Alas, realizing my experiment was mildly flawed, I reset…
kinda… I put a brand-new node on node 1's IP address

and removed the node that was vetted on US2 from node 2's IP address.
This was done on the 8th, so about 72 hours ago.
The starting deviation between the nodes, in favor of node 1, was 2.9 GB.

node 1: unvetted + vetted

node 2: vetted, alone on its IP

Their deviation is now down to 2.13 GB and has been decreasing…
because I switched the unvetted node from node 2's IP to node 1's IP.

The vetting node does seem to get more ingress than the vetted node on the same IP loses.
But we will see how long that lasts… that's pretty good ingress from day 1…
Not sure what that is about.

Unvetted node on node 1's IP, created 73 hours ago:

For now the numbers seem to say it might be worth it… but still, even at this 7.35 GB ingress, the 15% is basically just stolen from the older node… I know it's not super accurate yet… there should be clearer numbers in another 3 days… ingress is so dead right now.
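The "stolen ingress" idea can be sketched like this. Only the 7.35 GB figure is from the post; the subnet total is a made-up illustrative number chosen so the new node's share comes out to the 15% mentioned:

```python
# If a /24 subnet gets a roughly fixed ingress budget, a second node's
# intake comes straight out of the first node's share.
subnet_ingress_gb = 49.0   # assumed total subnet ingress for the period (illustrative)
new_node_gb = 7.35         # ingress taken by the new unvetted node (from the post)

old_node_gb = subnet_ingress_gb - new_node_gb
share_taken = new_node_gb / subnet_ingress_gb
print(f"old node keeps {old_node_gb:.2f} GB; new node took {share_taken:.0%}")
# prints: old node keeps 41.65 GB; new node took 15%
```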

Dude, your numbers look great. I have nodes that go like this: all of them finished vetting, some are older than others, and all are on different IPs and different subnets. The numbers are valid from the beginning of this month to this day:

Slovakia-1
Ingress: 53.4 GB
Egress: 7.22 GB

Czech Republic-1
Ingress: 31.6 GB
Egress: 4.39 GB

Czech Republic-2
Ingress: 25.4 GB
Egress: 2.6 GB

Czech Republic-3
Ingress: 110.7 GB
Egress: 12.0 GB

The only issue with "Czech Republic-3" is that the ingress is approximately equal to the deletion rate, hence it isn't growing and is stuck at 1 TB. I'm troubleshooting it currently, but I have no idea why it is happening, because no audits fail and uptime is nearly 100%.

Nonetheless, your numbers look great.


I would say only Czech Republic-3 has finished vetting.
Remember, your nodes need to vet on each individual satellite to get full ingress from the ones that are pushing data.

But there isn't anything unusual in vetting taking longer on satellites that aren't pushing data… so you can be at near 100% of possible ingress for vetted nodes, and then when the data switches and comes from a new satellite, you drop down to 5% or whatever…

I've got 4-month-old nodes that haven't yet been vetted on all satellites, but I think I've been making a mistake in leaving them on the same IP…
From what I can see it's simply not worth it: they vet slower, they take ingress from the older nodes, and all in all it's just a terrible idea.

Well, I'm lucky, as I have access to multiple IP addresses on different subnets. Yes, CZ1 and CZ2 are new, but tomorrow they will celebrate being 2 months old. Two satellites haven't fully vetted them yet; I suspect those aren't pushing as much data.

To be honest I don't mind that much. What I need is for CZ1 and CZ2 to reach 5 TB each within 9 months, or close to that, so that they will be fully self-sufficient, because then they will be able to cover the costs of running them.

Yeah, ingress does seem to be getting more and more scarce as the months go on… I think all the crypto hype might have created an influx of new SNOs, and then maybe a few other factors on top.

Hopefully it will at least keep at the 500 GB ingress per month… that's not too bad…
And of course the deletions are a bit less… but I doubt we will be at a 100% deletion-to-ingress ratio forever… it has to be a phase,
or an error in the Storj Labs test data… I mean, a node with 1 TB of data getting 500 GB ingress and 500 GB deletions… well, that's basically just uploading data to delete it again…

But that's what tests are for… maybe they are optimizing the deletion methods.

I know there have been some issues with that in the past…


Also take into account the new planned Reed-Solomon scheme that is supposed to reduce the redundancy requirements. I read that this will lead to less ingress and less required storage capacity.
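For context: with a (k, n) Reed-Solomon erasure code, a file is cut into k data pieces and expanded to n total stored pieces, so the expansion factor is simply n/k. Lowering n for the same k means fewer stored bytes per uploaded byte. The parameter values below are purely illustrative, not Storj's actual old or new scheme:

```python
# Expansion factor of a (k, n) Reed-Solomon scheme: stored bytes per
# original byte.  The example parameters are hypothetical.
def expansion_factor(k: int, n: int) -> float:
    return n / k

old = expansion_factor(29, 80)   # hypothetical current parameters
new = expansion_factor(29, 64)   # hypothetical reduced-redundancy parameters
print(f"old: {old:.2f}x stored per byte, new: {new:.2f}x")
```

With any such reduction, every client upload produces fewer stored bytes network-wide, which is exactly why SNOs would see less ingress and less stored data.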


I see satellite dishes in our future… or wait, is it… extraterrestrial?

Anyway, it sounds kinda cool… it will be interesting to see if the performance picks up…
It sucks that we get less data, but that's all relative… less waste means more client data, which means the current pay scheme can last longer…

So most likely not all bad.


Usage/repair ratio appears to be returning to normal… It was 1:5 at points, but now it's less than 1:2.

For those that haven't gotten updated to v1.25.x and are still on v1.24.x:
it seems that the ingress limit might have been turned on last night…

After updating to v1.25.x, it can be seen pretty clearly that ingress picks up again after the update.
I don't know for certain that this was the case, but I checked the allowed versions a few days ago, and now that my ingress has dropped to basically zero, I rechecked the minimum version and it's v1.24…
so I'm not sure what else to think…

I was kinda hoping to jump directly to v1.26.x,
which is like a day or whatever from releasing on Docker; it's been 16 days since the last one was released…

So maybe it's just late and all of this is automatic… but it's still kinda lame that one cannot skip a single version without getting punished.

Anyway, I wanted to inform those of us who can only do manual updates.

Given I have 11 TB of space to take data, my recent ingress isn't as good as @SGC's.

I think the fact you are vetted on Europe North is making all the 20x difference.

Most likely your node isn't fully vetted yet on the satellites currently pushing data.
This node is about 13-14 months old; any node younger than 5-6 months doesn't seem to be vetted on the active satellites.

But with the current ingress being pushed from those until-recently inactive or nearly inactive satellites, nodes will actually get vetted for them in a much shorter time.
And like before, it only needs to be done once.

And my node is also running on 1 Gbit synchronous fiber, with a 1 TB SSD read cache and write cache and multiple HDDs, running on a server, so it's near the peak of what one would expect. Of course, with the low traffic these days almost no systems are stressed, so there's not really any advantage presently.

There is still hope…

1 Like

Yes, I am still waiting for vetting on Europe North… Can @SGC share your Europe North ingress? Cheers

The end of the 12th and beginning of the 13th are basically a bust, since just 2 h 53 m ago I updated to v1.25.
On top of that, I was about 10% behind for the first 5 days of the month due to experimenting with vetting nodes on the same IP, but that was also a bust / bad idea, so the node is now alone on its IP.
So some can have 5-10% more than this…

LOL… Given all your testing, you are still 35x more… I thought a vetted node got only 20x more than an unvetted node, not 35x! Thanks

It doesn't exactly work like that…
Unvetted nodes get 5% of the total ingress to the network, which is then split between all the unvetted nodes.

So the ratio will differ depending on how many nodes are being vetted at the time.
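That split can be modeled in a few lines. Only the 5% unvetted share is from the explanation above; the node counts and total ingress are made up for illustration:

```python
# Toy model of the described split: unvetted nodes share 5% of network
# ingress, vetted nodes share the remaining 95%.  Counts are illustrative.
UNVETTED_SHARE = 0.05

def per_node_ingress(total_gb, vetted_nodes, unvetted_nodes):
    vetted = total_gb * (1 - UNVETTED_SHARE) / vetted_nodes
    unvetted = total_gb * UNVETTED_SHARE / unvetted_nodes
    return vetted, unvetted

v, u = per_node_ingress(100_000, vetted_nodes=10_000, unvetted_nodes=10_000)
print(f"vetted: {v:.2f} GB each, unvetted: {u:.2f} GB each, ratio {v/u:.0f}x")
# prints: vetted: 9.50 GB each, unvetted: 0.50 GB each, ratio 19x
```

Halve the number of unvetted nodes and the ratio drops to roughly 10x, which is why the vetted-to-unvetted ratio people observe moves around so much.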

And since Europe North has been idle for all of this year, all nodes created since then need to get vetted on it, which will slow everything down a bit.

Usually vetting for a satellite takes from a week to 2 months, if it's pushing data…
Europe North seems to be pushing a good bit atm… so it should be vetted in a month or so, give or take a week, I would suspect.
Of course that will also depend on your current state of vetting for Europe North.

Just to give you some extra data: I have two nodes behind the same IP.
The 1st one is 7 months old:

The 2nd one started on March 1st:


Node vetting on a vetted node's IP vs. a solo node on its own IP.

Getting the same results…
Now 5 days in, the 3 GB ingress difference has been basically negated, so both nodes are back to even… I'll leave it running for the rest of the month and do a couple more updates on the numbers…
but this is pretty clear, IMHO.

node 1: vetting + vetted node sharing an IP

node 2: solo vetted node

And double-checking that the node hasn't vetted all of a sudden…

Alas, the difference remains… nearly 3 GB, or 2.5 GB out of 30 GB ingress, so maybe slightly less than I was seeing with the other, slightly vetted node… but pretty close to the same… this is also an only-5-days-old node.

So maybe a 5% loss of ingress, and it seems to be accelerating, since the first 3 days showed less than a GB of difference and now it's up to 2.5 GB. I suspect in a few days I will be seeing around the 10% I have kinda been seeing across the board when I vet nodes on the same IPs.

And 10% less ingress on older nodes simply isn't worth it either… I dunno that it's really 10%, though… it's just the approximation I started with, because that was the difference I was seeing on my node against others when they posted their ingress…

But clearly it's already up to like 5% in 5 days, so it would be at least above that… the first days are notoriously slow on a new node, so that most likely offsets the results a bit… but I'm leaving it running and taking stock in a week and then at the end of the month; I don't want to leave it on my node longer than that…

It's my 2nd-oldest node, so ingress loss on that hurts a bit…

Oh, and yeah, let's see the numbers for the vetting node… see if that has the ingress… 2.5 GB.
Yeah, looks like it… not sure what the initial spike was… it also didn't seem to offset the test…
But after it stabilized, it seems the ingress lost from the vetted node is basically what is coming in on the vetting node.

It may still be that vetting nodes get 5% of the total network ingress, on top of whatever ingress they usually get… I disregard the initial spike as a deviation from whatever…
Alas, that data is subtracted from the ingress of the other nodes on the subnet…

Which is only fair, of course… else one would get more data by vetting nodes on the same IP as vetted nodes :smiley: just saying… so basically I'm saying:

a vetting node and a vetted node together get the same total ingress as a single node would, when working in stable conditions… like, not on the first day.

I can't prove that, because I switched them around, and I refuse to redo the experiment just to verify that subnets share data evenly; that's been the result every time…
and this seems to indicate that it's the same again…

Which is good… :smiley: no way to game the system.

Does someone know what's up with the names of the satellites?