Bandwidth utilization comparison thread

you forget the market reaction on the SNO side. The more popular Tardigrade gets, the more attractive it becomes to be a SNO, because the more ingress you get, the more money you make.
So as Tardigrade's popularity grows, the number of SNOs will rise too, and therefore the ingress per node will drop. It might still rise, but not as much as the network-wide ingress.

2 Likes

i think there will be an equilibrium point, where existing SNOs will be established enough and distributed enough that new SNOs will have more difficulty entering the network for profit.

usual supply and demand stuff… ebbs and flows
in active periods we will see lots of new SNOs, and during ebbs many will quit because the profits might not be what they were hoping for; also, as data storage becomes cheaper, our payouts per TB will ofc be dropping.

i doubt it will look much different from how the datacenter business started and grew, ofc just in a much more distributed way.
today it would be quite difficult to get into the datacenter market, because profit margins are kinda tight and the services provided require some fairly complicated, well-maintained infrastructure.

but it will certainly be interesting to see how it goes, that's for sure…
all we can do is guess and project from our limited information / perspectives.

1 Like

That’s only if you have SMR drives lying around though.

There were some improvements to go easier on HDDs, but if ingress goes beyond what an SMR drive can cope with (especially with the tiny files Storj uses), the node's dead.
The only way to deal with that would be for the node to automatically refuse incoming pieces when the stack of files waiting in RAM to be written to disk goes beyond a certain threshold, but AFAIK nothing is implemented to handle this.
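AFAIK no such throttle exists in the node software today, so purely as a hypothetical sketch (all names here are made up for illustration): the idea would be to track the bytes queued in RAM awaiting disk writes and start refusing new pieces above a cap.

```python
# Hypothetical backpressure sketch (NOT actual storagenode code):
# refuse new uploads once too many bytes are queued in RAM
# waiting to be flushed to a slow (e.g. SMR) disk.
import threading

class WriteQueueGate:
    def __init__(self, max_queued_bytes: int):
        self.max_queued_bytes = max_queued_bytes
        self.queued_bytes = 0
        self.lock = threading.Lock()

    def try_accept(self, piece_size: int) -> bool:
        """Return True if the piece fits under the RAM threshold."""
        with self.lock:
            if self.queued_bytes + piece_size > self.max_queued_bytes:
                return False  # refuse; the uplink can pick another node
            self.queued_bytes += piece_size
            return True

    def flushed(self, piece_size: int) -> None:
        """Called once the piece has actually been written to disk."""
        with self.lock:
            self.queued_bytes -= piece_size

gate = WriteQueueGate(max_queued_bytes=256 * 1024 * 1024)  # 256 MiB cap
print(gate.try_accept(100 * 1024 * 1024))  # True: fits under the cap
print(gate.try_accept(200 * 1024 * 1024))  # False: would exceed the cap
```

Since uploads are raced across many nodes anyway, a refused piece would simply land on a faster node instead of drowning this one.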


But thx, @kevink @SGC, I think you guys raised valid points.

2 Likes

i avoid SMR like the modern plague :smiley:
yeah, well, another good reason to have plenty of RAM, tho my RAM usage seems rather unstable. but now i finally got my main node moved into a container, so i've been trying to monitor and compare RAM utilization between it and the new nodes… and there sure does seem to be quite a difference, even if it respects limited-memory states…
it does like to spike from time to time, even without any apparent cause…

but with 15 HDDs i suppose there is always something not running as it is supposed to.

finally got my pfSense working… turned out that my paravirtualized NIC drivers caused some weird problem where the LAN machines could see the DHCP server, get IP addresses, and route traffic around, but pfSense wouldn't route internet traffic to the LAN…
it seemed to work: pfSense had internet, could update and ping stuff… the LAN was also able to do DNS and get public IP addresses returned, but no data.

so weird, and being new to pfSense i ofc assumed i had some kind of settings wrong, until i had basically tried everything… resorted to the experts and then kept testing, until eventually i got so far out that i changed the NICs to virtual NICs, and then it just sprang to life without me doing basically anything at all post-install… took 20 minutes or so afterwards… lol, 18 hours of downtime, tsk tsk

i need to up my game if i don't want to get close to a suspension… up to 3, maybe close to 4, days of downtime over the last month now, i think… all this ISP and local network infrastructure reorganization is just hell…

VLANs rule tho, if one likes or needs complex network setups with limited hardware and cabling.
the worst part about the new setup is the added latency… i don't like that one bit…
got an additional 15-17 ms on local routing, but it works… so i'll take that hit for now.

ofc maybe that’s not to bad, after all it goes from google, through isp-wan, out the fiber modem, coverted into running over the wall sockets( didn’t have enough cables) into a vlan able switch, 40 meters to the server, hits vmbr which a vm pfsense is connected to with emulated nics in both directions, which shares / routes the ports over its lan to the storagenodes and then all the way back again… LOL

so i guess 15 ms isn't too terrible; also, the wall-socket EoP (Ethernet over power), or whatever it's called, is just 100 Mbit :smiley: so that sure doesn't help the latency, as each packet takes 10 times longer to put on the wire than it would at 1 Gbit.
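For what it's worth, the 10x factor is serialization delay (how long a packet takes to get onto the wire), not the link's carrier frequency; a quick back-of-the-envelope check:

```python
# Serialization delay for a full-size 1500-byte Ethernet frame
# on a 100 Mbit/s link vs a 1 Gbit/s link.
FRAME_BITS = 1500 * 8  # 12,000 bits

for name, rate_bps in [("100 Mbit/s", 100e6), ("1 Gbit/s", 1e9)]:
    delay_us = FRAME_BITS / rate_bps * 1e6  # seconds -> microseconds
    print(f"{name}: {delay_us:.0f} us per frame")
# 100 Mbit/s: 120 us per frame
# 1 Gbit/s:   12 us per frame
```

So per frame the difference is ~108 microseconds, which is real but tiny next to the 15 ms the powerline hop adds overall.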

haven’t actually tested it, but the gamers on my network always complain when i try to squeeze them onto a 100mbit internet :smiley: and it sort of makes sense… think i looked it up…also atleast the frequency thing…

I have tried Ethernet over power, but mine were 1 gig and still slower than my wireless. The way they designed my house, the fiber comes in at the basement and I have no way to get my 10-gig network upstairs, so I've tried pretty much everything.

My powerline connection is rated at 1300 Mbit and works very stably.

old adapters i found at a flea market a couple of years ago, and tho they aren't exactly perfect, they are just such a lifesaver when wifi has problems with walls and such.

i absolutely love this solution… no doubt it adds latency and isn't perfect, but i think this is like the 5th or 6th time i've used them to make my life easier, plug and play :smiley:

paid $4 for the pair, along with a PoE injector that i haven't used.
the Ethernet over power tho seems pretty good… tho they also have issues with going through fuse boxes and surge protectors; they need some raw copper without too much junk in between.
i can get them to do 30 meters or so and still have a working connection…

used them to run internet for the tv for a while, but then took them out and pulled a cable in one of my many attempts to solve a problem with the computer when live streaming… nothing concrete, just randomly figured it was bandwidth or the microwave or some other disturbance affecting it…

turned out they were fine… i just assumed they were the most likely point of failure, because i was unfamiliar with them.

didn’t help… tsk tsk, even my STP cable didn’t fix the issue… next up i think is to try and take the computer / AIO apart and try to clean the cooler, tho ti doesn’t seem to be overheating, but it’s done something similar before… so maybe thats whats going on.

so the EoP adapters just bounce around to wherever there is need for a cable… if they will work… often it's not possible, but sometimes it's just a beautiful solution.

i should look into getting some better ones…

14 days left on my old internet subscription, and then i will no longer have use for two cables anyway… so it's a nice patch while i've got two internet connections to route to my server, which is like 60 cable-meters away from where the fiber comes in…

did you solve the problem? infiniband is fairly affordable and long-range, being fiber; it's what i would have pulled today instead of being stupid and pulling a 40 Gbit-rated STP cable to my server room. i knew fiber was better, but i figured it would be much more problematic and more expensive… seems i was very wrong on that… and ofc then there is the whole debacle with overvoltage travelling from the network into the server room, so eventually it will have to become infiniband… but for now i work with what i've got…

10GBASE-T gets expensive so quickly.

No, I never solved the problem. I don't want to cut any holes in my house to run wires, so I just keep all my fastest connections in my server room. I have one phone cable that I converted to Cat5; I can't run 10 gig through it, which I wish I could. The main issue is that my house is 3 stories and wireless is non-existent on every floor unless I have an access point; the way they insulate new houses, wireless signals can't actually go through each floor. So unless I were to run a fiber cable from the basement to my 3rd-floor office, I'll never have a 10-gig connection upstairs, which kinda sucks, but I have learned to just deal with it. The developers should have put Ethernet in every room in 2015; makes no sense why there isn't.

how much data you can push through Cat5 depends a lot on the length of the run… but i don't suppose you have tested how high it will go… Cat5 can do 1 Gbit if memory serves; not sure if you could get 10 Gbit over only 10 or so meters of cable… lots of factors come into play.

but it being an old phone cable doesn’t really bode well

ofc depending on how the phone cable was pulled, one might be able to use it as a guide cable to pull a new cable into the wall without too much work… ofc that runs the risk of just ending up with no cable at all… so… :smiley:

generally i want 10 Gbit, but my actual need for it is so limited at present.
been thinking of making a 10 Gbit backbone for my infrastructure tho… seems to be the most sensible solution.
10GBASE-T is just so expensive, apparently because of the signal processing required; it will get cheaper… but never near the levels the fiber technologies offer, i guess…

i kinda have the same problem; not sure why my wifi signals have so much trouble… but they do.
especially the faster modern ones… feels like they can be blocked by a piece of paper sometimes.

what i ended up doing was making an arm of wireless on the network: connected at one end, then just a few wifi extenders in sequence forming a mesh, so they overlap and share the same SSID, making wifi accessible everywhere, even tho the latency isn't perfect. not sure how well that would work in a city either… wifi in the city is kinda hell; it has to be so loud not to be drowned out.

the solution works pretty well; basically the signal is sent like a beam up through a floor, because it cannot go through the masonry on the bottom floor, but since the top is mostly wood frames and fiberboard and such, it can easily punch through.

ofc that's only like 1 Gbit wifi… not sure how fast one can get these days, but it's not really that old…
but yeah, networking and older buildings without proper cable conduits or whatever they are called… it's hell

The phone cable in my house isn't some old cable; it's actually a Cat5 cable, they just wired it up for a phone instead of Ethernet. But the problem is its location: it's in my master bedroom, which is nowhere near my office. I could run another cable to my office, obviously; I mean, I can still do this. It just won't be a 10-gig line. I think only Cat6 supports a 10-gig line, if I remember correctly.

true, there may be some hard limits on cable capability… i'm pretty unaware of the actual limits of the cables; i just know that as a rule of thumb, the shorter the run, the higher you can usually push the cables… even if they aren't in theory rated for the throughput. but like with everything like this, it's a raffle…
if the cable was bent too roughly a few times, it might not be doing any magic tricks.
ofc it might also depend on whether it's a proper wall cable or just a regular patch cable… the stranded kind is nice and flexible and performs quite well, but the solid-copper ones will give a much better connection, until they are mistreated…

Cat5e seems to be enough; i think people say 10-20 meters and it should work basically every time on a non-mistreated cable… solid-core cables, i guess they are called (in some parts of the world). the site there claims 45 meters, and tho i'm sure that may be true for well-maintained and properly installed datacenter setups… much less is a bit more realistic… but it seems it can certainly be done…

could most likely also get Cat5 to do that, just over a shorter distance, but dunno… :smiley: not my problem
and ofc one wouldn't know until one has tested the cable in some reliable way…

lol, this thread could well be named “Lounge”; it’s crazy how much digression there is in here :laughing:

Anyways, traffic has been pretty steady for the past 3 days:

Not amazing bandwidth though I must say.

2 Likes

here’s mine from yesterday:
Node 1:

Node 2:

Node 3:

No we won't. As @Pac already mentioned, such speeds wouldn't be sustainable long-term, as free node space would fill up too fast. The only way for those kinds of loads to be possible for longer than those tests would be to have more nodes on the network, which would spread out traffic more, so the load on individual nodes would not go up. Those test loads will likely be the peak, or close to it.
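That scaling argument can be sketched with made-up numbers (purely illustrative, not real network figures): if the node count grows in step with total ingress, per-node load stays flat.

```python
# Toy model: per-node ingress if traffic is spread evenly across nodes.
def per_node_ingress(network_mbps: float, node_count: int) -> float:
    # Simplification: every node receives an equal share of ingress.
    return network_mbps / node_count

# Network ingress grows 10x, but so does the node count:
today = per_node_ingress(network_mbps=10_000, node_count=5_000)
later = per_node_ingress(network_mbps=100_000, node_count=50_000)
print(today, later)  # 2.0 2.0 -> per-node load unchanged
```

Real traffic isn't spread perfectly evenly (vetting, /24 filtering, node capacity all skew it), but the proportionality is the point.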

Based on what exactly? Please show your work.

If you’ve got a chimney, run fiber up through the void space around the outside of it in the walls and you’re good to go.

just what i think, because it's rare to see anything tested to the same level it's used at in computing… like, if you test your network, you will have difficulty doing proper tests because of the geometry of it, since you cannot be everywhere at once; and tho in theory it can be tested like that, we usually just test a network by running a chunk of data across a few paths and monitoring it instead.

similar with storage: tho a datacenter will test their setup, speeds and such, they won't fill up their capacity with test data unless they are doing some kind of research, and even then it doesn't seem very likely… imo

how would one ever create more test data than the internet does? it has brought datacenters the size of cities to their knees. sure, one could in theory overwhelm a small node network, say by sending enough data over a sustained period until most of the storagenodes fill up, which would show the behavior of the network over time as capacity gets low; which leads me to my first point.

when this drowning in data happens, the nodes would ofc fill up, putting the remaining load on those that don't fill… which very well could end up maxing out their disk speeds or internet / local network bandwidth, depending on where the bottleneck turns up ofc…

but i do suspect that will differ in many cases; like, say, i'm not sure how well my ZFS / CPUs would do with regular Storj data at 120-240 MB/s.
got to thinking, i'm not sure my fiber will do 240 MB/s either, because i'm pretty sure it's only a single fiber, and i do believe single-fiber connections aren't actually full duplex (if it's even called that anymore), even tho the connection is symmetric…

i do sometimes see some fairly intensive workloads on the CPUs when really busy with Storj data, even without many VMs running

so i would say it's really down to logical deduction; that's simply how the network will react to excess data. and Storj aiming specifically at having the largest data capacity would also mean projects with incredible data storage demands might move in, or will…

i mean, one doesn't sign up with, like, the world's biggest storage service just to store family photos, even if some might… the whole point is storing stuff in many places and having it easily accessible in one "place" / platform.

but those are my views on it, i think… i'm sure there are plenty of views and concepts i haven't touched on or considered; i'd imagine it to be fairly accurate… without digging into the nitty-gritty of the Storj network / Tardigrade, instead comparing it to the known behavior of similar things when used and tested by people.

just an opinion ofc…
and really, isn't everything just opinions and stories? i find there are very few facts in reality, and most of them are relative…

i imagine it's based upon logical deduction… but that too is limited to my perspective, and thus again will be a limited perspective.

Is that so? Where I work, we usually do the opposite: we run ramp-up load tests until the system breaks, to make sure it holds way beyond the maximum load it will have to sustain.

It all depends on the nature of what you're testing. I must say a decentralized network might be a difficult one to test, but most software does not need to be brought to its knees: we just need to make sure it works up to the objectives / target performance.
I'm sure that's what Storj Labs did.

And if your argument is that in case of an unanticipated storm of usage the system could sink… Then you’re right, but that’s true for any system :slight_smile:

Get back here you dreamer! :smiley:
There are plenty of very concrete notions and facts in reality! Computer science and technologies are very good examples ^^

1 Like

i do the same in the real world, but you can't really break a network connection by stress testing it… at least not digitally… you can overload it… when it comes to the geometry and mathematics of computing and networking, some basic notions are not like other "real" things.

ofc you can "stress" test it until it breaks from the outside… and you can overload it by using it… but that shouldn't break it; if it does, it was built wrong to begin with…

from what i've learned, the fundamental nature of the universe is like a fluid, and even math is a subjective language using syntax…
sure, there are some fundamentals that we can agree upon… but there are not really as many as you might think…

i think, all in all, there are like 4 anchor points in science, and everything else only works in relation to or is derived from those… they are like fundamental ratios or something found in nature; even the speed of light is not fundamental compared to these… numbers… i forget what they are…

pretty sure they were like ratios, but it doesn't really matter… one will go mad trying to understand it anyway, so i try not to…

when one pulls on the threads of the fabric of the universe and realizes that it all just comes undone, one starts to realize that it is within that framework that our minds exist and comprehend reality.

WOOPPSSSSSSSSSSssssssssssssssssssssss… O.o

I really don’t understand where you’re getting these numbers. That would be 20TB per day. And you think speed will be the issue? At that rate the network would be at capacity before the day is over. But that will never happen. Customers have to request limit increases, so Storj Labs would be aware of this massive demand and would take that opportunity to scale up. We’d likely see surge payouts return. They may even spin up some nodes themselves to make up the difference for a while. Worst case, they would simply deny the limit increase if it’s not something the network is ready for.
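For context, the 20 TB/day figure is just the 240 MB/s rate mentioned earlier in the thread carried over a full day:

```python
# Sanity check: sustained 240 MB/s of ingress over 24 hours.
rate_mb_per_s = 240
seconds_per_day = 24 * 60 * 60              # 86,400 s
tb_per_day = rate_mb_per_s * seconds_per_day / 1_000_000  # MB -> TB (decimal)
print(f"{tb_per_day:.1f} TB/day")           # 20.7 TB/day
```
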

Logical, I like that. Again, please show your work. Last time I asked that you said it’s just what you think, but now you have a logical deduction, so I would love to see it!

Here’s a logical deduction for you:

  1. Allowing nodes to fill up in a day or even a week or month would be a death sentence for the network
  2. Storj Labs has several levers to play around with to encourage either node growth or customer growth and create a workable balance for both sides
  3. Storj Labs could spin up nodes themselves or partner with others to do so to temporarily deal with an increase in demand
  4. Storj Labs can deny limit increases as a last resort if demand far outpaces availability

With these methods they will ensure that supply and demand are always balanced, leaving sufficient available space at all times across a sufficient number of nodes. Since it is vital for the survival of the network to ensure this, we can assume they will always make sure this is the case.

Then there is one last thing…
5. Storj Labs has the unique ability to push beyond those limits safely since they can stop their own testing at any time.
This last one heavily suggests to me that that is exactly what they were doing in those tests: finding out how far you can push nodes, to determine what a safe balance between supply and demand looks like. This was tested when the network was still really small, so it didn't require internet-scale loads. But because of the decentralized nature, when the network is ready for that kind of scale, there will also be enough nodes to distribute that load. The bottleneck at that point wouldn't be node performance but satellite performance (though that too can be scaled up; it's just a bit more complicated than adding nodes).

That's what a logical deduction looks like. Even though this deduction isn't based on perfect information, you can at least see the logical reasoning behind it, and when better information presents itself you can point to the flaw in its reasoning. So feel free to poke holes in it; I'm sure it isn't bulletproof. But for now, I'm going to go with what this suggests: that the peaks we've seen during testing will likely be pretty close to the max load we'll ever see.

2 Likes