How to limit / cap bandwidth (inbound & outbound transfer speed)

Hi!

I just moved to a new house and the Internet connection is not as fast as my old one :confused:
Sometimes, my internet connection is very slow because of high traffic to/from my nodes.

So I was wondering:

  • Is it not recommended at all to limit bandwidth (I mean: cap inbound/outbound traffic)? I think my current Internet connection doesn’t let me meet the minimum upstream and downstream bandwidth requirements, but it doesn’t seem to be a problem so far. I would only reduce it when I really need to.
  • If so, how would you recommend doing it? My ISP router doesn’t let me define QoS rules. I was thinking about some Docker config or a specific Linux tool, but I’ve never done that and don’t want to reinvent the wheel.

I already checked other threads on the Storj forum, but they are quite old now and things may have changed since then.

As always, thanks for your help guys :slight_smile:


You put some piece of hardware or software between the storagenode and the internet that limits the maximum bandwidth…

Not a great solution though. Maybe you can limit the number of allowed connections instead, because that’s usually what slows down internet connections…

Think of it like this: the more connections something uses, the smaller the fraction of the full speed everything else gets…

Say the storagenode uses 50 active connections and your browser uses 2; by that logic the browser would get roughly 2/52 of the full internet/network capacity, assuming every connection is pulling or transmitting at maximum bandwidth. Connections can be passive and use next to nothing, but the active connections together make up 100% of the bandwidth, and each device’s share is its count of those connections as a fraction of the total.

connections used by the device / total number of active connections = fraction of full capacity.

This is why some applications can choke an internet connection while other, seemingly much more demanding, applications can run alongside everything else just fine even while using 100% of the bandwidth: if an application only uses one or a few connections, then when a new connection is opened it will still get 1/2, or 1/(a few), of the full bandwidth.
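The per-device share described above can be sketched as a quick back-of-envelope calculation (using the hypothetical numbers from the example: 50 node connections, 2 browser connections, and the simplifying assumption that every active connection pulls at full speed):

```shell
# Rough sketch: each device's share of the link is its connection
# count divided by the total number of active connections.
node_conns=50
browser_conns=2
total=$((node_conns + browser_conns))
# Browser's approximate share of the link, as an integer percentage
echo "$((100 * browser_conns / total))%"   # prints "3%"
```

Real links don’t divide perfectly evenly per connection (TCP fairness is approximate), so treat this as an intuition aid, not a measurement.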

So personally that’s what I would look into setting up: a limit on how many connections the storagenode is allowed to use at once. It would still get 100% of the bandwidth when there is spare capacity, and it wouldn’t slow down the internet in any meaningful way…
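One possible way to cap concurrent connections at the OS level on Linux is iptables’ connlimit match. This is a hypothetical sketch, not something from the thread: it assumes the default storagenode port 28967 and an arbitrary limit of 10; adjust both to your setup, and note it needs root.

```shell
# Hypothetical sketch: reject new TCP connections to the storagenode
# port once 10 are already open. --connlimit-mask 0 makes the limit
# global rather than per source IP. Run as root.
iptables -A INPUT -p tcp --syn --dport 28967 \
  -m connlimit --connlimit-above 10 --connlimit-mask 0 \
  -j REJECT --reject-with tcp-reset
```

Rejecting with a TCP reset (rather than silently dropping) lets the remote side fail fast and retry elsewhere.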


Option 1: buy a router that has QoS.
Option 2: If your node in on Linux, you can use “tc” to limit the bandwidth.
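For option 2, a minimal tc sketch using the token bucket filter (tbf) qdisc might look like the following. The interface name eth0 and the 5 Mbit/s rate are assumptions; substitute your own. Note that tc shapes egress only; shaping inbound traffic needs extra setup (e.g. an ifb device or policing).

```shell
# Sketch: cap outbound traffic on eth0 to ~5 Mbit/s. Run as root.
tc qdisc add dev eth0 root tbf rate 5mbit burst 32kbit latency 400ms

# Verify the qdisc is in place:
tc qdisc show dev eth0

# Remove the limit again:
tc qdisc del dev eth0 root
```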

Thanks!
Is there any risk in reducing bandwidth (upload/download speed) for storagenodes (other than having less traffic and thus probably lower earnings)?
I mean, may I be disqualified or suspended for using such a mechanism (directly or not)?

Thanks :slight_smile:

What’s tc?

@jeremyfritzen
No, not that I’m aware of anyway. Of course you should keep the bandwidth above the minimum designated by Storj Labs; I think it’s something like 25 Mbit down / 5 Mbit up.

Also, you will not get 100% efficiency out of your internet connection if you limit the bandwidth, which in theory could compound the issue: the storagenode would still try to push the data through, and although the transfer would be cancelled at some point, whatever was downloaded up to then is wasted.
So it ends up being a poor solution.


I already found this kind of tool but would like to have opinions from the Storj community.
I also found other tools, such as trickle or wondershaper.


Won’t work. The storage node binary is written in Go, which produces statically linked binaries, while trickle relies on dynamic linking to intercept OS calls. You need a kernel-level shaper for storage nodes; wondershaper should be fine.
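For reference, wondershaper is a wrapper around tc, so it counts as a kernel-level shaper. The invocation below is the classic script’s syntax (interface, then downlink and uplink limits in kbit/s); newer forks use flags instead (e.g. wondershaper -a eth0 -d 4096 -u 1024), so check your version’s help first. The interface and limits here are assumptions.

```shell
# Classic wondershaper syntax: limit eth0 to ~4 Mbit/s down, ~1 Mbit/s up
wondershaper eth0 4096 1024

# Remove the limits again:
wondershaper clear eth0
```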

I would use something like this instead. It does require that the OS is dedicated to the storagenode, otherwise I think you run into the same issue: even though it cannot open as many connections, it will still send the same traffic, so the congestion on the machine would be the same.

This solution only keeps the storagenode machine from spamming the network with connections, and thus creates a balanced distribution of internet capacity between networked devices, instead of the storagenode taking everything.

Also, limiting the bandwidth might in some cases make the issue worse, but yeah… it kind of works.

Remember you have to decide on the bandwidth split: do you want to go 20/80 or 50/50?
Say you give 50% to the storagenode: everything it does will take twice as long, meaning your other traffic competes with it twice as often.

You could go the other way and give 20% to the storagenode, but that could push it below the minimum requirements if the connection was marginal to begin with. And if we consider 80% for the storagenode, you would be more or less where you started, just with a little extra headroom…

So I don’t really see the advantage. The storagenode will still try to push data through, meaning more transfers get cancelled, meaning more bandwidth is wasted without any reward, and what does come through is spread over a longer time, so whatever else you are doing is affected more often…

I have used bandwidth limiting in the past, and in some cases it might work or even be a good choice, but it’s really not a great tool compared to limiting connections… which I’m sure can be its own can of worms, but so far it’s been my preferred solution for when network stuff runs unbalanced…

I really like it because it allows 100% bandwidth usage and it scales all by itself with basically no further configuration: you just limit the number of connections used, instead of cutting the bandwidth into fixed slices.

Of course, today QoS basically does the same thing, just better.

If you give the storagenode something like 10 connections it should be fine. You might be able to go lower, but that might also affect how well it works, since it does a lot of network communication. 10 connections should be enough to keep the network balanced; I wouldn’t go above 20, and only if the storagenode acted up. If it didn’t, you could maybe put it at 2-5 connections, but I doubt you can get away with that.

enjoy


Thanks!

I have a debian VM (hosted on a ESXi) dedicated to my storage nodes.
I would like to reduce the bandwidth but should I create another network interface just to split like this:

  • network interface 1: dedicated to storage nodes, with limited bandwidth
  • network interface 2: for local network purposes (just to make sure that if my primary interface is saturated because of Storj, I will still be able to connect to it via SSH without issues).

Is there a better way to limit the bandwidth for the storage node without having issues to connect to the VM?

[EDIT] On second thought, maybe I don’t need to split the network into 2 interfaces on my Debian VM :thinking:. Indeed, the bottleneck is not my local network capacity but my Internet connection. So even if my Internet connection is saturated, I should still be able to connect to my VM without issues (at least, I have never had this problem so far, except due to other issues such as high disk I/O).
Can you confirm my understanding?

Thanks for this detailed answer and different point of view!
Actually, limiting bandwidth would only be a temporary workaround. I would enable it only when I really need it, when my network is so saturated that I can’t do anything (video conferencing, streaming a movie for an hour or two, etc.).

Limiting the number of connections also seems interesting. Is it just a parameter to set in Storj, or is it a general network configuration at the OS level?

--storage2.max-concurrent-requests. It will still allow all downloads though, because limiting those might affect audits.

If the node is in Docker, then there is a virtual bridge and a virtual interface corresponding to the container. You can apply the limit there. It would affect your access to the web dashboard though, unless you use a more complex filter that only limits the Storj traffic.
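As a minimal sketch of this idea, you could shape the docker0 bridge, which carries all traffic for containers on Docker’s default bridge network. The 5 Mbit/s rate is an assumption; note this throttles everything on the bridge (including the web dashboard, as mentioned above) unless you add finer-grained tc filters.

```shell
# Sketch: cap traffic leaving the docker0 bridge to ~5 Mbit/s.
# Run as root; affects ALL containers on the default bridge network.
tc qdisc add dev docker0 root tbf rate 5mbit burst 32kbit latency 400ms

# Undo:
tc qdisc del dev docker0 root
```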


It’s basic TCP/IP configuration, so limiting the number of connections at the OS level on the storagenode VM would, imo, be an easy solution. If it runs fine with, say, 10 or fewer connections, I doubt you will ever have to turn it on or off; everything should just run smoothly after the limit is set…

I added a link to what looks like the way to configure it in Linux. I haven’t tried setting it up in Linux myself; I’ve mainly been using Windows for 98% of my time working with computers, but the concept and protocol are the same, so it’s not really OS dependent.
I’m just not familiar with the exact way to implement it in Linux, so the method I linked might not be correct, but it sounds like he figured it out.

Else I’m sure there are some very detailed guides out on the interwebz; it should be very straightforward once you know where to put in the number. :smiley:

Thanks!
I don’t get your point when you are saying:

It will still allow all downloads though, because limiting those might affect audits.

It would be pretty bad if you denied an audit connection, so you shouldn’t block download connections.

OK. And by using --storage2.max-concurrent-requests, is there a risk that I deny an audit connection?

It’s designed to allow all downloads. Only uploads are controlled by this switch.

Thanks.

So… concretely, which configuration would you recommend in my situation?
Adding “--storage2.max-concurrent-requests=10” at the end of my docker run command?

10 is a little low. Healthy routing should deal with tens of thousands of concurrent connections…

OK.
MMmm, 2 questions come to my mind:

  1. You said that “--storage2.max-concurrent-requests” only controls uploads. Is that really what I should do if my connection is saturated by Storj? Wouldn’t it be better to control downloads?
  2. To start with, at what level should I set this parameter (since you’re saying 10 is far too low)?

thanks