How to limit / cap bandwidth (inbound & outbound transfer speed)

it’s also why I don’t think limiting the bandwidth is a great solution: even at an 80% limit it would still be able to take 50% at a minimum, so when you are working or watching videos it will still be uploading at 50% of your bandwidth capacity, and on a bad DSL line that would reduce your download to 50% as well. and because the upload bandwidth is capped, the same uploads will just happen over 20%-longer time periods.

so though it’s a solution, it’s not a good solution, and though it may make your internet seem to run better, overall you will be moving less data through it…

so it’s a temporary patch at best imo.
ofc making a slow internet connection fast is the dream.

Can you not use ESXi to limit the bandwidth per VM? I know you can on Proxmox and Unraid.

I don’t know. I’m using the free version of ESXi and don’t know whether this feature (if it exists) is included. If anyone knows how to do it, I suppose it would be an interesting option :slight_smile:

I’m not sure which version you’re using. I used 6.0 a few years back, but I never needed to limit the bandwidth; I’m assuming they can all support it.

Thanks!

I have ESXi 6.0 but didn’t find some of the options described in the tutorial. Anyway, I’ve been planning to change my hypervisor (and use Proxmox) for a while…
Even if limiting bandwidth with ESXi were better, I’d prefer to do it in the OS itself because it would allow me to script it a little (and activate the bandwidth limit only during a specific timeframe, or something like that).
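For the timeframe part, I’m thinking of something as simple as two crontab entries (the script names are just placeholders for whatever enables/removes the limit, nothing I have written yet):

# enable the limit in the morning, remove it in the evening (hypothetical scripts)
0 8 * * *  /usr/local/bin/storj-limit-on.sh
0 23 * * * /usr/local/bin/storj-limit-off.sh
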
Anyway, thanks for the tip :slight_smile:

I just tried, but it seems a little harder ^^.
It’s the first time I’ve used the “tc” command, so I looked up on the Internet how to run it to meet my needs. I didn’t find the perfect way to use it and I think I’ll need some help ^^
Here is what I tried:

  • Option 1: I tried to shape only egress traffic with this command (ens192 is my main network interface on the Debian VM) (source):
    tc qdisc add dev ens192 root tbf rate 500kbit burst 32kbit latency 400ms. It’s working well but, as I expected, it is nearly impossible to SSH to the VM after that (see the reset commands right after this list). But it’s “working” :slight_smile:
  • Option 2: Use a user-friendly script (source). I just had to set some parameters: network interface (I set “docker0”), max egress, max ingress, and IP (I tried with the IP of docker0 and the IP of my main network interface). Whatever the limits, the actual bandwidth used by the storagenodes is nearly 0, so it’s very weird.
    Just to make sure the script itself works, I also ran it against my main network interface (ens192) and it does seem to work (the same result as Option 1).
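
For reference, this is how I reset and inspect the interface between attempts (standard tc commands, nothing Storj-specific):

# remove any shaping from the interface so SSH behaves normally again
# (it prints an error if nothing is attached, which is harmless)
tc qdisc del dev ens192 root
# show what is currently attached, with packet/byte counters
tc -s qdisc show dev ens192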

From what I’ve found on the Internet, a lot of people try to limit bandwidth on Docker containers without very good results :confused:

Any idea on how to do it?

Try this script - I have adapted and simplified it from my router (though on my router it was only applied to upload):

#!/bin/bash

docker_int="docker0"
up_int="eth0"

up_speed="500kbit"
down_speed="1mbit"

node_port=12345
node_port_internal=28967

#delete all previous rules
tc qdisc del dev $docker_int root
tc qdisc del dev $up_int root

#create a htb qdisc
tc qdisc add dev $docker_int root handle 10: htb default 105
tc qdisc add dev $up_int root handle 10: htb default 105

#root class
tc class add dev $docker_int parent 10:0 classid 10:1 htb rate 1gbit ceil 1gbit burst 64k cburst 2k
tc class add dev $up_int parent 10:0 classid 10:1 htb rate 1gbit ceil 1gbit burst 64k cburst 2k


#default class - all traffic other than Storj
tc class add dev $docker_int parent 10:1 classid 10:105 htb rate 1gbit ceil 1gbit burst 64k cburst 2k
tc qdisc add dev $docker_int parent 10:105 fq_codel
tc class add dev $up_int parent 10:1 classid 10:105 htb rate 1gbit ceil 1gbit burst 64k cburst 2k
tc qdisc add dev $up_int parent 10:105 fq_codel

#Storj traffic
tc class add dev $docker_int parent 10:1 classid 10:108 htb rate 100kbit ceil $down_speed burst 64k cburst 2k
tc qdisc add dev $docker_int parent 10:108 fq_codel
tc class add dev $up_int parent 10:1 classid 10:108 htb rate 100kbit ceil $up_speed burst 64k cburst 2k
tc qdisc add dev $up_int parent 10:108 fq_codel

#filters - these will determine which class gets assigned to a packet
tc filter add dev $docker_int parent 10:0 prio 1 protocol ip u32
tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32

#Storj traffic
tc filter add dev $docker_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip dport $node_port_internal FFFF \
flowid 10:108

tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip sport $node_port FFFF \
flowid 10:108

node_port is the external node port - the one you see in the cli dashboard.
node_port_internal is the internal port and probably should be the default.

This would limit the node traffic to whatever you have set, but leave other traffic alone. The shaping gets applied to outgoing packets on an interface, so the classes on $docker_int apply to “download/ingress” while classes on $up_int apply to “upload/egress”.
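
To check that packets actually end up in the Storj class (10:108) rather than the default one (10:105), you can look at the counters - this is just standard tc output, using docker0 as the example here:

# per-class byte/packet counters - 10:108 should grow while the node transfers data
tc -s class show dev docker0
# list the filters attached to the interface
tc -s filter show dev docker0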

Ask me if something is not clear here, shaping rules are a bit complicated at first.


Thanks a lot!
I’ll try this tomorrow and let you know :slight_smile:!

Thank you.

What if I have several nodes on my VM?

Their internal ports are most likely the same, so one rule would put all of that traffic into the class. However, the external ports are different, so you will have to repeat the filter rule with the external port (the last one in my script) for each external port.

This will limit the total bandwidth to the set value, not per-node.
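
If you ever wanted a per-node cap instead, a rough (untested) sketch for the upload side would be to give each node its own class and point that node’s filter at it - 10:110, 250kbit and port 28968 below are just example values:

#per-node variant: one class and one filter per node
tc class add dev $up_int parent 10:1 classid 10:110 htb rate 100kbit ceil 250kbit burst 64k cburst 2k
tc qdisc add dev $up_int parent 10:110 fq_codel
tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip sport 28968 FFFF \
flowid 10:110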


it would certainly be simplest to set the bandwidth for the entire system instead of on each VM, because with per-VM limits, adding and removing VMs would require you to change the bandwidth on all of them…

or if one decides to give them more or less bandwidth, again you would have to make the adjustment in multiple places… that kind of stuff is always fun…

I would set up a dedicated virtual switch / network for the nodes, which I would then run through the traffic control, so I get one place to control the collective tap for all storagenodes. ofc it sort of depends on how many you have and how often you tinker with that kind of stuff…

but I know myself too well: I will regret setting it up so that I have to change stuff in multiple places, because I end up testing and changing stuff all the time, and thus I like the settings to be more manageable.

Thank you.
Just to make sure I’m not going to do something stupid:

  1. If I want to roll back and delete all these rules, can you confirm that I just have to run the following?
    tc qdisc del dev $docker_int root
    tc qdisc del dev $up_int root
  2. Regarding the internal and external ports to set:
    I have only 1 virtual machine, with Docker installed on it.
    Each node is in a Docker container with its internal Storj port (28967) mapped to a specific port of my VM.
    So, my 4 nodes are accessible through the following ports of my machine: 28967, 28968, 28969, 28970.
    Based on my architecture (which is pretty standard, I suppose), could you confirm that if I want to limit the total bandwidth of all my nodes to the set value, I just have to repeat the filter rule with the external ports (28967, 28968, 28969, 28970)?

Thank you for your help :slight_smile:

I don’t have multiple VMs - only 1 VM with all of my nodes (hosted in Docker containers).
But I agree with you: I would prefer a solution where I only have to limit the bandwidth of the Docker bridge, but it’s not that simple, as I explained above (the commands to limit bandwidth on a particular node work, but not when I apply the same rule to the docker0 bridge).

Another solution I was thinking about is to add a virtual interface in my VM.
This way, I can reduce the bandwidth of my main network interface (not the docker0) and still be able to use another interface to SSH to the VM if I need to.

I don’t think the docker bridge is used for anything when we use docker run and designate an IP address, because then the container goes directly onto your local network and that’s where the traffic flows… sure, I guess it has to go through some sort of vmbr, but when I checked my own network while docker was running, docker was using some weird IP addresses that seem fully isolated from everything else…

so you might simply be bandwidth limiting the wrong docker bridge

the docker bridge would be a nice, easy place to control it…
I guess you would use different ports anyway, since that’s the usual configuration when all the storagenodes are on 1 IP address… does it matter… can’t you just use tc on that IP, since it’s basically internal to the OS and thus should be controllable before the traffic goes out / in?

I think if you are using different ports and the same IP on all the storagenodes it might be easy to set up… but I dunno… kinda just gut-shot guessing here.

else ofc just throw it all onto one virtual NIC on the VM and basically route the internet through the VM to the docker NIC, and then you would have full control, because it’s basically all inside the OS… no virtual protection barriers separating the data.

like you already suggested…

In Proxmox it’s pretty simple to limit the bandwidth for the entire VM; here is a screenshot of it…

I know once you have set up a system it’s not always easy to make a complete conversion, but it’s pretty simple to go from ESXi to Proxmox if you needed to.
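
If I remember right, the same limit can also be set from the CLI on the VM’s network device - the rate is in MB/s, and the VM ID and bridge below are just examples:

# cap the VM's first NIC at roughly 5 MB/s (example VM ID 100, bridge vmbr0)
qm set 100 --net0 virtio,bridge=vmbr0,rate=5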

  1. Yes, this deletes the qdisc, classes and rules on the interface, putting everything back to default. If it was at the default already, it gives you an error, but nothing bad happens.
  2. Just repeat
tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip sport $node_port FFFF \
flowid 10:108

for each value of $node_port.
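
For example, a quick (untested) way to add all four would be:

for node_port in 28967 28968 28969 28970; do
tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip sport $node_port FFFF \
flowid 10:108
done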


I really want to install it, but it seems my hardware is not fully compatible with Debian (it’s an HP MicroServer Gen8; the fan always runs at 100%). But one day, I will take the plunge!

you really don’t want server fans to run at full speed if you can avoid it… those will take like 70 watts each… or mine do, anyway…
so in the case of my server that’s like $30 in power every month, while if my damn system would control the speed it would be like $15

but I solved the issue by just throwing more hardware in it, so it needed the cooling lol


Ha ha, interesting solution ^^

Anyway, I agree with you.

Well, I went with watercooling on my servers, so it’s not a real issue for me. I hate fan noise, especially in servers.
