How to limit / cap bandwidth (inbound & outbound transfer speed)

It's also why I don't think limiting the bandwidth is a great solution: even with an 80% cap, the node would still be able to take 50% of the line at a minimum, so when you are working or watching videos it will still be uploading at 50% of your bandwidth capacity. On a bad DSL line that would reduce your download to 50% as well, and because the upload bandwidth is capped, the same uploads take over 20% longer to complete.
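
To put rough numbers on it (back-of-the-envelope, assuming the node always has data queued):

upload capped at 80% of the line: the same data takes 1/0.8 = 1.25x as long, i.e. roughly 25% longer
node still taking ~50% of the line: only about half the capacity is left for your own traffic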

So though it's a solution, it's not a good solution, and though it may make your internet seem to run better, overall less data will be moving through it…

So it's a temporary patch at best, imo.
Ofc, making a slow internet connection fast is the dream.

Can you not use ESXi to limit the bandwidth per VM? I know you can on Proxmox and Unraid.

I don't know. I'm using the free version of ESXi and don't know if this feature (if it does exist) is included. If anyone knows how to do it, I suppose it would be an interesting option :slight_smile:

I'm not sure which version you're using. I used 6.0 a few years back but never needed to limit the bandwidth; I'm assuming they all support it.

Thanks!

I have ESXi 6.0 but didn't find some of the options described in the tutorial. Anyway, I've been planning to change my hypervisor (and move to Proxmox) for a while…
Even if limiting bandwidth with ESXi were better, I prefer to do it on the OS itself because it would allow me to script it a bit (and activate the bandwidth limitation only during a specific timeframe, or something like that; see the cron sketch below).
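
For example, something like this in /etc/cron.d could switch the limit on only during the day (just a sketch: limit-node-bw.sh and unlimit-node-bw.sh are hypothetical wrapper scripts that would apply and remove the limit, and the hours are arbitrary):

# /etc/cron.d/storagenode-bw (sketch): apply the cap at 08:00, lift it at 23:00
0 8  * * *  root  /usr/local/bin/limit-node-bw.sh
0 23 * * *  root  /usr/local/bin/unlimit-node-bw.sh
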
Anyway, thanks for the tip :slight_smile:

I just tried, but it seems a bit harder than expected ^^
It's the first time I've used the “tc” command, so I checked on the Internet how to run it to meet my needs. I didn't find the perfect way to use it and I think I'll need some help ^^
Here is what I tried:

  • Option 1: I tried to shape only egress traffic with this command (ens192 is my main network interface, on the Debian VM) (source):
    tc qdisc add dev ens192 root tbf rate 500kbit burst 32kbit latency 400ms. It works well but, as I expected, it is nearly impossible to SSH into the VM afterwards. But it's “working” :slight_smile: (see the snippet after this list for how to check and remove it)
  • Option 2: Use a user-friendly script (source). I just had to set some parameters: network interface (I set “docker0”), max egress, max ingress, IP (I tried with the IP of docker0 and with the IP of my main network interface). Whatever the limits, the actual bandwidth used by the storagenodes is nearly 0, which is very weird.
    Just to make sure the script works properly, I also ran it against my main network interface (ens192) and it seems to work (same result as Option 1).
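
For reference, these standard tc commands can be used to check what got applied and to remove the rule again if the VM becomes hard to reach over SSH (ens192 being the interface from Option 1):

# show the qdiscs currently attached to the interface, with statistics
tc -s qdisc show dev ens192

# delete the root qdisc (removes the tbf limit and restores normal traffic)
tc qdisc del dev ens192 root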

From looking for a solution on the Internet, I can tell that a lot of people are trying to limit bandwidth on Docker containers without very good results :confused:

Any idea on how to do it?

Try this script - I have adapted and simplified it from my router (though on my router it was only applied to upload):

#!/bin/bash

docker_int="docker0"
up_int="eth0"

up_speed="500kbit"
down_speed="1mbit"

node_port=12345
node_port_internal=28967

#delete all previous rules
tc qdisc del dev $docker_int root
tc qdisc del dev $up_int root

#create a htb qdisc
tc qdisc add dev $docker_int root handle 10: htb default 105
tc qdisc add dev $up_int root handle 10: htb default 105

#root class
tc class add dev $docker_int parent 10:0 classid 10:1 htb rate 1gbit ceil 1gbit burst 64k cburst 2k
tc class add dev $up_int parent 10:0 classid 10:1 htb rate 1gbit ceil 1gbit burst 64k cburst 2k


#default class - all traffic other than TCP ACK or Storj
tc class add dev $docker_int parent 10:1 classid 10:105 htb rate 1gbit ceil 1gbit burst 64k cburst 2k
tc qdisc add dev $docker_int parent 10:105 fq_codel
tc class add dev $up_int parent 10:1 classid 10:105 htb rate 1gbit ceil 1gbit burst 64k cburst 2k
tc qdisc add dev $up_int parent 10:105 fq_codel

#Storj traffic
tc class add dev $docker_int parent 10:1 classid 10:108 htb rate 100kbit ceil $down_speed burst 64k cburst 2k
tc qdisc add dev $docker_int parent 10:108 fq_codel
tc class add dev $up_int parent 10:1 classid 10:108 htb rate 100kbit ceil $up_speed burst 64k cburst 2k
tc qdisc add dev $up_int parent 10:108 fq_codel

#filters - these will determine which class gets assigned to a packet
tc filter add dev $docker_int parent 10:0 prio 1 protocol ip u32
tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32

#Storj traffic
tc filter add dev $docker_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip dport $node_port_internal FFFF \
flowid 10:108

tc filter add dev $up_int parent 10:0 prio 1 protocol ip u32 \
match ip protocol 6 0xff \
match ip sport $node_port FFFF \
flowid 10:108

node_port is the external node port - the one you see in the cli dashboard.
node_port_internal is the internal port and should probably be left at the default.

This would limit the node traffic to whatever you have set, but leave other traffic alone. The shaping gets applied to outgoing packets on an interface, so the classes on $docker_int apply to “download/ingress” while classes on $up_int apply to “upload/egress”.
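
To apply it, run the script as root after adjusting the variables at the top (tc rules do not survive a reboot, so it has to be re-run after a restart). To check that the node traffic actually ends up in the right class, you can watch the per-class counters (interface names as in the script above; 10:108 is the Storj class):

# the byte/packet counters on class 10:108 should grow while the node transfers data
tc -s class show dev docker0
tc -s class show dev eth0

# list the filters attached to the root qdisc
tc filter show dev docker0 parent 10:0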

Ask me if something is not clear here, shaping rules are a bit complicated at first.


Thanks a lot!
I’ll try this tomorrow and let you know :slight_smile:!