Limiting storage node egress

The amount of egress my nodes have recently been performing is getting uncomfortably close to what my Internet connection can support without negatively impacting my home usage. If my nodes suddenly start receiving high-egress pieces, it might get quite annoying. So I started thinking about the best way to control egress.

I recall there used to be a parameter within the storage node software to declare the amount of egress bandwidth available, but I also recall that it never really worked and that it got removed.

I could set up some basic traffic shaping, but I fear the effect of naïve shaping would be detrimental to the Storj network as a whole and to my earnings: it would equally split the available bandwidth over all concurrent downloads, making all of them slower and all of them at risk of being dropped, making the experience of all served customers worse. Besides, it would also pile up more and more requests, just like we observe with slow disk I/O requiring more and more RAM.

Instead, following the well-known rule that shaping is best performed directly at the source, it would probably be better to shape traffic inside the storage node code itself. I imagine an implementation where the node prioritizes transfers in the following way (a rough sketch follows the list):

  • First, transfers that have already started. If we've decided in the past to put effort into these connections, commit to them fully.
  • Then, audit/repairs in incoming order (FIFO/oldest first), but allocating only a small amount of bandwidth to them and only if they have already been waiting for at least, let's say, 30 seconds. These connections are not latency-sensitive, so deferring them should not impact customer experience.
  • Then, pending non-audit/non-repair egress requests in the reverse of their incoming order (LIFO/most recent first). If we have to pick a data transfer to start and commit to, the freshest requests are the ones least likely to end up tail-cancelled.
  • Then, pending audit/repairs without the 30-second delay. If there's still free bandwidth, let them happen quickly to free up bandwidth for future customer traffic spikes.
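
To make the ordering concrete, here is a minimal Go sketch of just the prioritization step. The types and helpers are hypothetical, it ignores the per-class bandwidth caps mentioned above, and nothing like it exists in the current storage node code:

```go
// Hypothetical sketch only: these types and helpers do not exist in the
// storage node; they just illustrate the four priority classes above.
package main

import (
	"fmt"
	"sort"
	"time"
)

type transferKind int

const (
	customerEgress transferKind = iota
	auditOrRepair
)

type transfer struct {
	id       string
	kind     transferKind
	started  bool      // bytes are already flowing on this connection
	queuedAt time.Time // when the request arrived
}

// priorityClass maps a transfer to the buckets from the list above:
// 0 = already started, 1 = audit/repair waiting >= 30s,
// 2 = pending customer egress, 3 = audit/repair that has not waited yet.
func priorityClass(t transfer, now time.Time) int {
	switch {
	case t.started:
		return 0
	case t.kind == auditOrRepair && now.Sub(t.queuedAt) >= 30*time.Second:
		return 1
	case t.kind == customerEgress:
		return 2
	default:
		return 3
	}
}

// orderQueue sorts transfers so the next one to receive bandwidth comes
// first: classes in ascending order, FIFO within a class, except customer
// egress which is served LIFO (freshest first).
func orderQueue(queue []transfer, now time.Time) {
	sort.SliceStable(queue, func(i, j int) bool {
		ci, cj := priorityClass(queue[i], now), priorityClass(queue[j], now)
		if ci != cj {
			return ci < cj
		}
		if ci == 2 {
			return queue[i].queuedAt.After(queue[j].queuedAt) // LIFO
		}
		return queue[i].queuedAt.Before(queue[j].queuedAt) // FIFO
	})
}

func main() {
	now := time.Now()
	queue := []transfer{
		{id: "egress-old", kind: customerEgress, queuedAt: now.Add(-20 * time.Second)},
		{id: "audit-waiting", kind: auditOrRepair, queuedAt: now.Add(-45 * time.Second)},
		{id: "egress-fresh", kind: customerEgress, queuedAt: now.Add(-time.Second)},
		{id: "started", kind: customerEgress, started: true, queuedAt: now.Add(-90 * time.Second)},
		{id: "audit-fresh", kind: auditOrRepair, queuedAt: now.Add(-5 * time.Second)},
	}
	orderQueue(queue, now)
	for _, t := range queue {
		fmt.Println(t.id) // started, audit-waiting, egress-fresh, egress-old, audit-fresh
	}
}
```

A real implementation would additionally cap how much bandwidth the audit/repair classes may consume, as described in the list.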

This way the started transfers would get finished potentially as quickly as without shaping (being picked up shortly after being initiated and without having to compete with many other connections), so it would be a win for at least some of the transfers. And as for the egress requests that keep being postponed by newer ones: they'll become more likely to be tail-cancelled, so at least the node won't incur the local disk I/O and data transfer overhead for them.

Also, instead of keeping a traffic-shaping counter within the storage node itself, the storage node would observe the operating system counters for the network interface, like /proc/net/dev. This way shaping would be shared across all storage nodes operating on the device, and as a nice side effect, if the operator performs other, non-Storj data transfers from that device, the storage node will make room for them in a natural way, not impacting the device's main purpose. The storage node could assume that the minimum T&C-mandated amount of 5 Mbps is always safe, of course.
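
As an illustration of that sampling idea, here is a small Go sketch that reads the transmit counter from /proc/net/dev twice and derives the current egress rate. The interface name, the interval, and the throttling note are assumptions, not anything the storage node does today:

```go
// Sketch of sampling the interface-wide transmit counter; "eth0" and the
// 5-second interval are placeholders, and the real node has no such logic.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// txBytes returns total bytes transmitted on iface as reported by /proc/net/dev.
func txBytes(iface string) (uint64, error) {
	f, err := os.Open("/proc/net/dev")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if !strings.HasPrefix(line, iface+":") {
			continue
		}
		fields := strings.Fields(strings.TrimPrefix(line, iface+":"))
		if len(fields) < 9 {
			return 0, fmt.Errorf("unexpected /proc/net/dev format: %q", line)
		}
		// The 9th value after the interface name is the transmit byte counter.
		return strconv.ParseUint(fields[8], 10, 64)
	}
	return 0, fmt.Errorf("interface %q not found in /proc/net/dev", iface)
}

func main() {
	const iface = "eth0" // assumption: the node's uplink interface
	const interval = 5 * time.Second

	before, err := txBytes(iface)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	time.Sleep(interval)
	after, err := txBytes(iface)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	mbps := float64(after-before) * 8 / interval.Seconds() / 1e6
	fmt.Printf("current egress on %s: %.1f Mbps\n", iface, mbps)
	// A node could stop accepting new transfers once this exceeds its cap,
	// while always treating at least 5 Mbps as safe to use.
}
```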

What do you think?

if that is the case, then I'll say…
the line to soak up the egress you don’t want starts behind me… lol

egress is the whole point of hosting a node and what really pays; have you considered a faster internet connection?

sure, the 30 Mbit down and 5 Mbit up that Storj proclaims might be a bit off the mark.
in time any storagenode will use more upload than download; it depends on the size of the node… the larger the node, the more egress.

if you want to stop adding egress, I think the only viable current option is to limit your storagenode to its current size.
but it's very possible that the node traffic would pay for the extra internet bandwidth and then some… ofc it depends a lot on where you are located in the world; not everyone can get high-speed fiber.

so there might be some hardware limitations.
still, having too much egress is the best case…

but yeah, I suppose given enough egress it could be detrimental to your egress success rates, and traffic shaping would be similar…

really the only options would be to limit the size of the node or upgrade your internet bandwidth.

IMHO YMMV LOL

Managing bandwidth with any conventional QoS solution will be counterproductive.

Read about SQM (such as fq_codel) and enable it on your upstream. If your gateway does not support it, you can replace it with one that does (Ubiquiti, Untangle, OpenWRT, etc.).

With proper queue management, fully saturating your upstream will have no effect on your other internet activities.

It doesn’t look like it would solve the problem I mentioned:

The post is exactly about making the shaping not fair and preferring specific connections over others.

out of interest, how many Mbps is your node using on average over 24 hours?

The whole box (many nodes) usually produces around 10 Mbps of egress, but recently I see periods of sustained 30 Mbps egress and even higher short-term peaks.

so you will get more money.

wow, that's huge egress… in my eyes 🙂

do you run multiple nodes from 1 IP?

Yep. 3 drives used by Storj right now. Far from @Vadim’s scale, obviously, but I already like it.

Good point.

I think there are two separate aspects of the problem:

  • node egress interfering with the other home traffic
  • node egress transfers interfering with each other (30 concurrent ones will all be too slow, while if the node dropped 25 of them, the remaining 5 would complete quickly and successfully)

The first issue can be fixed with fair queuing: home traffic is bursty in nature and will have very little impact on Storj egress while benefiting from "fair" latency.

The second one is more complex. The approach you are describing will deprioritize low-priority traffic, which is OK, but it does very little if there are a lot of active connections ("transfers that have already started"), and this is not (nor should it be) under an individual node's control. Furthermore, repair and audit traffic is much smaller in volume; I'd think even completely deprioritizing it would have little impact on anything.

Deprioritizing some of the active connections on the node could be detrimental to the network as a whole, because the node has no idea which of those active requests are more important than others.

Maybe satellites could keep information on each node's performance (self-reported, measured, or both) and communicate that metadata to clients.

Then clients and satellites could assign some sort of QoS priority to requests, such as "must send" and "best effort". That way nodes can stay within their "best performance" envelope for "must send" requests and still process data under high network load if needed.
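
Purely as a thought experiment (none of this exists in the current uplink/node protocol), a tiny Go sketch of what such a two-category hint could look like on the node side; the type names and the 0.8 load threshold are made up:

```go
// Speculative sketch only: the protocol has no per-request QoS hint today.
package main

import "fmt"

type qosClass int

const (
	mustSend   qosClass = iota // part of the minimum set needed to reconstruct the segment
	bestEffort                 // redundant request that may be shed under load
)

type pieceRequest struct {
	piece string
	class qosClass
}

// accept decides whether a node should serve a request given its current
// egress load (0.0 = idle, 1.0 = at the configured bandwidth cap).
func accept(r pieceRequest, load float64) bool {
	if r.class == mustSend {
		return true // always stay within the performance envelope for these
	}
	return load < 0.8 // hypothetical threshold: shed redundant traffic near saturation
}

func main() {
	for _, r := range []pieceRequest{
		{piece: "piece-a", class: mustSend},
		{piece: "piece-b", class: bestEffort},
	} {
		fmt.Printf("%s accepted under load: %v\n", r.piece, accept(r, 0.9))
	}
}
```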

I’m not sure how feasible this is, as it requires node load tracking and communication, and from what I understand, today this is not done; instead, requests are blasted to many nodes, and some of them are canceled/discarded in the end. This makes overloaded nodes even slower and less useful.

Maybe simply dividing redundant requests into two QoS categories will be enough to solve this on the network level, perhaps with ratios automatically tuned in some way based on network load…

very nice!

I’m using RAID 5 for my drives to provide parity and only have 2 nodes currently, but will be getting another 3 x 2 TB 2.5" drives for another node.

I just checked mine - can definitely see a bit of a jump in “ingress”.

I get nowhere near 30 Mbit/s.

If you are installing without Docker, you can use wondershaper (on Linux).

ste

Won’t help with: