Storj bandwidth throughput

Hey there,

I tried asking this previously but it wasn’t properly formulated.
I noticed the bandwidth throughput has been capped at around 12 MB/s (~100 Mbit/s). I would like to know if I can change a setting to raise this limit; since I have a gigabit connection, I would happily use a higher percentage of it. Furthermore, I read the Terms of Service (ToS) but couldn’t find anything related to VPN/proxy usage. Would it be acceptable to split the nodes across VPNs/proxies (100 Mbit per IP) so I can get up to 500 Mbit (5 nodes)?

Best regards

For the last 3 days I’m also hovering around the 100 Mbit mark. In the days before that I saw load up to 160 Mbit. I guess this is the current load that the network needs. I’m not worrying about it, but I do think my latency is a bit higher because I’m further away from the SLC test data that is currently being uploaded.
I can’t help you with VPN or proxy setups. Personally I think it adds unnecessary complexity and extra parts that could go wrong. It also sounds like it could undermine the resiliency of the storage network if something goes wrong. But maybe it’s just me; I’m curious about other responses.

For reference, here are the last 24 hours of incoming traffic. Don’t mind the drops in traffic where I was expanding my storage.



As far as I remember, using a VPN to get around the IP limit is against the ToS, because Storj wants the pieces of a file in different physical locations, so that if some of them shut down, the data is still there.

However, if you trick the system with, for example, 30 VPNs, in theory all the pieces could end up with you, and then the data is 100% dependent on you and your ISP.


I am getting a bit more traffic (single node):

Storj recently implemented a new node selection algorithm that reduces the traffic to a node if the node starts losing races (you can see it in action as the dip in my graph: the node started its garbage collection filewalker and temporarily slowed down).

So, it may be that you could get a bit more traffic if your node were faster (or if you had a second node on the same IP, but on a different drive).

And yes, it is against the ToS to use a VPN to bypass the /24 rule. AFAIK, in theory, separate /24 subnets have to be separate physical locations. If I managed to get multiple IPs (from different /24 subnets) from my ISP, or just used multiple internet connections, it would likely still be against the ToS to run multiple nodes on them: if the power fails and my generator doesn’t start, all of those nodes go down at the same time, possibly limiting availability of customer data.
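For anyone unsure what the /24 rule actually groups together: all public IPv4 addresses that share the same first three octets count as one location. The satellite-side logic is not shown here, so this is only an illustrative sketch using Python’s standard `ipaddress` module:

```python
import ipaddress

def subnet_24(ip: str) -> str:
    """Return the /24 network an IPv4 address belongs to."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

# Two nodes whose public IPs share a /24 count as one location...
print(subnet_24("203.0.113.10"))   # 203.0.113.0/24
print(subnet_24("203.0.113.200"))  # 203.0.113.0/24
# ...while an IP from a different /24 counts separately.
print(subnet_24("198.51.100.5"))   # 198.51.100.0/24
```

So five nodes behind VPN exit IPs in five different /24 subnets would look like five locations to the network, even though physically they are one, which is exactly the availability risk described above.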


There is no cap on bandwidth. The usage depends on the customers, not on hardware or software settings (except for some edge cases, like using network storage, complex setups, and so on).

I’m sorry, but you are not allowed to bypass the /24 Storj network safety check. It exists for a reason: to prevent more than one piece of the same segment from being stored in the same physical location.
If you bypass this rule, you change the default behavior of the Storj network (which is forbidden by the ToS, by the way), and it becomes possible to store more than one piece of the same segment on the same hardware and/or in the same physical location. If your setup were large enough, there would be a higher risk of an unrepairable segment if your hardware, software, or internet failed.
A lost segment = a lost customer = lost payment = lost payouts for everyone.
So by bypassing the /24 rule you are shooting yourself in the foot. Please do not do this. Reduced reliability can damage the hard-earned trust of the customers.

And @Pentium100 is correct too: the new node selection algorithm now regulates more dynamically which nodes are selected more often, see:

So I would suggest checking your success rate instead. If you have many lost races (“context canceled” errors), then perhaps your disks are the bottleneck, not your network.
It is also worth checking the router and/or modem; perhaps they cannot handle many parallel connections. You may also have a “smart” protection feature or a similar throttling feature enabled, or improperly configured QoS.
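A quick way to estimate that success rate is to count successful versus cancelled uploads in the node log. This is only a sketch: the log lines below are made up to stand in for a real log, and on a Docker setup you would read the real log with `docker logs storagenode 2>&1` (or from your configured log file) instead of the sample file:

```shell
# Sample log standing in for the real node log.
# Real source: `docker logs storagenode 2>&1` or your log file;
# the exact line format may differ on your node.
cat > /tmp/sample-node.log <<'EOF'
INFO piecestore uploaded {"Piece ID": "A"}
INFO piecestore upload canceled {"error": "context canceled"}
INFO piecestore uploaded {"Piece ID": "B"}
EOF

# Successful uploads vs. races lost to "context canceled".
ok=$(grep -c 'uploaded' /tmp/sample-node.log)
lost=$(grep -c 'context canceled' /tmp/sample-node.log)
echo "uploads: $ok ok, $lost canceled"
```

A high ratio of cancelled to successful uploads suggests the node is losing races, pointing at slow disks or a network bottleneck.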

As far as I understand, with the new node selection algorithm, nodes that lose a few races one after another immediately get lower traffic so the overall success rate remains high.

So, at least to me, there currently seems to be no way to know if my node is too slow or if it’s getting the maximum traffic that is available for one node. Large load spikes and slowdowns are visible, but steady-state differences are not.

Right now I have this:

Does it mean that there is just not enough data being uploaded to get more traffic or does the 99.98% success rate mean that my node is getting reduced traffic?


You are correct. But they should see “context canceled” errors on uploads anyway, so it may help to identify that behavior; they should also have traffic monitoring like you have.
Then you can compare: when the spikes go down, do they correlate with the rate of “context canceled” errors around the same time?
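One way to do that comparison is to bucket the cancelled uploads per hour and hold the counts next to the traffic graph. Again just a sketch with a made-up sample log; the timestamp prefix (`cut -c1-13` keeps `YYYY-MM-DDTHH`) must be adjusted to your node’s actual log format:

```shell
# Made-up sample log; replace with your real node log.
cat > /tmp/hourly-sample.log <<'EOF'
2024-05-01T10:02:11Z INFO upload canceled {"error": "context canceled"}
2024-05-01T10:45:03Z INFO upload canceled {"error": "context canceled"}
2024-05-01T11:12:59Z INFO upload canceled {"error": "context canceled"}
EOF

# Count "context canceled" errors per hour.
hourly=$(grep 'context canceled' /tmp/hourly-sample.log | cut -c1-13 | sort | uniq -c)
echo "$hourly"
```

Hours where the error count jumps while the traffic graph dips are candidates for a correlation.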

By the way, I can see a steady increase of the used bandwidth in your graph; the delta is just not that big.

Large dips - yeah, those are more easily visible, and the reason for them can be found in another graph (a load spike due to the garbage collector, etc.). But looking at a steady-state graph, it is more difficult to determine whether the traffic is fluctuating a bit just because it does (the customers and the test data generator probably do not upload data at a perfectly constant speed) or because the node selection is reducing the traffic to my node. Putting it differently: if I moved the node to a super fast server with NVMe drives for storage and a 10G uplink, would the traffic be higher (and by how much), or the same?

I do not know; it seems there is only one way to check…

Unfortunately I missed the fun; my nodes filled up so quickly that I didn’t even have time to set up Netdata…

Are there any plans to increase throughput? I’m not able to expand my contribution to the network due to the current network limitation.

The plan is to get more customers. It all depends on customer uploads.
