DEFAULT setting for "max-concurrent-requests"

Hi,

  1. If this setting is not set by hand, what is the default value? The documentation mentions the number 7. Is that the default?

  2. What are the realistic ranges for this number? 15, 50, 500? I understand that a Raspberry Pi and a fast RAID server will have very different values (connection speed and other bottlenecks matter too), but are there known approximate ranges?

Let’s say:

Pi3 = 5-15
Pi4 = 10-20
workstation = 10-50
i7 + RAID = 50-100 or so?

Thanks

  1. Yes, 7 is the default.

  2. You are right, the number depends on your hardware and internet speed. Also, your node’s proximity to the uplink can help lessen the failed upload/download messages in the log.

Instead of relying on others, I would recommend testing the numbers yourself. You can edit the number and let it run for a while.
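If it helps, here’s roughly what that edit looks like; the key name below matches what I have in my config.yaml, but double-check it against your own file and node version:

```yaml
# config.yaml (storagenode)
# how many simultaneous transfer requests to accept;
# requests over this limit are rejected, not queued
storage2.max-concurrent-requests: 10
```

After editing, restart the node (e.g. `docker restart storagenode`) and watch the logs for a while before changing it again.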

I have encountered at least 2 SNOs who had set their numbers in 4 digits, yeah, 1000+ :crazy_face:


My nodes worked just fine with a value of 10, but I’m running 2 nodes on the same device, so maybe 20 would have been fine for one node.

Running 7 on a Pi3B+ with a storage board, but my connection is not great.
Doing 25 on an HP DC8000 with 100/100.
Upgrading the last one to an HP Microserver :wink:

I feel like I should mention this is never a good idea. If your node runs into the limits at, say, 50 concurrent requests, it’s because it can’t handle that many at the same time. Raising the limit at that point just makes it worse. But as mentioned before, it is way too dependent on individual setups, connections and locations to base it on someone else’s results. You have to test for yourself what your limit is.

I’ve been monitoring my nodes with Zabbix. All my nodes are on 300 Mbps or faster connections, but I’ve never seen more than 30 Mbps on any of my nodes (and that is very, very strange). These days I see only up to ~2 Mbps of traffic.

So the follow-up question is:

Let’s say I find the point where the success rate starts to fall at the current ~2 Mbps of traffic. What happens if the speed rises? I believe the node will no longer handle the same number of requests, because of the higher speed. Or am I wrong?


This is unrelated to your bandwidth. It’s related to the ability of your hardware to process that many requests.


@Alexey But if the speed rises, then requests come in faster, which means more requests in the same amount of time, which needs more hardware resources.

How do I view/set the connection count?


BrightSilence, please tell me what value for requests you are using, and on what hardware? :slight_smile:

@voltage Please look at the post above mine to see why I posted that link; you’re not the only person in this topic with questions.

In addition to that:

In my opinion it would be doing you a disservice to give you information that isn’t helpful to you. You really should find the correct number for yourself. Even if you have the exact same hardware, your location could still make your node behave differently from others. The post I linked gives you the best method of finding the correct setting for you.

I understand that I will get the best values by myself. But is this some kind of secret that everybody is hiding from each other?

I’m asking about the range. It looks like a Pi can’t handle more than 8, while nerdatwork is using more than 1000, so what are those ranges? From 5 to 5000? Or from 5 to 50?

I’m not asking you to give me my number. Show me yours, so it would be easier to understand whether I should play with 12 to 18 or 850 to 1200.

I feel like I already gave an upper limit here. Theoretically there isn’t really an upper limit, but there is an upper limit to what’s reasonable. Though that’s obviously not a hard limit. With the current traffic however, I don’t think anyone should be using a higher number than 20 until there is more network load to actually test higher concurrency numbers. As for the low end, I’ve seen nodes that performed best with setting it to 3. But in theory 1 could even be useful in fringe scenarios.

It’s not a big secret, in fact I’ve mentioned my hardware and setting on other occasions and it led to people blindly copying my settings. I don’t mind telling you my current setting is 40, but I do mind people thinking that’s a reasonable setting to start with (it is most definitely not). I’m pretty sure that the only reason I can run with that setting is because I have a large read/write SSD cache and am very close to the uplink that’s doing most of the testing. Other specs in my system likely don’t bottleneck this. However, we haven’t seen traffic that actually tests this limit in months. So if you set it to 40 right now, you could think for months that you’re doing great, then when traffic starts to pick up your node starts suddenly failing many uploads and you’re back on the forum debugging your issue. Like with the people @nerdatwork mentioned who have theirs set to a ridiculous 1000+. (Btw, I’m also almost certain that @nerdatwork isn’t running at such a number as you suggested)


Btw, you wouldn’t need an upper or lower limit if you follow the method described in the post I linked, as it makes clear that you should only change the number in small increments and describes exactly where to stop. If you’re looking for those limits, you’re looking for a shortcut to that process, and that won’t work.
So let me be more clear: try 10 if you want to raise it, then no more than 25% increments from there on out. Only raise the number if you hit the limit and have checked that your success rate didn’t drop at the specific moments you hit the limit.
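For example, starting at 10 with increments of at most 25% gives a sequence like 10 → 12 → 15 → 18 → 22. To check whether you hit the limit and what it did to your success rate, a rough log-counting sketch like the one below could help. The log phrases (“upload rejected”, “uploaded”, “upload failed”) are assumptions based on what my node prints; adjust them to whatever your node version actually logs.

```python
#!/usr/bin/env python3
"""Rough upload success-rate check for a storagenode log.

Usage (assuming a Docker container named "storagenode"):
    docker logs storagenode 2>&1 | python3 successrate.py
"""
import sys

rejected = succeeded = failed = 0

for line in sys.stdin:
    if "upload rejected" in line:    # node was at max-concurrent-requests
        rejected += 1
    elif "upload failed" in line:    # transfer started but didn't finish
        failed += 1
    elif "uploaded" in line:         # transfer completed successfully
        succeeded += 1

total = succeeded + failed
rate = 100.0 * succeeded / total if total else 0.0
print(f"rejected: {rejected}  succeeded: {succeeded}  failed: {failed}")
print(f"upload success rate: {rate:.1f}%")
```

The point of the method stands: only raise the setting when you actually see rejections while your success rate at those moments stays high.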


@BrightSilence thank you very much for such a detailed explanation. Now that we’ve dropped the number 1000, and you are using 40 with an SSD cache, the situation is much clearer.

I also understand that I’m probably right in my earlier post about the 2 Mbps/30 Mbps speeds etc., and that @Alexey made a mistake in saying that increasing speed will not affect this.

I don’t want to speak for @Alexey, but I think what he was saying is that the internet speed is not the bottleneck for a 300 Mbps connection. Transfer speeds are a bit weird in that context, since most speeds you see are measured over a specific time span. Because these transfers are relatively small, most of them take less than a second. So while those averages don’t top 30 Mbps, this speed can still be used by the uplink if their connection is fast enough to support it as well.

As a result it can still impact failure rates and rejection rates, but above a certain value it’s highly unlikely to ever be the bottleneck. IO speed is much more likely to be the limiting factor, or the CPU on lower-powered devices like the Raspberry Pis.

@BrightSilence I’m not talking about the bottleneck. I’m talking about the correlation between “max concurrent requests”, download/upload speed, and hardware.

Let’s imagine a never-ending queue of requests and our value is, let’s say, 10 max concurrent requests. The value 10 shows how many requests my computer can accept at the same time. So if I have 2 Mbps of speed, those 10 requests will come in slowly; let’s say 2 Mbps = 10 incoming requests per second. And let’s say the hardware is able to process 10 requests per 0.1 second. In that case, over 1 second the computer will receive 10 requests and will process all of them (10).

But if the speed rises to 20 Mbps (10 times faster), then over the same 1 second I will get 100 requests. As the processing time for 10 requests is 0.1 s, my computer will receive and process 100 requests per second and will be able to handle all of them (100).

Now if the speed rises to 200 Mbps (100 times faster), then over the same 1 second I will get 1000 requests, but the hardware will still only be able to process 10 requests per 0.1 s. This means the same 100 requests will succeed, but 900 will not, and the success rate will be terrible.

Thinking like this shows that with increasing speed we will have to perform those same 10 requests faster and faster until we reach the CPU/RAM/IO bottleneck.

So looking at it from this perspective, my question was: what will happen if I adjust my max requests now, when I see only ~2 Mbps of speed? Once the speed rises, it looks like I will have to lower this value to avoid losing success rate.


Based on this thread and many others on the same subject, all of your points could be valid considerations, as are @BrightSilence’s many good pointers.

However, the bottom line is: at the current time, it will still be hard (read: impossible) to stress test most medium/fast nodes, as the network load is just not high enough.

Just a few cents.

TL;DR: You will have to wait :slight_smile:


@voltage there are some really flawed assumptions in your description. Is your node even rejecting transfers at this moment? If not, then you don’t get any more requests by raising that number, end of story.

This isn’t mining; your capacity won’t be filled just because you raise it. The traffic on the network is traffic from real people storing and retrieving real data. If there is little demand, there will be little traffic.

The concurrent requests setting is about a trade-off between rejecting requests and failing transfers because your node is overloaded. So in that way, the speeds you are seeing have nothing to do with this setting. This is also why this setting should ONLY be raised when your node is rejecting transfers and still failing <10% of them. This can only be tested when the network is under heavy load, which right now it is not.
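To make that trade-off concrete, here is a minimal conceptual sketch (Python for illustration only; the actual storagenode is written in Go, and this is not its code) of what a concurrency limit does. A request over the limit is rejected immediately rather than queued, so the setting only matters at the moments the limit is actually hit:

```python
import threading

MAX_CONCURRENT = 10  # stand-in for the max-concurrent-requests value

_active = 0
_lock = threading.Lock()

def try_accept_transfer() -> bool:
    """Accept a transfer only if a concurrency slot is free.

    Over the limit the request is rejected outright; the client just
    uses other nodes instead. Nothing is ever queued.
    """
    global _active
    with _lock:
        if _active >= MAX_CONCURRENT:
            return False  # rejected: only happens under real load
        _active += 1
        return True

def finish_transfer() -> None:
    """Free the slot when the transfer completes or fails."""
    global _active
    with _lock:
        _active -= 1
```

If the active count never reaches the limit, as with today’s ~2 Mbps of traffic, the value you set has no effect at all.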
The concurrent requests setting is about a trade off between rejecting requests and failing transfers because your node is overloaded. So in that way, the speeds you are seeing have nothing to do with this setting. This is also why this setting should ONLY be raised when your node is rejecting transfers and still failing <10% of them. This can only be tested when the network is under heavy load, which right now it is not.