You completely misunderstood the point of this configuration.
It is used to keep the number of concurrent requests within a range your device can handle, so it can finish existing requests before new ones come in. Otherwise it will easily get overwhelmed.
The recommended value is maybe 7 for a Raspberry Pi and 20 for a decent PC.
You have to find out for yourself how many uploads actually finish and how many get cancelled because your device couldn't keep up.
Getting the error “piecestore upload rejected” is therefore totally fine, and I wouldn't recommend setting concurrent requests higher than 20 without knowing your hardware and your successful/failed upload ratio.
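For reference, in the docker setup this is the storage2.max-concurrent-requests option. A rough sketch of setting it, assuming the usual layout with the container named storagenode and config.yaml in the directory you mount into the container:

# in config.yaml:
#   storage2.max-concurrent-requests: 7
# then restart the node so it picks up the change:
docker restart storagenode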
Wow, I’ve never seen anyone actually reach thresholds that high. That’s not a good thing; your node is clearly not able to handle those requests as fast as they come in. You’re most likely also failing almost all uploads and a good chunk of downloads. This is exactly why this concurrency limit exists: to make sure your hardware isn’t overloaded. I created a post a while ago outlining how to find your perfect value. In your case there may even be a benefit to lowering it below the default.
It reads the whole log file. That’s why your success rate looks so low.
Try removing the log file and waiting another 24 hours. Also try lowering concurrent requests to 7.
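A rough sketch of resetting the log, assuming the standard docker setup where the container is named storagenode and the script reads docker's own log:

# stopping and removing the container clears the docker log it reads
docker stop storagenode
docker rm storagenode
# ...then start the node again with your usual docker run command
# if you instead redirect logs to a file via log.output in config.yaml,
# emptying that file does the same job:
# truncate -s 0 /path/to/node.log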
Both the acceptance rate and the success rate should be above 70%.
This is from one of my nodes with concurrent requests at 10 on a decent PC:
========== AUDIT =============
Successful: 8754
Recoverable failed: 17
Unrecoverable failed: 1
Success Rate Min: 99.795%
Success Rate Max: 99.989%
========== DOWNLOAD ==========
Successful: 78748
Failed: 2696
Success Rate: 96.690%
========== UPLOAD ============
Successful: 188014
Rejected: 987
Failed: 8531
Acceptance Rate: 99.478%
Success Rate: 95.660%
========== REPAIR DOWNLOAD ===
Successful: 0
Failed: 0
Success Rate: 0.000%
========== REPAIR UPLOAD =====
Successful: 0
Failed: 0
Success Rate: 0.000%
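In case it helps to read those numbers: the percentages line up with the counts roughly like this (my reading of the output, not the script's exact code):

Upload Success Rate    = Successful / (Successful + Failed)
                       = 188014 / (188014 + 8531) ≈ 95.660%
Upload Acceptance Rate = Successful / (Successful + Rejected)
                       = 188014 / (188014 + 987)  ≈ 99.478%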
I don’t agree with this part. That is simply not always achievable, depending on your hardware and connection. You’re aiming for the highest success rate possible on your node. You can raise concurrency a bit, but only if it doesn’t drop the success rate or acceptance rate.
@brizio71 can you tell us a little bit about the hardware you are using? What HDD are you using? How is it connected? What is your connection speed? Where are you roughly located?
I have an Ubuntu VM running on QNAP with a quad-core Celeron J1900, all 4 cores and 4GB of memory dedicated to it, and a RAID 5 array connected over NFS. The connection is fiber, 1 Gbps down and 200 Mbps up. I’m in Milan, Italy.
OK, a likely issue is your use of NFS. Are you running the VM on the same system? Does QNAP not offer native docker support? If not, I suggest using iSCSI instead.
I’m fairly certain your hardware is capable of better performance than what you are seeing right now. And your connection and location are pretty ideal for the current tests. But you really want to get rid of NFS here.
Have a look at Container Station; I think that might be your solution for native docker on QNAP.
Thanks for reporting back on that! I know people have been skeptical about this in the past, but these numbers show a very clear difference.
As for increasing concurrency, right now it looks like you would not benefit from it, as no uploads got rejected to begin with. Wait until your acceptance rate drops. If it does, you can increase the setting slowly while keeping a close eye on the success rate. The danger of increasing it now is that when load on the network increases, you might find that your node can’t actually handle the higher setting and starts failing many more uploads. So increase slowly, only when you actually hit the limit, and verify that your node can still handle it easily.
There are not enough uploads at the moment. Keep concurrent requests at 10 until you see a drop in acceptance rate while your success rate still stays close to 100%.
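If you want a quick way to tell when you actually hit the limit, counting the rejection messages mentioned earlier in the thread works; a rough sketch, assuming the container is named storagenode and the logs still go to docker:

# count uploads rejected because the concurrency cap was hit
docker logs storagenode 2>&1 | grep -c "upload rejected"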