Drpc stream terminated by sending error during download

This error happens with files that are downloaded repeatedly. Prior to 24.5 these downloads succeeded every time, and at first they still worked fine, but recently I get "drpc stream terminated by sending error" during downloads.

What is the problem, or is this standard testing? I never saw it before this most recent release. Does it affect the node at all?

Yes, this is normal for the time-being.

We are testing and implementing a more robust and less resource-hungry communication layer. The errors are to be expected when either side unexpectedly terminates the connection (e.g. the uplink reaches its required threshold, the network drops, or a firewall cuts the connection).

Unfortunately, we haven’t yet managed to make the errors more descriptive.


hi there,
nearly half of my downloads have failed.

This is my error message:
"GET", "error": "piecestore: piecestore protocol: drpc: stream terminated by sending error", "errorVerbose": "piecestore: piecestore protocol: drpc: stream terminated by sending error\n\tstorj.io/drpc/drpcstream.(*Stream).SendError:261\n\tstorj.io/drpc/drpcmanager.(*Manager).manageStream:224"}

========== DOWNLOAD ==========
Successful: 64598
Failed: 30145
Success Rate: 68.182%

---- a few days ago everything was all right.
If anybody can help, thanks…
best m

Maybe this (config.yaml):

# how many concurrent requests are allowed, before uploads are rejected.
storage2.max-concurrent-requests: 500

This setting is for uploads (and is no longer needed), while your problem is with downloads.
Maybe your node is too slow to handle all the download requests, or there is too much other load, or your internet connection is too slow?

500 was a ridiculous number to set to begin with, but as posted elsewhere, you should remove this setting now. With the new drpc implementation it's unlimited by default, and that should be just fine for almost all installations.


Take a look at my post here; I think something has gone wrong…

Also, I tried setting storage2.max-concurrent-requests: 0 (unlimited, as in the git commit), but that does not work either, so the only workaround at the moment is to set a very high number of concurrent connections.

I saw that, but I thought it was prior to v0.25.1. I see now that you confirmed it persists. I still maintain that any setting over 50 is pretty crazy. If you still hit the limit at 50, it's probably a bad idea to raise it further; lowering it might actually be a better way to get fewer failed transfers.

I completely agree with you.
My suggestion to set a very high number was based only on the recommendation in the changelog:

So I reported that this change is not working and we still have a limit on v0.25.1.
