"upload rejected, too many requests" log level change

ERROR piecestore upload rejected, too many requests {"process": "storagenode", "live requests": 7, "requestLimit": 5}

tl;dr
Devs, please set the log level of this message to WARN, not ERROR.

Well, I have a slow SMR disk, to the point that disk usage is almost 100%; it started when the disk became more than half full. So I’ve set storage2.max-concurrent-requests: 5 in the config.
It runs more stably now. Great. Sometimes I see the count go up to 50-100, and my system I/O is safer.
But now my logs are filled with ERROR lines like the one at the top of this post. Setting log.level to ERROR doesn’t help at all, since this message is logged at the error level. But IMO it isn’t an error; it’s just a notification of too many requests and should be at the WARN level. That would help the people who tune this setting and don’t want this kind of message when they set their log level to the minimum.
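For reference, this is how the setting would look when enabled in the node’s config file (the value 5 is the one used in this post; the exact file location varies per setup):

```yaml
# Reject new uploads once this many requests are in flight (0 = unlimited).
# The value 5 is taken from this post; tune it for your own hardware.
storage2.max-concurrent-requests: 5
```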

I don’t know if this is the correct board; if not, please move it where it belongs :slight_smile:

It’s an error because the customer gets the same error, and they suffer because of your setting; they may even end up unable to download their file. I do not agree that it should be a WARN, since it affects customers.

What do you mean by

because the customer got the same

Is it not the case that only a few of the many nodes are needed to serve every request, and my node just says ‘hey, I’m not ready; brother nodes, you can have it’? And the customer always gets their file uploaded/downloaded anyway?

Well, from the clear description of that setting:

# how many concurrent requests are allowed, before uploads are rejected. 0 represents unlimited.
# storage2.max-concurrent-requests: 0

My understanding is that:

  1. it affects only uploads (customer → my node)
  2. my node isn’t only one to serve
  3. if my node doesn’t want a new piece, there are thousands of other nodes that do, and the satellites rule and divide over that, so it’s a win-win: less load on my disk, and the piece finds a new home.

The log triggers only with a manual setting. Also… the default of 0 means no overload protection; nodes on weak hardware can be DoS-ed to death, with no chance of serving dozens of concurrent requests.
And what happens with the default setting? There is no error in the node’s log, but since the node can’t serve that many requests anyway, what happens? Does the customer still get some error while the node is DoS-ed?
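To make the behavior being discussed concrete: a concurrency cap that rejects (rather than queues) excess requests is commonly built in Go with a buffered channel used as a semaphore. This is only an illustrative sketch of the technique, not the actual storagenode implementation; the names `limiter`, `tryAcquire`, and `release` are mine.

```go
package main

import "fmt"

// limiter loosely mimics how a node might cap concurrent uploads:
// a buffered channel acts as a semaphore, and when it is full the
// request is rejected immediately instead of waiting.
type limiter chan struct{}

func newLimiter(max int) limiter { return make(limiter, max) }

// tryAcquire returns false when the limit is already reached
// (a non-blocking send via select/default).
func (l limiter) tryAcquire() bool {
	select {
	case l <- struct{}{}:
		return true
	default:
		return false
	}
}

// release frees one slot when a request finishes.
func (l limiter) release() { <-l }

func main() {
	lim := newLimiter(2) // cap of 2 for the demo; the post uses 5
	fmt.Println(lim.tryAcquire()) // true
	fmt.Println(lim.tryAcquire()) // true
	fmt.Println(lim.tryAcquire()) // false: "upload rejected, too many requests"
	lim.release()
	fmt.Println(lim.tryAcquire()) // true again once a slot frees up
}
```

The point relevant to the thread: the rejection is an immediate, deliberate response to the configured cap, not an unexpected failure inside the node.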

Still, IMO that log level should be WARN: a warning to the node operator that the node is getting more requests than the operator allowed.

I can understand that the customer is more important, as they generate the income, and I don’t have much hope for the log level to change. But without nodes and their operators there are no customers :slight_smile:

Correct me if I’m wrong please.

It affects any interaction with your node; it will return the message

error: the node is overloaded

to the customer. And if the number of healthy nodes drops below the minimum (29), the customer will get an error when downloading a file.
The same will happen on upload.
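To show why one node rejecting is harmless while many rejecting is not: with Reed-Solomon erasure coding, a segment can be rebuilt from any `required` of its stored pieces. The 29 comes from the reply above; the total of 80 pieces per segment is my assumption about the network defaults and may differ, so treat the numbers as illustrative.

```go
package main

import "fmt"

func main() {
	// Assumed Reed-Solomon parameters: any `required` pieces out of
	// `total` stored pieces reconstruct the segment. 29 is from this
	// thread; 80 is an assumption about the network default.
	const required = 29
	const total = 80

	// A download fails only when fewer than `required` piece-holding
	// nodes respond, i.e. when more than total-required of them are
	// offline or overloaded for that one segment.
	fmt.Println("nodes that may reject per segment:", total-required)
}
```

So a single overloaded node just shrinks the safety margin slightly; the customer-visible error only appears when that margin is exhausted across many nodes at once.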

It means that your node doesn’t reject new requests to serve uploads or downloads after a certain number of connections.
This is not overload protection; this is limiting your node’s abilities.

Requests from the customers are legitimate; these are not DoS requests. However, this option is designed for weak hardware, to limit the number of served requests even if that affects customers. The problem will start when more than 29 nodes return the same answer, that they are overloaded…
So, in general, you shouldn’t limit this unless your hardware is not capable of serving many parallel requests.

For a better understanding, please describe your hardware and software as thoroughly as possible.

Maybe we can help in other ways.

Sure, why not; thanks.
The node runs on the latest Armbian with the 6.1.63-current-odroidxu4 kernel, on an Odroid HC2 with a 6TB SMR drive. BTW, I can recommend this Odroid with its case if you can still find it; it’s ideal for a single-disk Storj node. I regret not buying a few more.
There is good news and bad news, too.
The good news is that this is a temporary disk.
The bad news is that the new disk for this node is currently used in a RAID serving another node, which I’m waiting to shrink from 12TB to under 8TB. So far the used space has decreased by about 1TB in 3 months :confused:

edit: the only problem here is the too-slow SMR disk.

This should be a warning (at most) for node operators.

Errors in logs should refer to problems with node operation that are actionable by the operator. Rejected uploads are the result of a deliberate choice, driven by the limitations of a setup.

If you do not want nodes hosted on SMR drives, please state so.
