Online score won't go up after limiting concurrent requests

As a continuation from that post, I finally added this to the config.yaml file: storage2.max-concurrent-requests: 5.
For the last 6 months I thought everything was fine, until I realized that the online score is stuck at around 95%, which means I will never get a payout…
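
For reference, this is roughly what the change looks like on my setup (the config path below is just an example, yours may differ, and the node has to be restarted for the setting to take effect):

# in config.yaml (e.g. /mnt/storj/config.yaml, depends on your setup):
# storage2.max-concurrent-requests: 5
sudo docker restart storagenode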

From the command line I checked the last 100,000 log lines and counted the ones with errors simply by:

sudo docker logs storagenode --tail 100000 &> t1
cat t1 | grep ERROR | wc -l

which resulted in exactly 13,216 ERROR lines, i.e. 13.216% of the 100,000 lines, which means my online score is just a bit lower than 87%!!!
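
The same ratio can also be computed in one go (a rough sketch, reusing the same t1 dump):

errors=$(grep -c ERROR t1)
total=$(wc -l < t1)
echo "scale=3; 100 * $errors / $total" | bc
# prints 13.216 for 13,216 ERROR lines out of 100,000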

EDIT:
Considering these commands and their outputs:

cat t1 | grep ERROR | grep ": 6" | cut -d "Z" -f2 | uniq
	ERROR	piecestore	upload rejected, too many requests	{"live requests": 6, "requestLimit": 5}

cat t1 | grep ERROR | grep ": 6" | cut -d "Z" -f2 | wc -l
11341

this shows that 11,341 errors (11.34% of the 100,000 log lines, so the large majority of the errors) are uploads rejected because there were 6 live requests against a limit of 5, i.e. just one over the limit. If I had been able to accept those, I would only be missing less than 2% of the total requests, which may give me my first payment…
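
A quick back-of-envelope check of what the error rate would look like without those rejections (again a sketch based on the same t1 dump):

rejected=$(grep ERROR t1 | grep -c "too many requests")
errors=$(grep -c ERROR t1)
total=$(wc -l < t1)
echo "scale=3; 100 * ($errors - $rejected) / $total" | bc
# ~1.9% of the lines would still be errors if all the rejected uploads had been accepted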
Should I try this?

Do I have any chance to increase it and get the payout? Is there anything else I can do?

Online score isn’t related to receiving payouts.
If you don’t get paid, that’s more likely because you didn’t opt in to zkSync and didn’t reach the minimum payment threshold.

The online score is a 30-day moving average that shouldn’t impact your node as long as it stays above 60%.

Log entries reported as errors are not necessarily all concerning, and are certainly not all related to online scores.

As long as your node sends and receives data, has audit scores above 95% and accumulates a bit of revenue month after month, I don’t think there’s anything wrong with it. You may wanna check how fast it fills up with @BrightSilence’s estimator as a reference, but your mileage may vary:

This will most probably lower your income. Unless you absolutely need this setting (if you have SMR disks that cannot keep up, for instance), I wouldn’t recommend it. But in all honesty I did not read your linked post, so I may be overlooking something ^^’

3 Likes

Just increase the max concurrent setting; you just want it to be low enough that it keeps the HDD from choking in the rare cases when traffic comes in too fast.

Try it at 10 and see if it still does its job…
Also keep in mind that many uploads and downloads can take a few seconds, so it’s very easy to get overlaps…
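
Something like this would do it (just a sketch; it assumes the storage2.max-concurrent-requests line is already present and uncommented in config.yaml, and the path is only an example, so adjust both):

sudo sed -i 's/^storage2.max-concurrent-requests:.*/storage2.max-concurrent-requests: 10/' /mnt/storj/config.yaml
sudo docker restart storagenode
# a while later, check whether uploads are still being rejected:
sudo docker logs storagenode --tail 10000 2>&1 | grep -c "too many requests"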

5 seems very low.

I wouldn’t be surprised if you could run 20 or more without too much trouble…
Don’t go past 40 though… 40 is about the lowest IOPS an SMR HDD can do.
Sure, IOPS isn’t the same thing as max concurrent requests, so it might even be possible to set it higher…
but I seriously doubt the results would be good, or that you would actually benefit from the max concurrent setting at that point.

But you will only really know through testing, or if somebody has already done extensive tests on various SMR HDDs with different max concurrent settings.

And it would almost be unique to each case, as even the data currently stored on the HDD will affect its performance: as the heads move in towards the center, reads and writes slow down because less platter passes below the heads in the same amount of time…

The best way to use SMR HDDs is when the node on them is full…
because SMR HDDs are usually slow at writes, so when the node is full there are minimal writes and the drive behaves like a regular CMR HDD.

Online score shouldn’t be affected by performance, unless it’s consistently bad over a long time… because then the node basically ends up rejecting audits, which is what the online score is counted from… so I think it’s possible that your online score isn’t at 100% because of your max concurrent setting…

You should increase max concurrent until you get a 100% online score… or run into other issues :smiley:

I don’t quite see the relationship between this setting and the online score. :thinking:
Lowering the max-concurrent setting might cause the node to reject some ingress queries, but it shouldn’t make it reject any audit requests.
If the node cannot reply to audits, then the audit score could drop (or the suspension score), not the online score?

1 Like

I dunno the details, I just assumed max concurrent is a fixed max, so why would audits get past that… they are essentially just downloads that are being verified / checksummed.
And those same audits are what the online score is counted from…

But I may be mistaken…

I suppose it could be that max concurrent only affects uploads, but then downloads would still be able to overwhelm the storage, and the max concurrent variable would have less meaning…

Audit score is only affected if a piece fails multiple times: when an audit is attempted and fails, it will be reattempted… maybe multiple times, but at least once over something like 20 or 40 minutes…
and if it fails again, the audit fails and the audit score drops…

The online score is affected by single audit failures.

Ok, thanks a lot for the detailed answer, it cleared up a lot of stuff in my head…
But even if I enable zkSync, I will pay high ETH gas fees to send the tokens from zkSync to the exchange, right?

Ok thanks.
I actually did increase the concurrent requests by just one, and it seems again like the disk is having a hard time… There’s no way it can handle more than 7 concurrent tasks.
Also, I am now getting lots of EOF errors.

So I increased max concurrent AND ran into other issues, concurrently :frowning:

EOF (end of file) is network related, I believe, coming from the customer end.

How do you tell the disk is having a hard time, does the latency go up, or…?
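
One rough way to check would be watching the drive’s latency and utilization with iostat (from the sysstat package; the device to look at is whichever holds your node’s data):

sudo apt install sysstat     # if iostat isn't there yet
iostat -dx 5                 # high await and %util pinned near 100 on the node's drive means it can't keep up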

For now it just takes longer when I open the diagnoser page, and RAM usage is increasing (but this isn’t important because I have more than enough, and I also have a swap file).

This is how the problems started last time, except then it kept getting worse and the audit and online scores dropped. I don’t think I will face the same problems now, but I just think I can’t increase max-concurrent any further.

Okay, well then you have another option: you could add another storagenode.
Because data distribution is IP based, multiple nodes share the ingress, and thus the writes to that particular storagenode would be halved.
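
A very rough sketch of what a second container could look like (placeholder values everywhere: a second identity, different ports, its own disk and wallet/address, so follow the official docs rather than copying this):

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967/tcp -p 28968:28967/udp -p 14003:14002 \
    -e WALLET="0x..." -e EMAIL="you@example.com" \
    -e ADDRESS="your.external.address:28968" -e STORAGE="2TB" \
    --mount type=bind,source=/path/to/identity2,destination=/app/identity \
    --mount type=bind,source=/mnt/disk2/storj,destination=/app/config \
    --name storagenode2 storjlabs/storagenode:latest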

That’s the only other option I can think of.

Otherwise you simply have to abandon using that particular HDD and get one that’s CMR instead.
Some SMR drives are just terrible.
SMR is good for regular home storage, with low or mainly sequential writes…
You also might be able to get some kind of software to add an SSD write cache.

There is also the option of moving the databases to another drive, which should help reduce random IOPS on the SMR drive; I think there are a lot of people using that.
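
There is a config option for it (storage2.database-dir); a very loose sketch of the idea below, with example paths only. With docker the new folder also has to be mounted into the container, so check the official guide for the exact steps:

sudo docker stop storagenode
mkdir -p /mnt/ssd/storj-dbs                        # example SSD location
cp /mnt/storj/storage/*.db /mnt/ssd/storj-dbs/     # copy the databases while the node is stopped
# then point storage2.database-dir in config.yaml at the new location (and add a matching --mount on the container)
sudo docker start storagenode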

That’s right. But as long as your tokens accumulate in your zkSync “account”, you get a 10% bonus from Storj Labs, so hopefully by accumulating long enough, this extra might pay for the ETH fees down the line. No guarantees, but that’s what some of us are counting on… :wink:

Just something to keep in mind: whenever you change an option on your node and restart it, it reruns the file browsing routine which scans ALL the files stored by your node. Depending on the amount you’re storing and your disk performance, this may take hours (it takes 30+ hours on one of mine).
During this, your disk I/O is going to be all over the place, but that’s expected.
The important thing is that the disk can keep up after the scanning is done.
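
If you want a rough idea of when that initial scan is over, one simple way (just a sketch) is to watch the container’s I/O counters:

sudo docker stats storagenode     # the BLOCK I/O column keeps climbing quickly while the scan is still running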

I believe that wouldn’t change much until the second node is vetted (which could take months), as this new node would only receive the share of data dedicated to vetting nodes. But it could still be a good idea to “incubate” a second node: when it’s vetted in the future, it will ease the load on both nodes.

If the issues @YourHelper1 is facing are because of an SMR drive, then I think that is indeed the right long-term solution :+1: :slight_smile:
But… you need a CMR disk for that. If the SMR disk is the only one they have available for Storj, they might as well keep going with it, even if it means the node is going to struggle a bit more.
That’s my take anyway, and what I’m doing :slight_smile:

3 Likes

I’ve tested this particular case to try and get more ingress, and as far as I could tell and remember, data allocation is purely IP based; the number of nodes or vetting nodes doesn’t affect it at all, the grand total will be the same…

But aside from that, yeah, vetting nodes does take ages; in fact vetting seems to slow down depending on the number of other nodes.
So if you vet 1 node alongside 1 fully vetted node, the vetting will take about twice as long.
Basically, one could estimate it by saying the vetting time is multiplied by the number of nodes on the IP’s /24 subnet.

If it was me, I think I would try the database move right off the bat, because it’s the simplest, least expensive approach.

You can pay the fee directly with STORJ token. The first transaction is still quite expensive. The amount depends on the current exchange rate and the gas price. My plan is to unlock my zkSync account right at the moment both variables are good for me.

Ok, thanks for the advice, but first of all: how can I find those CMR disks? I mean, there is no info about that for the disks I can buy from here…
And most importantly, I can’t buy another disk right now :frowning:
I mean, I already paid for that disk and it will take me at least another year to earn that money back, so this is not a solution right now, but it is good advice for the future.

This isn’t about the node, it’s about the disk, so yeah, @Pac is right, adding a node won’t change anything, and it’s also something I can’t afford…

@littleskunk, not gonna lie, I didn’t get anything from what you said except the part about paying with STORJ. When you say quite expensive, what do you mean, and why only the first transaction? Actually, on Binance I saw that the transaction fee for STORJ is 15 STORJ right now, which is like hell if you don’t have 10+ nodes, and I only have one. Also, how do you unlock zkSync? I mean, I know how to set it up, but is there an option to not receive the payout? And as far as I know payouts happen once a month, right, so how can you time it so that both variables are good???

Hello, maybe @Pac and you, as more experienced users, could share some possibilities? Any other external disks we could use for a simple Pi setup?

This thread has quite a lot of info on this matter, you might want to check it out:

I believe more and more manufacturers now give this info: when buying a disk, it’s usually indicated these days. But back then, yeah… it was quite difficult to know whether a disk was SMR or CMR.

I’m no disk expert, but from my experience, if you stick with recommendations like having one node per disk, any CMR disk will do really. Even 2.5" disks (as long as they are CMR) can cope with Storj activity without problems (even during the extensive tests Storj was carrying out at some point in the past, 200GB/day for a short period: my 2.5" CMR was just fine, while all my SMR disks stalled and failed miserably :wink:).

Just be sure to have enough power when plugging disks into an RPi, as these minicomputers are known to provide weak power to USB devices (see the quick under-voltage check after the list):

  • If you’re using 3.5" disks, they will come with their own power supply, so they should be fine.
  • If you’re using 2.5" disks, you need to make sure they have their own power supply, or that you use a powered USB HUB that can provide enough power for all the disks you want to plug into it.
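
A quick way to check whether the Pi itself is suffering from under-voltage (this command is available on Raspberry Pi OS):

vcgencmd get_throttled
# throttled=0x0 means all good; a non-zero value with bit 0 set (e.g. 0x50005) means the Pi is under-volted right now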

Also, you want the USB HUB to have physical switches, so it stays ON in case of power outage. For example, I use this one for powering 3x 2.5" disks and the RPi 4B itself:
https://www.amazon.co.uk/gp/product/B07N5FTPRM/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
(This is for information only, I’m not saying you should buy this, and this link is not an affiliate link)

5 Likes