High load after upgrade to v. 1.3.3

SMR drives might not do so great if filled to capacity; however, issues like that shouldn't show up right after an update… but I suppose they could…
I have noticed that if I have downtime, my ingress often increases greatly until I'm back in sync with the network… so if your update went a bit slow, then maybe that could affect it…

I mean, I had some issues the last couple of days, and when I finally got back on the network I peaked at 110 Mbit ingress with an average of nearly 5 MB/s…
but I doubt that's it.

It seems to be a trend that the 1.3.3 upgrade has caused higher loads, which is sometimes to be expected when new code is being developed…

SMR drives read just fine… so if you are near max node capacity, leave a bit of space free so the disk doesn't get too cluttered / fragmented… remember it has to read and move blocks around just like an SSD, so you will give it a shit ton of extra work if you fill it to 100%.
I've got no idea where the sweet spot is though… with SSDs they say 80%, but with an SMR drive I would think the number would be higher… maybe 90 or even 95% is fine… you would often feel it within a few weeks if it's having trouble, because it will get slower and slower at writing… though reading should be just fine.
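Just to put rough numbers on it: stopping at 90% on, say, an 8 TB drive still leaves around 800 GB of free space for the drive to shuffle blocks around in, while 95% leaves about 400 GB.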

I would just adjust the max concurrent requests setting in the config.yaml to something like 10-20.
It might look a bit ugly when you boot the node, but it will keep the network from flooding your node with requests your system isn't fast enough to answer anyway…
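
To be clear, that's the same line I paste with my final value at the bottom of this post; at the upper end of that range it would look something like this in config.yaml:

# Maximum number of simultaneous transfers
storage2.max-concurrent-requests: 20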

I run at 20 with 400 Mbit fiber, 48 GB RAM, a dedicated SSD for my OS, another dedicated SSD handling SLOG and L2ARC, and then I've got 5 HDDs in raidz1.
It's a monster that eats whatever the network throws at it… and still, my local 1 Gbit network infrastructure doesn't like the strain, and I have other people using my network, so latency is a thing.
My system can keep up with the network; even if it doesn't manage to complete every upload (15% or so get cancelled), it rejects zero… when running at max concurrent 20.
Though booting the node is f'kered, because pings and cleaning orders count as concurrent.

Anyway, it limits the number of concurrent actions, for the computer and for the network, to keep everything running smoothly.

And I know not everybody will agree (looks at Brightsilence), but I wouldn't run without it… I've turned it on and off like 15 times, sometimes letting it stay off for a few days of stable activity and then turning it on again… to me it makes a noticeable difference in my performance, latency and such.

Though now that I've got my system optimized to a T, I might be able to run it at 200+ again, or unlimited at 0.
But like I noted earlier, it's most likely my network that needs better gear… I'm using old 1 Gbit ethernet-capable routers converted into switches… or I assume they're switches, I suppose they could be hubs, maybe that's the issue… the server connection hops through a couple of those before hitting the fiber.

Anyway, long story short: limiting max concurrent requests has helped my node and infrastructure run smoothly.

Add this to the end of your config.yaml
# Maximum number of simultaneous transfers
storage2.max-concurrent-requests: 7
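
One thing worth adding, as far as I know: the node only picks the change up after a restart, and like I mentioned above, setting it to 0 (which I believe is the default if you leave the line out) means no limit at all.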

SGC goes back to check if he can finally run infinite with his current rig

WELCOME to the network: get spammed by 20 or so upload requests at once, plus cleanup and auth procedures… my system was at 20% iowait!!! for the first 10 minutes… seems to be back down to near 0% now… but it isn't a gentle start-up… ridden hard and put up wet…