Over the last two days I've seen a massive decrease in bandwidth usage (around 50%) without changing anything on my node. I saved the logs of the last day in a file called t1 (13-14/1), and this is what I saw:
cat t1 | wc -l
128347
cat t1 | grep ERROR | wc -l
57849   # 45% of the log lines are errors
cat t1 | grep ERROR | grep "too many" | wc -l
57434   # almost all of them are caused by the concurrent-requests limit
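A rough way to see whether these errors come in bursts (assuming each log line starts with an ISO timestamp, as storagenode logs normally do) is to count the "too many" errors per hour:

grep ERROR t1 | grep "too many" | cut -c1-13 | sort | uniq -c   # first 13 chars = date + hour

If the counts are heavily skewed towards a few hours, the requests really are arriving in bursts rather than spread out.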
And now I am in the same situation as in my previous post, except that this time my disk can't handle any further increase in concurrent requests. Given that the node was working fine for the last 10 days, this suggests the requests are now badly distributed over time: my node accepts a lot of requests at the same moment and then none for a while (right?). Also, my disk usage has decreased from 0.8 to 0.78 TB; growth that took a LONG time to build up (about 2 weeks) was erased in a matter of 2 days. My questions are:
1) Why do I even accept so many concurrent requests when I have set a limit?
2) What is the real reason bandwidth dropped (if not the distribution of errors)?
3) What else can I do?
Verify that your auto-updater is actually updating and that you are running the current version. This happened to me last month and I had a hell of a time figuring it out.
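For a Docker node, one quick way to check (container name "storagenode" is just the common default, adjust to yours) is to look at which image is running and grep the startup logs for the version line:

docker ps --format '{{.Image}}'                       # which image/tag is actually running
docker logs storagenode 2>&1 | grep -i "version" | head -n 5

Then compare that against the latest release before assuming the updater is working.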
My bandwidth has been pretty steady and my disk usage is actually up over the last two weeks.
Data is back on the rise, so I think the data purge is now over. It looks like about 140 TB were deleted over the last 9 days. My trash ballooned from a normal 45 GB to 90 GB and is slowly shrinking now that the 7-day deletion retention period is passing.
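If you want to watch the trash drain, a simple check (the path here is only an example, point it at your own storage directory) is:

du -sh /mnt/storagenode/storage/trash

Run it once a day and the number should keep dropping as the 7-day retention expires.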