I'm almost caught up with GCs and trash deletion. Things have been speeding up on my end as those finish, and more races end up being won, so I've been adding a lot more data as well.
HDDs are fine. But my caching SSDs should be, like, 20× faster. With this absolutely ancient HBA from the stone age they barely do 2k IOPS. And I've already spent too much time trying to fit (physically!) this HBA into the box; anything more modern has bigger dimensions.
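If anyone wants to sanity-check whether the HBA is the bottleneck, here's a minimal Python sketch that measures 4K random-read IOPS with the page cache bypassed. `/dev/sdX` is a placeholder for the SSD behind the HBA; it's read-only, so safe on a live disk, but at queue depth 1 it gives you a floor rather than the SSD's maximum - `fio` with a deeper queue is the proper tool.

```python
# Minimal sketch: 4K random-read IOPS on a block device, bypassing the page
# cache with O_DIRECT. Needs root to open the raw device. Linux only.
import mmap
import os
import random
import time

DEV = "/dev/sdX"   # placeholder - the SSD behind the HBA
BLOCK = 4096       # 4K reads, the usual IOPS benchmark size
DURATION = 10      # seconds

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, BLOCK)  # anonymous mmap is page-aligned, as O_DIRECT requires

ops = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    offset = random.randrange(0, size // BLOCK) * BLOCK
    os.preadv(fd, [buf], offset)  # one aligned 4K read straight from the device
    ops += 1

os.close(fd)
print(f"{ops / DURATION:.0f} IOPS (queue depth 1)")
```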
Yeah, fun story. I think I maxed out what I could do with this box without frying the contents.
I think this is the default now? Not sure. Anyway, it's not me complaining, I've been doing this for years. Just pointing out the value of fast-enough file walkers.
> Since November, you have doubled it and maintained around 300 Mbps until May when it skyrocketed to 1000 Mbps. Now, after upgrading to 10 Gbps, you have maintained a 95th percentile at 2500 Mbps. What are your plans moving forward as we need to find a solution for the large amount of traffic you are utilizing?
Sounds like this will cost a lot, maybe it's time to call it quits.
So you paid to upgrade to 10 Gbps… and they have an issue with you sustaining 25% of that? Maybe turn it around:
"I havenāt received any notice of outages or maintenance in my area. If 10G plans are currently degraded: how soon will you be able to restore full service, and whatās a reasonable speed I could expect while you make the repairs?
Iād be happy adjust usage until you restore capacity. You provide a great service that I hope to use for a long time - thanks for keeping me in the loop on this issue."
Just noticed: connection latency from outside to the affected nodes is astronomical, thousands of milliseconds. I'm wondering if this could be the culprit behind the deteriorating Audit score.
These are the nodes I moved from Oracle VPS to AirVPN recently.
I'll have to look at alternative AirVPN endpoints. The weird thing is: ping to the VPN endpoint from the node is very small, under 20 ms, but reaching the node from outside (probing the port or reading HTTP) takes forever.
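For measuring that from outside, here's a quick sketch that times raw TCP handshakes to the node's public endpoint. `HOST` is a placeholder; 28967 is the default node port.

```python
# Minimal sketch: time TCP connects to the node's VPN-exposed endpoint to
# quantify the multi-second latency seen from outside.
import socket
import time

HOST, PORT = "node.example.com", 28967  # placeholder endpoint
SAMPLES = 10

for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=30):
            print(f"connect: {(time.monotonic() - start) * 1000:.0f} ms")
    except OSError as exc:
        print(f"failed after {(time.monotonic() - start) * 1000:.0f} ms: {exc}")
    time.sleep(1)
```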
It is not related to the pause in the ingress testing - the massive ingress stopped at 6:40 PM, and yet there was still high-latency crap after that, until I switched to OpenVPN.
(I'll try messing with the MTU on WireGuard - see the sketch below - but I'd assume the default configuration that AirVPN recommends should work.)
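For reference, the MTU knob lives in the `[Interface]` section of the WireGuard config. A sketch with placeholder keys, addresses, and endpoint - 1320 is just a conservative value to try, not an AirVPN recommendation (WireGuard's default is 1420):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.x.x.x/32          # placeholder, from the provider's config generator
MTU = 1320                     # try lowering from the default 1420 if large packets stall

[Peer]
PublicKey = <server-public-key>
Endpoint = <vpn-endpoint>:<port>
AllowedIPs = 0.0.0.0/0
```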
Separate question: is this an expected outcome of the node experiencing multi-second latency for a couple of days?
Usually only the online score should be affected.
However, if your node cannot provide a piece for audit after 3 attempts with a 5-minute timeout each - this audit will be considered failed.
But I also see that the suspension score is affected too - this means that your node responded with an unknown error on an audit request.
I would suggest checking your logs for errors related to GET_AUDIT/GET_REPAIR to see what's going on. Unfortunately the node wouldn't detect failures caused by timeouts, but you may try to calculate the time between start and finish for the same piece.
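A rough sketch of that timing calculation, assuming the usual storagenode log shape (ISO timestamp first, then a JSON payload with "Piece ID" and "Action") - adjust the parsing if your log format differs:

```python
# Pair "download started" / "downloaded" log lines per piece and print how
# long each GET_AUDIT / GET_REPAIR took. Reads log lines from stdin.
import json
import sys
from datetime import datetime

started = {}  # piece id -> start timestamp

for line in sys.stdin:
    brace = line.find("{")
    if brace == -1:
        continue
    try:
        ts = datetime.fromisoformat(line.split()[0].replace("Z", "+00:00"))
        payload = json.loads(line[brace:])
    except ValueError:
        continue
    if payload.get("Action") not in ("GET_AUDIT", "GET_REPAIR"):
        continue
    piece = payload.get("Piece ID")
    if "download started" in line:
        started[piece] = ts
    elif "downloaded" in line and piece in started:
        secs = (ts - started.pop(piece)).total_seconds()
        if secs > 60:  # anything approaching the 5-minute audit timeout is trouble
            print(f"{payload.get('Action')} {piece} took {secs:.0f}s")
```

Feed it the logs, e.g. `docker logs storagenode 2>&1 | python3 audit_times.py`.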
In the meantime, I'm convinced AirVPN crumbles under load, with both OpenVPN and WireGuard. OpenVPN was a bit better.
So, right now I"m routing traffic to those nodes through my home connection, (Iām the vpn), all traffic will be shared, but at least nodes will survive until I find better VPN.
If ISPs start throttling our connections, Storj would be in big trouble, and the wide distribution will not be so wide anymore.
And how exactly would this throttling work with Storj? If they limit bandwidth, the node will lose pretty much all races, making it uninteresting for the SNO, and the node will be shut down.
If they limit the number of connections, the node will lose audits too, and get DQed.
I have nodes in 6 locations. In 5 of them, there is only one ISP in the area. So I can't switch to anyone else.
In the 6th location there are 2 ISPs: the first one and another. I'm currently on the other one, but if the first ISP takes my nodes off, I'll have to put all of them in the 6th location, making the second ISP the only choice, until it decides to kill 'em all too.
I don't know how this professional use case will fit with home-grade nodes (setups, internet service) in the long run, but it becomes more challenging every day.
As far as I understand, if the node starts to lose races, the satellite will reduce the amount of traffic to these throttled nodes and select others that are not throttled.
It wonāt lose any more races than it would if you had a lower bandwidth service from your ISP.
Also, as @Alexey mentioned, I believe the new node selection process means the satellite selects your nodes a bit less frequently until it starts winning races again.
So although I suspect your Storj throughput would drop (intended result), the success rate wouldn't drop too much.
The alternative is shutting down the servers completely, so I suppose this is the lesser of two evils…
Did they throttle 10-15 years ago, when everyone was illegally torrenting movies and games like CRAZY? Spoiler alert: no. Everyone was torrenting movies and games at full speed, 24/7, and nothing happened. And Storj doesn't even use your full speed at all times, just saying!
There are like ~11,000 unique SNOs all over the world; currently the case is basically non-existent for ISPs, and I doubt it ever will be, even if that rises 10x, 100x! It will be nothing compared to how big the torrenting mania was.
And even IF it does, it's what I have been saying: Storj needs more SMALL SNOs. BIG SNOs with 10 Gbps look like they got problems first, as flwstern shows. Good. We don't need whale SNOs on machines rented in datacenters; we need MORE smaller SNOs, with home fiber, that no one cares about.