OK. Show your traffic pattern for several nodes

OK. Now, for 1 hour, lower the available space on half of your nodes (as if they were full) and then show your traffic pattern
I don't understand, what do you mean?

and this is the second half of my nodes, in a different place with a different connection
On Friday evening I moved 1 server with 8 nodes to the second connection, that's why there is a spike.
If you still don't understand then carry out the experiment I suggested…
Ah, OK, now I get it. I watch them all together, I don't track individual nodes, I just balance them sometimes.
I see that overall traffic on Vocation has gone down a little bit, but the nodes are filling faster.
You still don't understand…
I don't understand you - do you have a denser upload?
I think just more of them end up on the HDD, if the connection isn't overloaded.
It feels like the performance testing hasn't really tried to push our connections for about a week or so. I think we're in the boring "normal" phase of 50%-performance capacity-reservation uploads. We'll cap our month of TTL data in the next week or two then just coast… deleting as much as is uploaded every day…
I think the SNOs collectively deserve a pat on the back from Storj for basically absorbing the entire upload testing with minimal (YMMV) issues all round, even proving the Cassandras, with their "ZOMG!!!111oneeleven all the nodes will be full in 48 hours and we'll not have anywhere else to upload everything!" visions, wrong.
I think "minimal issues" is a tad optimistic.
Significant performance issues have definitely been identified, and I'm not sure we've quite seen all the impacts yet.
But the important thing is that they were identified and Storj now know more about them.
Whole point of testing, isn't it?
They were talking about adding their own "surge nodes". So we cannot know how much was absorbed by SNOs.
100%, as far as I know no surge nodes were added.
That is correct. This morning we set up the first server as a testing ground. It isn't part of the node selection yet. That has to wait for some code changes that should get deployed tomorrow. In any case this is just a single server and shouldn't have a significant impact on your throughput. If we are lucky it will have an impact on our throughput. Not by much, but maybe enough to calculate how many servers we would need in total.
For anyone who missed this topic, including me, what are these servers?
This post and the next couple from littleskunk are a good overview. Basically the community may be OK providing raw capacity… but we're not so great at throughput (because fast nodes get filled: but not expanded again). So Storj will run their own nodes to handle surges in uploads to keep customer speeds up.
Iâm fine with this: because itâs our own damn fault
. We complain about not filling fast enough: but thousands of nodes get filled and the SNOs operating them still donât bring extra capacity online⌠so the upload capacity of those nodes is lost.
(And inevitably: when surge nodes come online and more SNOs realize it: they'll come to the forum to cry. The same people who said they won't expand if full. The same who say they haven't expanded even though they're full now. The same ones who are getting paid today for reserved-capacity TTL data… but who still say they're waiting for "real customer" data. Crocodile tears.)
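To put some very rough numbers on the "lost throughput" argument: here's a toy back-of-the-envelope model (Python, all numbers made up for illustration, nothing to do with the real fleet or Storj's code). The point is just that once fast nodes fill and aren't expanded, their ingress bandwidth stops counting, even though plenty of raw capacity is left elsewhere:

```python
# Toy model: fleet-wide ingress before and after the fast nodes fill up.
# All counts and rates are invented for illustration only.

fleet = [
    # (node count, ingress per node in MB/s, is_full)
    (2_000, 10.0, True),   # fast nodes that filled during the tests
    (8_000, 2.0, False),   # slower nodes that still have free space
]

def usable_ingress(nodes):
    """Aggregate ingress from nodes that can still accept data."""
    return sum(count * rate for count, rate, full in nodes if not full)

before = sum(count * rate for count, rate, _ in fleet)
after = usable_ingress(fleet)
print(f"ingress with every node accepting data: {before / 1000:.1f} GB/s")
print(f"ingress once the fast nodes are full:   {after / 1000:.1f} GB/s")
# Surge nodes (or SNOs expanding the full nodes) would claw back the difference.
```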
Not so long ago there was another test. It was about how much payment could be reduced without losing too many SNOs. So it is arguable whose fault this is.
Could the RS numbers be tailored to each customer? I mean someone maybe needs more redundancy, someone else less redundancy but more throughput. Can't they just make these numbers a user choice?
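For context on what those numbers trade off, here's a rough sketch (Python, just arithmetic). I'm assuming the commonly cited defaults of 29/35/80/110 (minimum / repair / success / total), which may not be exact, and the alternative parameter sets below are purely hypothetical:

```python
# Rough illustration of the RS trade-off: fewer pieces kept per segment means
# less stored redundancy and less client work per upload, but a thinner safety
# margin. Not Storj's actual code; the parameter sets are assumptions.

def expansion_factor(k: int, success: int) -> float:
    """Stored bytes per customer byte once the upload finishes."""
    return success / k

def fanout(total: int) -> int:
    """Parallel piece uploads attempted per segment (long tail gets cancelled)."""
    return total

profiles = {
    "assumed current defaults": (29, 35, 80, 110),
    "more throughput, less redundancy (hypothetical)": (29, 35, 50, 70),
    "more redundancy for a cautious customer (hypothetical)": (20, 30, 80, 110),
}

for name, (k, repair, success, total) in profiles.items():
    print(f"{name}: expansion ~{expansion_factor(k, success):.2f}x, "
          f"fan-out {fanout(total)} pieces per segment")
```

So a per-customer choice would basically let them pick where to sit on that expansion-vs-fan-out curve, at the cost of a smaller margin before repair kicks in.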