Updates on Test Data

OK. Show your traffic pattern for several nodes

Screenshot 2024-06-24 154544

OK. Now, for 1 hour, lower the available space on half of your nodes (as if they were full) and then show your traffic pattern

I don't understand, what do you mean?

Screenshot 2024-06-24 155926
And this is the second half of my nodes, in another place with another connection.

On Friday evening I moved 1 server with 8 nodes to the second connection; that's why there is a spike.

If you still don’t understand then carry out the experiment I suggested…

Ah, OK, now I got it. I watch them all together; I don't track individual nodes, I just balance them sometimes.

I see that the overall traffic on Vocation has gone down a little bit, but the nodes are filling faster.

You still don’t understand…

I don’t understand you - do you have a denser upload?

I think more of them just end up on the HDD, if the connection isn't overloaded.

It feels like the performance testing hasn’t really tried to push our connections for about a week or so. I think we’re in the boring “normal” phase of 50%-performance capacity-reservation uploads. We’ll cap our month of TTL data in the next week or two then just coast… deleting as much as is uploaded every day…

I think the SNOs collectively deserve a pat on the back by storj for basically absorbing the entire upload testing with minimal(YMMV) issues all round, even proving the Cassandras with their “ZOMG !!!111oneeleven all the nodes will be full in 48 hours and we’ll not have anywhere else to upload everything!” visions, wrong.


I think “minimal issues” is a tad optimistic.
Significant performance issues have been definitely identified, and I’m not sure we’ve quite seen all the impacts yet.
But the important thing is that they were identified and Storj now know more about them.

Whole point of testing, isn’t it? :slight_smile:


They were talking about adding their own 'surge nodes'. So we cannot know how much was absorbed by SNOs.

100%, as far as I know no surge nodes were added.


That is correct. This morning we set up the first server as a testing ground. It isn't part of the node selection yet. That has to wait for some code changes that should get deployed tomorrow. In any case this is just a single server and shouldn't have a significant impact on your throughput. If we are lucky it will have an impact on our throughput. Not by much, but maybe enough to calculate how many servers we would need in total.
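The extrapolation described above can be sketched as simple back-of-the-envelope arithmetic: measure the throughput gain from the single test server, then divide the target surge capacity by it. All numbers and names below are hypothetical, not actual Storj figures.

```python
import math

def servers_needed(target_gbps: float, per_server_gbps: float) -> int:
    """Estimate fleet size from one measured data point.

    target_gbps: the surge throughput the fleet should absorb.
    per_server_gbps: the throughput gain observed from one test server.
    """
    return math.ceil(target_gbps / per_server_gbps)

# Example: if one server adds ~1.5 Gbps and the goal is 40 Gbps of
# surge capacity, roughly 27 servers would be needed.
print(servers_needed(40.0, 1.5))  # -> 27
```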


For anyone who missed this topic (including me): what are these servers?

This post and the next couple from littleskunk are a good overview. Basically the community may be OK providing raw capacity… but we’re not so great at throughput (because fast nodes get filled: but not expanded again). So Storj will run their own nodes to handle surges in uploads to keep customer speeds up.

I’m fine with this: because it’s our own damn fault :wink: . We complain about not filling fast enough: but thousands of nodes get filled and the SNOs operating them still don’t bring extra capacity online… so the upload capacity of those nodes is lost.

(And inevitably: when surge nodes come online and more SNOs realize: they’ll come to the forum to cry. The same people who said they won’t expand if full. The same who say they haven’t expanded even though they’re full now. The same ones who are getting paid today for reserved-capacity TTL data… but who still say they’re waiting for “real customer” data. Crocodile tears.)


Not so long ago there was another test. It was about how much payment could be reduced without losing too many SNOs. So it is arguable whose fault this is. :wink:


Could the RS numbers be tailored to each customer? I mean, some customers may need more redundancy, others less redundancy but more throughput. Couldn't these numbers just be a user choice?
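The trade-off behind the RS numbers can be sketched like this: with Reed-Solomon erasure coding a file is split so that any k of n stored pieces suffice to reconstruct it. The k/n pairs below are illustrative only, not Storj's production settings.

```python
def expansion_factor(k: int, n: int) -> float:
    """Storing n total pieces when only k are needed costs n/k times
    the original file size in raw storage."""
    return n / k

def pieces_that_can_fail(k: int, n: int) -> int:
    """The file survives as long as any k of the n pieces remain."""
    return n - k

# A redundancy-heavy profile: large safety margin, high storage cost.
print(expansion_factor(20, 80))      # -> 4.0 (4x raw storage)
print(pieces_that_can_fail(20, 80))  # -> 60

# A throughput-leaning profile: fewer pieces to upload per file and
# lower expansion, but a smaller margin for lost pieces.
print(expansion_factor(29, 48))
print(pieces_that_can_fail(29, 48))  # -> 19
```

Letting customers pick k/n per bucket would let them slide along this durability-versus-cost/throughput curve themselves.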