Updates on Test Data

Also, consider that the potential customer may be asking for evidence that the Storj network can handle the traffic under the conditions they specify. They may be actively testing it themselves and looking at the results.

7 Likes

How long is this test running, approximately?

2 Likes

I bet next week we'll be testing AMD vs Intel performance.

1 Like

I guess many problems (high RAM usage, crashes) could be avoided by just setting storage2.max-concurrent-requests to a reasonable value.
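For anyone who wants to try that, here is a minimal sketch of what it could look like in the node's config.yaml. The value below is only an illustration, not a recommendation; the default of 0 means unlimited, and the right number depends on your disks and connection:

```yaml
# config.yaml (storagenode)
# Reject new transfers beyond this many simultaneous requests.
# 0 (the default) means unlimited; 30 here is just an example value.
storage2.max-concurrent-requests: 30
```

As far as I know the node has to be restarted for the change to take effect, and setting it too low will cost you races you could otherwise win.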

Littleskunk may be able to answer that, but when some of these customers sign on, the current conditions "may" be the new normal. If your nodes are restarting/crashing or the traffic load is too much for your current configuration, you may want to work on addressing it now while these tests are ongoing.

6 Likes

There are only 2 reasons to stop the test.

  1. No deals are getting signed.
  2. Deals are getting signed and the test traffic gets replaced with real traffic.
10 Likes

Exactly.
Iā€™m grateful for these tests as this is a great opportunity to realise what works, what doesnā€™t and to fine tune the setup.
I just wish the days would be 48 hours long :neutral_face:.
And I also like to molest the ISP :slightly_smiling_face:.

10 Likes

Are you trying to say that this is the new normal? We have had enough bugs in the last few months that still haven't been properly fixed and still cause a lot of problems, and now this?

Doesn't the very first post in this thread, from a month ago, say "We don't expect that this load is temporary, and may in fact be the new normal. Please take steps to correct issues your nodes may have keeping up with this load…"?

So, plenty of warning and no surprises?

9 Likes

I do not remember it… Could you please find that info?
Usually high RAM usage is related to slow disks. My Docker for Windows nodes use about 1-2 GB (because the disks are network-connected, even if only locally), while the Windows service uses only about 300 MB.

I don't understand the thinking behind requesting to slow the testing down. My thoughts are: let my node roast. Push it to the limit. Move fast and break things. We'll be better off when the project is ready to take in the big players. I'll adjust and optimize.

12 Likes

My nodes are fine but you are saturating my bandwidth.

3 Likes

It amuses me that after months of complaining that we don't have enough traffic, so many people are now complaining that… we have too much traffic!

Tweak your configuration to suit your situation: reduce concurrency or bandwidth if you have none to spare or your router is struggling, and be grateful that the traffic (and hopefully the money!) is flowing :slight_smile:

9 Likes

That is interesting.
I'm seeing about 40 Mbit/s of download per IP (irrespective of the number of nodes behind it). This is more or less consistent across multiple ISPs in three separate European countries.
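For a rough sense of scale (just arithmetic on that figure): 40 Mbit/s ≈ 5 MB/s, which works out to roughly 430 GB per day, or about 13 TB per month per IP, before any deletes or expansion factor.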

1 Like

Only about 0.5 PB of (pre-expansion) data is left to be uploaded, so I don't think we have anything more to worry about. It should be done in a couple of weeks.
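Back-of-the-envelope, assuming the 0.5 PB figure and roughly two weeks: 0.5 PB / 14 days ≈ 36 TB per day, i.e. about 3.3 Gbit/s of aggregate ingress across the whole network, so the timeline seems plausible if the current test rate keeps up.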

1 Like

Exact same feeling here.

Besides, I've got some really crappy nodes, including some very dubiously functioning SMR drives, although with enough RAM (partially expanded artificially using zram-tools). But they are far from being roasted at the moment.

Still, it's nice to see my nodes grew by 1.5 TB over the last 10 days. I've never seen such an increase before, although only about a third of it is coming from SLC.
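In case anyone wants to try the zram trick mentioned above, here is a minimal sketch assuming the Debian/Ubuntu zram-tools package (the file path and variable names come from that package; treat the values as examples, not recommendations):

```sh
# /etc/default/zramswap  (read by the zramswap service from zram-tools)
ALGO=zstd      # compression algorithm for the zram device
PERCENT=50     # zram size as a percentage of physical RAM
PRIORITY=100   # swap priority, so zram is used before disk swap

# apply the change:
#   sudo systemctl restart zramswap
```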

2 Likes

Should've used the months with low traffic for testing and code optimization.

3 Likes

Hi, sorry for joining the party late, but is there any link or summary of what we need to change to adapt to this?

I think it is fair to ask for recommended config tweaks for node operators.

1 Like

He's talking about test data that will be removed from nodes soon (TTL 30 days?) in another big deletion effort.

2 Likes

Yeah, so, big deal?
Or am I really missing the point?
It's a nice treat for the network, to see whether nodes get roasted or not.

Besides, in my case only about a third of it is test data.

1 Like