Updates on Test Data

They probably want the (soon to arrive?) capacity-reservation data from SLC… just not the performance-testing bits?

2 Likes

Genius!
(20 characters worth of genius, too) :smiley:

This will be data with a short lifetime, so it doesn’t really matter.

As I get older I realise that “young” and “old” become rather blurred concepts :wink:

(But touché! I suppose the answer is somewhere in the middle. They’ve been around a while and understand this form of distributed storage better than anyone but are also still learning and perfecting a new technology)

1 Like

They didn’t say the lifetime will be ‘short’ for the capacity-reservation data: just that a time-to-live will be specified, so it will eventually be deleted. My guess is that if they’re planning capacity at least 3 months in advance, the TTL would have a similar duration.

All a TTL means is that Storj will need to keep uploading as fast as data expires… if they want that capacity to remain reserved. If they decide they want to keep a spare 5PB online… I want to get a part of that action! :money_mouth_face:
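Keeping a fixed amount of reserved capacity full of TTL data implies a steady-state upload rate of capacity divided by TTL. A back-of-the-envelope sketch (the 5 PB and 90-day figures are just illustrative assumptions from this thread, not anything Storj has stated):

```python
# Steady-state upload rate needed to keep `capacity` bytes of TTL data
# online: every byte expires after `ttl` seconds, so on average it must
# be re-uploaded at capacity / ttl bytes per second.

PB = 10**15
DAY = 86_400

def sustain_rate_gbps(capacity_bytes: float, ttl_seconds: float) -> float:
    """Required network-wide upload rate in Gbit/s."""
    return capacity_bytes / ttl_seconds * 8 / 1e9

# Illustrative assumption: 5 PB reserved with a 90-day TTL.
print(round(sustain_rate_gbps(5 * PB, 90 * DAY), 1))  # ≈ 5.1 Gbit/s network-wide
```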

1 Like

The load wouldn’t be constant, but as far as I know we will hit the rate we are currently testing, at least for, let’s say, a few minutes or hours every day.

There are 2 factors that will change the rate you see on your node.

  1. The number of nodes with free space might increase after the nodes clean up the trash folder this weekend. That will reduce the load for your node.
  2. Our calculation might contain some mistakes here and there. I checked the numbers 3 times already and they look good to me. What I can’t check are the assumptions that were made up front. If the files that later get uploaded are a bit smaller than in our spreadsheet, the rate will be lower as well.

So in the end, this estimate is as accurate as any other estimate.
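The two factors above can be sketched numerically. All the inputs here are made-up placeholders; only the arithmetic is the point: the per-node rate scales inversely with the number of nodes that have free space, and proportionally with the actual average file size versus the assumed one.

```python
def per_node_rate_mbps(total_rate_mbps: float,
                       nodes_with_free_space: int,
                       actual_file_size: float,
                       assumed_file_size: float) -> float:
    """Hypothetical model of per-node ingress under the two effects above:
    a fixed request rate was budgeted assuming `assumed_file_size`, so
    smaller real files mean less bandwidth, and more eligible nodes
    spread the load thinner."""
    size_correction = actual_file_size / assumed_file_size
    return total_rate_mbps * size_correction / nodes_with_free_space

# Placeholder numbers: 10 Gbps total load, 20k eligible nodes,
# files turning out 20% smaller than assumed.
print(per_node_rate_mbps(10_000, 20_000, 0.8, 1.0))  # 0.4 Mbps per node
```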

5 Likes

Nothing wrong with being greedy. :wink:

But they pay for the capacity-reservation data, so I guess it won’t be that much data. And in the long run, earnings from SLC are peanuts, not worth the trouble.

@raert Thank you for explaining RS numbers. I was going to ask their meaning. Now I understand what they are talking about.
So, I understand that the new RS numbers will be smaller than the current numbers, decreasing the expansion factor?
That means Storj makes more profit.
That means Storj can pay us more? :thinking:
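For context, the expansion factor of a Reed-Solomon scheme is simply total pieces stored divided by the minimum pieces needed to reconstruct (n/k). The parameter values below are illustrative assumptions, not confirmed Storj production numbers:

```python
def expansion_factor(k: int, n: int) -> float:
    """RS(k, n): k pieces suffice to reconstruct, n pieces are stored."""
    return n / k

# Illustrative: uploading 1 GB at RS(29, 80) stores ~2.76 GB of pieces;
# a hypothetical smaller RS(29, 65) would cut that to ~2.24 GB.
print(round(expansion_factor(29, 80), 2))  # 2.76
print(round(expansion_factor(29, 65), 2))  # 2.24
```

So lowering n for the same k directly reduces how much raw node capacity each customer byte consumes.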

Your complaint would make sense only if the same people used both arguments. It falls apart the moment it turns out different people have different opinions.

4 Likes

Just had one heck of a spike on my nodes. Lots of bandwidth used but CPU, IOWait and IOPS were not a problem at all.
Whatever you’ve done today seemed to be less painful this end. :slight_smile:

5 Likes

Even higher peak this time. :+1:

Guess it’s time to go swap the 10 Gbit/s nic for a 25 Gbit/s now :slight_smile:

Th3Van.dk

4 Likes

Peaked well over 500 Mbit/s for me. Glad I have 1 Gbit/s now.

4 Likes

Off-topic: But I continue to be thankful for how cheap 10G has become for homelab use. Like $20, seriously?

3 Likes

Was that for one node/subnet? I only got about 100 Mbps (1-minute average). Still, that’s about 250 requests per second.
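Those two numbers together imply an average payload per request, by simple division. A quick sketch (the 100 Mbps and 250 req/s inputs are just the figures quoted above):

```python
def avg_request_size_kb(rate_mbps: float, requests_per_sec: float) -> float:
    """Average payload per request implied by bandwidth / request rate."""
    bytes_per_sec = rate_mbps * 1e6 / 8  # Mbit/s -> bytes/s
    return bytes_per_sec / requests_per_sec / 1e3

# 100 Mbps at 250 req/s -> ~50 kB per uploaded piece on average.
print(round(avg_request_size_kb(100, 250)))  # 50
```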

Even 40GbE is bargain bin at this point, and 25 and 100GbE are fast approaching.

The performance test data has a TTL, so it will not go to the trash; it will be deleted by the node directly, unless you recreated the databases. But the older test/abandoned data will go to the trash first.

Not at all. The latest updates allow selecting even slow nodes more often and keeping them busy without overloading them.
See

At least pi5 is confirmed:

You are paid for that data too; it doesn’t differ from customer data, and we expect that it emulates the same load those customers would generate.

It’s not, if you do not limit it below the minimum requirement.

6, so that tracks with your stats.