Updates on Test Data

Fair point, I have no idea how much either of those costs. :slight_smile:

I think he’s saying that capacity-reservation can solve the raw-space issue… but the current limitation is throughput: existing nodes have free space but can’t receive ingress fast enough?

So the request is maybe for us to have faster Internet plans (or non-SMR HDDs, or fixing whatever makes writes slow). Like the request for “more nodes” is really “more network connections we can balance ingress across” (and more nodes, especially from new SNOs, could do that)?

1 Like

I don’t see that on my end. If I leave the nodes raw (i.e. on the internet line I have them on), I’m nowhere near saturating my connection. If I route them through an EU-central datacenter, I instantly saturate it and keep it saturated.

That, to me, means that nodes closer to the satellite are selected more often. There is no logical scenario in my head that matches this traffic pattern. I can’t saturate my link directly, but routing halfway across the globe (plus added VPN overhead) gets it saturated?

Capacity, yes, but we are concerned about throughput. Bringing 1 PB of storage online on a 100 Mbit/s connection isn’t going to help. I think we filled something like 3,000 nodes in the past week. So if those nodes added some extra hard drives (if possible with their setup), that would get us back to the throughput we had a week ago.
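To put a rough number on why that 1 PB wouldn’t help: a back-of-the-envelope sketch (my own arithmetic, ignoring protocol overhead and erasure-coding expansion) of how long it takes to fill 1 PB over a 100 Mbit/s link:

```python
# Back-of-the-envelope: time to fill 1 PB of storage over a
# 100 Mbit/s connection. Numbers are the hypothetical ones from
# the post above; overhead and expansion factors are ignored.

LINK_MBIT_S = 100                           # assumed ingress link speed
CAPACITY_TB = 1000                          # 1 PB = 1000 TB

bytes_per_second = LINK_MBIT_S * 1e6 / 8    # 12.5 MB/s
seconds_to_fill = CAPACITY_TB * 1e12 / bytes_per_second
days_to_fill = seconds_to_fill / 86400

print(f"{days_to_fill:.0f} days (~{days_to_fill / 365:.1f} years)")
```

So even saturated around the clock, that node only soaks up ingress at ~12.5 MB/s; the extra capacity sits idle for years.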

That is an old statement. We now see that the number of nodes with free space needs to stay high enough. So any additional node on a different internet connection or additional space on full nodes helps.

What isn’t useful is adding another 1 PB to the 100 Mbit/s node I mentioned above.

I can’t even predict for my own nodes how much used space I will have at the end. As explained a few posts earlier we are trying to reserve enough capacity in the network to answer that question. In the meantime we are preparing some alternative solutions to that problem. The surge capacity would be short term.

1 Like

Ah, I get it. I can see they can cap my Internet connection… but I haven’t filled any HDDs. So I just watch and wait…

For now I have enough free space both on the node and in the pool. I increase the size of the virtual disk and the node when it is close to running out of space. Expanding the pool may pose a problem, but there is enough space for now.

1 Like

I’m a little wary of expanding right now, even though I don’t have enough nodes with free space to use all my IPs. The main reason is that at the moment uncollected garbage and trash account for almost a third of my used capacity. Something still isn’t working right there. I will probably expand when all nodes are close to full, and hold off on expanding further until that issue is resolved. I’m also a little worried about what will happen to performance when expiration of the TTL data kicks in, combined with large GC runs still being needed. This is still an untested load scenario.

A side note: with the new node selection, it may not be all that useful to keep working with multiple IPs, as added load on the system reduces the chance node selection picks my nodes. It still helps, but not as much. That’s a good thing in general. Though I wanted to keep those IPs for collecting data for the earnings estimator, for different sizes and ages of nodes. However, it’s near impossible now to make a good estimate, as node performance can have a big impact. It’d be awesome if we could get average stats on ingress per node with free space.

9 Likes

Off-topic: but do I see some 1.105.4 sneaking out? Let’s see if my setup can run the filewalker and ISP-capped ingress at the same time :wink:

So it’s been a little over a month of testing. Can we get any update on whether a client has signed or is getting ready to sign, any kind of numbers (e.g. we are 30% of the way to the requirements), or even what the reduction in throughput was after those 3,000 nodes filled up?

I’m not looking for a mountain of data, just a speck of dust will do.

2 Likes

3 posts were merged into an existing topic: Can’t split load on multiple nodes. Why?

First “Storj’s select”
now “Storj’s buffer” :smile:
Wouldn’t it be simpler to just go from $1.5/TB to $2/TB?

just sayin’!
You know, economics laws and stuff :roll_eyes: :smile:

3 Likes

Another good joke. Like the one asking for free hardware. Nice try :slight_smile:

1 Like

I don’t quite understand. The satellite knows the right amount of data stored per node. So why is it reporting a wrong stat to the node? Is there a difference?

I like the one where we add another ISP line in anticipation of maybe using it. Personal preference though, YMMV.

1 Like

You might have misread that somewhere. I explained why we plan to add surge capacity, so that nobody has to take unnecessary risks. That also covers adding another ISP line. With the surge capacity we are going to buy time so that the network can adapt to whatever the new situation will be.

Those are very strong words, I suggest “may be” is more appropriate.

How will it work? Like a temporary buffer that then uploads to the nodes? That would add speed to client uploads.

You pay peanuts, you get potatoes. :slightly_smiling_face:

3 Likes

So you expect that we’ll need to run the surge capacity more permanently? That wouldn’t be an issue. We can run the surge nodes for as long as needed. The costs are low enough for that.

The node selection will look like this: let’s say 95% of the pieces get uploaded to the public network and 5% of the pieces to the surge nodes. Or whatever distribution we need. The numbers don’t really matter here. All that matters is that we will mix them in with our public network in order to boost throughput.
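The split described above can be sketched as a simple weighted coin flip per piece. This is an illustration of the idea, not Storj’s actual selection code; the 95/5 split and the pool names are just the example numbers from the post:

```python
import random

# Per-piece pool choice: public network with probability 0.95,
# surge nodes with probability 0.05 (illustrative split only).
def pick_pool(surge_share=0.05, rng=random):
    return "surge" if rng.random() < surge_share else "public"

# Simulate 100,000 piece uploads with a fixed seed for repeatability.
rng = random.Random(42)
counts = {"public": 0, "surge": 0}
for _ in range(100_000):
    counts[pick_pool(rng=rng)] += 1

print(counts)   # roughly 95,000 public / 5,000 surge
```

The point of the sketch: because the choice is independent per piece, shifting throughput between the pools is just a matter of changing one number.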

There will be no rebalancing. The pieces that the surge nodes store will stay there for 30 days. Instead, we just reduce the number of new pieces they are getting. A complete shutdown is possible by setting the free space to 0 and just waiting 30 days until they are empty.

Give us more potatoes. That’s fine. The benchmark tests are telling us that the potatoes still have decent throughput and will help. This is not a question of hardware. Even a Pi 5 has a great time these days. There is no need for datacenter-grade hardware. Run whatever you feel works best.

Come on, can we maybe stop this discussion? I understand that you have to try to ask for more money. I would do this as well. At the same time, I look at my node and the payout is already way higher than I thought possible this year. I can operate my node with that payout just fine. I understand some nodes might have too-high bandwidth costs. That’s OK. We can’t make this a net profit for all nodes. We only have to make it a profit for most nodes, like for example my own storage node. Potato nodes are welcome.

I know the costs; I wasn’t suggesting you keep running it forever. I was suggesting that if one of the “big clients” has already signed up and “will be” is coming into effect, it’s better if the SNOs hear about it. If the client hasn’t signed up yet, that’s where “may be” comes into effect.

Storj Labs is reacting to what the numbers are telling it (i.e. projected sales, network growth, nodes being added/removed). We SNOs are reacting to whatever crumbs of information we end up being fed. I think we have a right to be a bit more skeptical about any “will be”s.

2 Likes