Fair point, I have no idea how much either of those costs.
I think he's saying that capacity reservation can solve the raw-space issue… but the current limitation is throughput: existing nodes have free space but can't receive ingress fast enough?
So the request is maybe for us to have faster Internet plans (or non-SMR HDDs, or whatever keeps writes from being slow). Like the request for "more nodes" is really "more network connections we can balance ingress across" (and more nodes, especially from new SNOs, could do that)?
I don't see that on my end. If I leave the nodes raw (ie on the internet line I have them on), I'm nowhere near saturating my connection. If I route them through an EU-central datacenter, I instantly saturate it and keep it saturated.
That, to me, means that nodes closer to the satellite are selected more often. There is no logical scenario in my head that matches this traffic pattern. I can't saturate my link, but routing halfway across the globe (+ added VPN overhead) gets it saturated?
Capacity, yes, but we are concerned about throughput. Bringing 1 PB of storage online on a 100 Mbit/s connection isn't going to help. I think we filled something like 3000 nodes in the past week. So if those nodes would add some extra hard drives (if possible with their setup), that would get us back to the throughput we had a week ago.
That is an old statement. We now see that the number of nodes with free space needs to stay high enough. So any additional node on a different internet connection or additional space on full nodes helps.
What isn't useful is adding another 1 PB to the 100 Mbit/s node I mentioned above.
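To put rough numbers on why that doesn't help, here's a quick back-of-the-envelope calculation (my own illustration, assuming decimal 1 PB = 10^15 bytes and a fully saturated link with no overhead):

```python
# How long would a 100 Mbit/s link take to fill 1 PB of storage?
capacity_bytes = 1e15            # 1 PB (decimal)
link_bytes_per_s = 100e6 / 8     # 100 Mbit/s = 12.5 MB/s

fill_days = capacity_bytes / link_bytes_per_s / 86400
print(f"{fill_days:.0f} days (~{fill_days / 365:.1f} years)")
# prints "926 days (~2.5 years)"
```

So the extra space behind that link would sit mostly idle for years; the bottleneck is the connection, not the disks.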
I can't even predict for my own nodes how much used space I will have at the end. As explained a few posts earlier, we are trying to reserve enough capacity in the network to answer that question. In the meantime we are preparing some alternative solutions to that problem. The surge capacity would be short term.
Ah, I get it. I can see they can cap my Internet connection… but I haven't filled any HDDs. So I just watch and wait…
For now I have enough free space both on the node and in the pool. I increase the size of the virtual disk and the node when it is close to running out of space. Expanding the pool may pose a problem, but there is enough space for now.
I'm a little wary to expand right now, even though I don't have enough nodes with free space to use all my IPs. The main reason is that at the moment uncollected garbage and trash account for almost a third of my used capacity. Something still isn't working right there. I will probably expand when all nodes are close to full, and hold off on expanding further until that issue is resolved. I'm also a little worried about what will happen to performance when expiration of the TTL data kicks in, combined with large GC runs still being needed. This is still an untested load scenario.
A side note: with the new node selection, it may not be all that useful to keep working with multiple IPs, as added load on the system reduces the chance node selection would pick my nodes. It still helps, but not as much. That's a good thing in general. Though I wanted to keep those IPs for collecting data for the earnings estimator across different sizes and ages of nodes. However, it's near impossible now to make a good estimation, as node performance can have a big impact. It'd be awesome if we could get average stats on ingress per node with free space.
Off-topic: but do I see some 1.105.4 sneaking out? Let's see if my setup can run the filewalker and ISP-capping ingress at the same time
So it's been a little over a month in testing; can we get any update on whether a client signed / is getting ready to sign, any kind of numbers (ie we are 30% there wrt requirements), or even what the reduction in throughput was after filling the 3000 nodes?
I'm not looking for a mountain of data; just a speck of dust will do.
First "Storj's select"
now "Storj's buffer"
Wouldn't it be simpler to just $1.5/TB → $2/TB?
just sayin'!
You know, economics laws and stuff
Another good joke. Like the one asking for free hardware. Nice try
I don't quite understand. The satellite knows the right amount of data stored per node. So why is it reporting a wrong stat to the node? Is there a difference?
I like the one where we add another ISP line in anticipation of maybe using it. Personal preference though, YMMV.
You might have misread that somewhere. I explained why we are going to plan on adding surge capacity so that nobody has to take unnecessary risks. That also covers adding another ISP line. With the surge capacity we are going to buy time so that the network can adapt to whatever the new situation will be.
Those are very strong words; I suggest "may be" is more appropriate.
How will it work? Like a temporary buffer that then uploads to nodes? That would add speed to client uploads.
You pay peanuts, you get potatoes.
So you expect that we need to run the surge capacity more permanently? That wouldn't be an issue. We can run the surge nodes for as long as needed. The costs are low enough for that.
The node selection will look like this. Let's say 95% of the pieces will get uploaded to the public network and 5% of the pieces to the surge nodes. Or whatever distribution we need. The numbers don't really matter here. All that matters is that we will mix them in with our public network in order to boost throughput.
There will be no rebalancing. The pieces that the surge nodes store will stay there for 30 days. Instead we just reduce the number of new pieces they are getting. A complete shutdown is possible by setting the free space to 0 and just waiting 30 days until they are empty.
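If I read that scheme right, the mixing could be sketched roughly like this (purely illustrative; the 95/5 split is just the example number from above, and the real satellite selection logic is of course more involved):

```python
import random

SURGE_FRACTION = 0.05  # example split from above; the actual ratio is tunable

def pick_pool(surge_fraction: float = SURGE_FRACTION) -> str:
    """Route a new piece to the surge pool with a fixed probability,
    otherwise to the public network."""
    return "surge" if random.random() < surge_fraction else "public"

# Draining works without rebalancing: advertise 0 free space so no new
# pieces arrive, and the 30-day TTL empties the surge nodes by itself.
counts = {"public": 0, "surge": 0}
for _ in range(10_000):
    counts[pick_pool()] += 1
print(counts)  # roughly a 95% / 5% split
```

The nice property is that shutdown needs no data migration at all: stopping new ingress plus waiting out the TTL is the whole drain procedure.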
Give us more potatoes. That's fine. The benchmark tests are telling us that the potatoes still have decent throughput and will help. This is not a question of hardware. Even a Pi5 has a great time these days. There is no need for datacenter-grade hardware. Run whatever you feel works best.
Come on, can we maybe stop this discussion? I understand that you have to try to ask for more money. I would do this as well. At the same time I look at my node, and the payout is already way higher than I thought possible this year. I can operate my node with that payout just fine. I understand some nodes might have costs for bandwidth that are too high. That's ok. We can't make this a net profit for all nodes. We only have to make it a profit for most nodes, like for example my own storage node. Potato nodes are welcome.
I know the costs; I wasn't suggesting you keep running it forever. I was suggesting that if one of the "big clients" has already signed up and "will be" is coming into effect, it's better if the SNOs hear about it. If the client hasn't signed up yet, that's where "may be" comes into effect.
Storj Labs is reacting to what the numbers are telling it (ie projected sales, network growth, nodes being added/removed). We SNOs are reacting to whatever crumbs of information we end up being fed. I think we have a right to be a bit more skeptical about any "will be"s.