SNO Capacity Planning

If all the space allocated so far fills up and stays filled, I will add disks. They are dirt cheap now, under $10/TB.

4 Likes

Not in the UK, they’re not… :roll_eyes:

2 Likes

Yes, but there are too many issues with the garbage collector that you will need to fix first. Also, maybe increase the node limit per subnet to 3?

1 Like

If new SNOs came just for a quick buck, they would leave as soon as anything got a bit more complicated, or at the first huge data deletion, or at any negative sign. I believe the slow rewarding system (at least until now) has been a very good filter against non-dedicated node operators.
The ones who joined until now and stayed through all the ups and downs are the backbone of the network. The new ones should fit the same profile: join for the long term and expect anything, but be optimistic and trust the team and the project. The rewards will come.

6 Likes

If the economics and time to fill HDDs remain consistent, I will be expanding in US data centers via colo only. The ingress at current rates makes that economically viable for me.

I have that going on now on a small scale and the drives are filling fast, but this approach is very capital intensive, and ROI in a reasonable time depends heavily on big sustained inbound flows from Storj customers (or the test mode going on now). I see that as the only real way to scale economically, though. The key is to find a colo facility that allows unlimited ingress; many have some sort of limit. The one I'm using now has no inbound limits.

Assuming the current test traffic is representative of the future, I won’t need to. My nodes don’t grow anymore.

I kind of assume Storj is now at the stage I was predicting, where hobbyist node operators slowly stop mattering.

Maybe. I don’t know if my ISP will handle tripled traffic. If they do, then probably yes, with the caveat I stated in the test data thread.

An alternative way for me would be to finally set up nodes at a local colocation facility, which at tripled traffic might make sense for me… I would need to evaluate this plan. Not a priority for me yet, though, so I’ll probably only do that after I see this kind of traffic consistently.

Cluster size seems to only be a problem for NTFS.

1 Like

You described Select three years early! But you may underestimate the scale ‘hobbyists’ can still run at: 24-bay enclosures, for example, are pretty common (and cheap on the used market).

1 Like

A 24-bay enclosure is still nothing compared to what any decent hosting company can spin up. The hundreds of thousands of disks Backblaze shares statistics on should be a representative number here, and there are many commercial actors on the market between your hobbyist 24-bay NAS and Backblaze who will jump at the opportunity once it finally becomes viable to spend human time on it, and with better latency to customers than the average hobbyist.

Of my 5 nodes, the two with 4 TB drives have been set to no ingress so I can upgrade their disks once my other 3 nodes become full (one 10 TB, and two 12 TB disks which are currently ~50% full and growing ~300 GB/day). When those are full, I will migrate the two 4 TB drives to 12 or 16 TB drives. By the time those are full, I will have figured out how to add new nodes (via the Windows UI toolbox) to my 2 existing Windows PCs, which have open bays. I can also add a drive to my Ubuntu PC, but it is low-powered with limited expansion capability. Looking forward to a bigger setup, like an 8-bay NAS dedicated to Storj.
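As a rough sketch, assuming the ~300 GB/day growth rate holds steady (it probably won't), one of those 12 TB disks fills in about 20 days:

```python
# Back-of-the-envelope fill-time estimate, using the numbers from the post above.
capacity_tb = 12.0        # one 12 TB disk
used_fraction = 0.50      # ~50% full today
growth_tb_per_day = 0.3   # ~300 GB/day, assumed constant

remaining_tb = capacity_tb * (1 - used_fraction)
days_to_full = remaining_tb / growth_tb_per_day
print(f"~{days_to_full:.0f} days until full")  # -> ~20 days
```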

3 Likes

You’re essentially asking for at least twice the risk. Why not dip into the war chest for 6 months and offer double or triple the reward? The community would respond in kind, I’m absolutely sure. While not the triple-digit ROI of the project’s genesis, it would be substantive enough for serious SNOs to re-invest, even knowing it was, say, just a six-month promise, with reduced rates applying thereafter.

In such a case I (we) could be a fairly substantive SNO, with arguably petabytes available in our existing setup. Last quarter’s annualized comparison (roughly from when the new rates came into effect) came in under 30% ROI for this project allocation, so I suspect we’d remain looky-loos, believe-it-when-we-see-it storage misers! Cumulatively and hesitantly adding capacity, which seems to be the gist of the responses within this thread. So consider upping the investment per TB to ensure your own and our exponential growth; that’s simply what it takes. Then I have an argument to quadruple up on space allocation, considering the removal of SLC data vetting, a 3 Gbps line per geographically stretched cluster, and 4 clusters of ~640 TB currently. Take that to your board table as a real feedback argument, please.

Regards,
Julio

1 Like

Oh Roxor, how’d I know you’d be the first to respond?

3 Likes

You caught me just before bed! :wink:

I see people promising more space… but it needs to come with more speed too, doesn’t it? (That’s why Storj is spinning up their own burst servers?) For what they need… spending on burst hardware offers a better return than bumping payouts for a few months.

Actually, they already are artificially bumping payouts with reserved capacity, aren’t they?

3 Likes

Absolutely they are, but there’s zero commitment. I’m suggesting they commit substantively (well, for 6 months) and let the efficiency of the existing network absorb the real load, without the trouble of spinning up their own instances; their resources could be better used elsewhere. They could erase/delete all that TTL data within a week, and then where are we left?

Julio

In less than a week of performance testing… they filled 3000 nodes… and performance dropped. And now, even though we’re receiving a firehose of paid capacity-reservation data… most SNOs are non-committal, still waiting for different paid data… from confirmed new/large customers. Give me a break.

The “existing network” has failed them.

They need those burst servers to deliver consistently high performance while the existing network gets their thumbs-out-of-their-butts :stuck_out_tongue_winking_eye:

I hear you, sensible points…

Nonetheless, I suggest they walk the walk before they talk the talk! lol :stuck_out_tongue: Triple the pricing on SLC reservation data only; then they’re paying to ensure the reserve is there, and they have the response data as proof of effectiveness to show their clients. It would be incumbent upon us to crank up bandwidth at that point as well, wouldn’t it?

Julio
P.S. Go to bed :stuck_out_tongue:

1 Like

Currently no, because there is no basis for such planning: almost every number the nodes provide is unreliable or just wrong. And unfortunately there is only slow progress in fixing this; whenever it gets mentioned, we hear it is “low priority”.
Due to various bugs, several of my nodes show wrong bandwidth, wrong used space, wrong trash, wrong average used space, wrong estimates, wrong everything…
On what basis do you want me to answer your question when I don’t even know how much space is occupied, used, or about to be trashed on a node?
So I guess plans to expand are on hold until the numbers are reliable again.
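
For what it’s worth, until the dashboard is trustworthy, a crude cross-check is to sum the bytes actually on disk and compare that against what the node reports. A minimal Python sketch, assuming a typical storagenode layout (the path below is hypothetical; adjust it for your setup):

```python
# Minimal sketch: sum actual on-disk blob usage to cross-check the dashboard.
import os

DATA_DIR = "/mnt/storagenode/storage/blobs"  # hypothetical path; adjust for your node

total_bytes = 0
for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass  # files can vanish mid-walk (GC moving pieces to trash)

print(f"Actual blob usage: {total_bytes / 1e12:.2f} TB")
```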

I see there are possible fixes in versions 1.105 and 1.106, but again it seems low priority for Storj to get them out, as there is no progress in rolling them out. My understanding for all of this is approaching zero.

And then there are the issues with the countless filewalkers and databases that cause one problem after another…

Edit: We have a saying here about “shooting yourself in the foot.” In my case, that is what Storj is doing with all these wrong numbers and unreliable data from the nodes, by not fixing them or not releasing the fixes in a timely manner.

3 Likes

Oh, and by the way, as far as cash reserves go, ask the COO (or the investment custodian loan, whatever happened there), who continuously liquidated a ton of tokens over last summer/fall below ICO rates. Let’s see them do this, and also have the balls to reverse the next net token flow report, covering themselves from that paper-loss debacle by buying during this recent dip.

Julio

Maybe solve the issues on the node side first, before asking customers to do that? I mean, the ongoing problems can easily backfire if the nodes cannot or will not keep up with such a data flow…

4 Likes

A post was merged into an existing topic: Re-implement RAID0/MergeFS in storagenode

https://storjnet.info
It’s a third-party site, but they’re doing a great job!