SNO Capacity Planning

I have already created several additional nodes, but so far they are not completely full, and somehow I’m not exceeding 150 Mbit/s of download throughput (ingress).
I have the drives for an additional 250 TB of capacity, so keep the data flowing.

3 Likes

Yes, kind of. It should be possible to measure the growth rate over a single day and then estimate how much that would amount to after 30 days. I wouldn’t call it a maximum size, though. Your node will continue growing, just a lot slower: there is still other customer data with no TTL getting uploaded; we just don’t notice it because of the high inflow of TTL data.
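
A back-of-the-envelope sketch of that estimate (the daily ingest figure is a made-up placeholder, and it assumes all current ingest carries the same 30-day TTL):

```python
# Estimate where a node plateaus if today's ingest is all 30-day TTL data.
# At equilibrium, each day's expiring data is replaced by one day's ingest,
# so the node levels off near daily_ingest * ttl_days.

daily_ingest_tb = 1.2   # hypothetical: growth measured over one day, in TB
ttl_days = 30           # TTL observed on the current test data

steady_state_tb = daily_ingest_tb * ttl_days
print(f"estimated plateau: ~{steady_state_tb:.0f} TB")  # ~36 TB
```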

There is also the possibility that your node will shrink over time. Imagine you end up with something like 100 TB of used space because you are one of the few nodes with that much space. Now the number of nodes doubles: that should cut your portion of the TTL data roughly in half.

And last but not least, more customers are coming. We might be at a tipping point. At first we were happy about a few customers in the 100 TB range. If they are happy with the service, they will recommend us to other customers, and we can slowly win bigger and bigger customers. At some point I would expect rapid growth; this should start to snowball. Let’s hope for the best :slight_smile:

The TTL data doesn’t go into the trash. It gets deleted directly and is therefore replaced with new uploads. So if you were to connect a 4 TB storage node, it would fill those 4 TB very quickly and then keep them full beyond the 30-day mark: any piece that expires gets replaced with a new one in short order.
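
A toy day-by-day simulation of that behaviour (all numbers invented for illustration): the node fills in about a week, then expirations and new uploads keep it pinned at capacity well past day 30.

```python
# Capacity-limited node receiving TTL data: fill up, then stay full.
capacity_tb = 4.0       # the 4 TB node from the example above
ingest_tb_day = 0.5     # hypothetical daily ingest while space is free
ttl_days = 30

batches = []  # (day_uploaded, tb) for data still on disk
for day in range(91):
    # expired pieces are deleted directly (no trash), freeing space at once
    batches = [(d, tb) for (d, tb) in batches if day - d < ttl_days]
    used = sum(tb for _, tb in batches)
    intake = min(ingest_tb_day, capacity_tb - used)  # refill freed space
    if intake > 0:
        batches.append((day, intake))
    if day % 15 == 0:
        print(f"day {day:2d}: {used + intake:5.2f} TB used")
```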

7 Likes

Not in an ext4 filesystem. You can use a 4K block size with ext4; I already use it for 22 TB drives.
Maybe you should go with Linux for your next setups.

5 Likes

I just hope node growth keeps up. I don’t see much hype around that outside of this forum. And the network could use a little more scale on that end. Existing SNOs expanding helps, but adding more SNOs would be much better.

2 Likes

I will expand based on my bandwidth and the point of balance with data being deleted.
Point of balance:
1 Gbit → 324 TB
2 Gbit → 648 TB

and so on…
With a 30-day TTL, of course (the sketch below shows the arithmetic). If this data turns out not to be stable, I will be less aggressive :slight_smile:
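
For what it’s worth, here is how those figures fall out (decimal units, 1 TB = 10¹² bytes, and assuming ingress alone saturates the link): steady state = link rate × 30 days.

```python
# "Point of balance": with a 30-day TTL and a saturated link, the node
# deletes as much per day as it ingests once it holds 30 days' worth.
def steady_state_tb(link_gbit: float, ttl_days: int = 30) -> float:
    bytes_per_day = link_gbit * 1e9 / 8 * 86_400   # bits/s -> bytes/day
    return bytes_per_day * ttl_days / 1e12          # decimal TB

for gbit in (0.2, 1.0, 2.0):
    print(f"{gbit:3.1f} Gbit -> {steady_state_tb(gbit):5.1f} TB")
# 0.2 Gbit ->  64.8 TB
# 1.0 Gbit -> 324.0 TB
# 2.0 Gbit -> 648.0 TB
```

It is linear in the link rate, so 200 Mbps (one tenth of 2 Gbit) works out to 64.8 TB.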

PS: You’re generating a lot of hype. I hope it doesn’t lead to anyone getting hurt.

1 Like

My only concern is the power cost. For now I don’t mind a few extra watts (it’s a nice hobby and a nice distraction), but when I spin up 24 drives (or even more, if necessary), it will take quite a bit more than a few watts.
If I can offset that with a nice monthly income, then I’m all in.
But I think it’s going to take a while before those drives are necessary, let alone completely full.

3 Likes

:heart_eyes: :heart_eyes: :heart_eyes: :heart_eyes: :heart_eyes:
Respect!

When AirVPN caps the max speed for more than 1 TB of transfer per month, we will see some improvements.

1 Like

I think for me it depends on how much egress there is. I can certainly add more drives to the server, and I can replace smaller drives with bigger ones (or even replace the motherboard with a newer one that has more RAM slots and support for faster CPUs).
However, it would depend on how much I would get paid for it. The rates are fixed, so it comes down to whether there is some egress and, if there is none, whether the uploaded data stays on my node for a long time or not.

The best-case scenario, of course, would be data with lots of egress; the worst case is what the current tests simulate (lots of traffic, lots of load, but the data gets deleted rather quickly).

OTOH, I still have some space on my server and could expand the virtual disk more if it runs out of space.

If I’m correct, there are currently ±22,000 storage nodes, and a little more than 10,000 of them are currently full. What I don’t know is what percentage of IPv4 subnets this represents.

What did you calculate for, say, 200 Mbps? Is it a linear relationship?

Asking for a friend :wink:

2 Gbit / 10 → 648 TB / 10 = 64.8 TB :slight_smile: (yes, it is linear)
Not one byte more… not one less…

1 Like

Well, it seems like I’m not too far from there already…

As a Storj customer, I can say that my colleagues and I (in the IT department) are very happy using Storj to store our Veeam backup archives. :handshake:

9 Likes

That is only true if all of your data has a TTL; I think most of our data is without TTL.

Au contraire.
The rationale for the tests, and the huge size of the piece_expiration databases, suggests a huge amount of TTL data.
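
If you want to check that on your own node, here is a minimal sketch. Assumptions: your storagenode version keeps piece expirations in an SQLite piece_expiration.db with a piece_expirations table and a piece_expiration timestamp column (verify with `.schema` first, since layouts change between versions), and you query a copy, never the live file.

```python
# Count how many pieces are scheduled to expire per day, as a rough gauge
# of how much of the node's data is TTL data.
import sqlite3

# hypothetical path: point this at a COPY of your node's database
con = sqlite3.connect("piece_expiration.db")
rows = con.execute(
    "SELECT date(piece_expiration) AS day, count(*) "
    "FROM piece_expirations GROUP BY day ORDER BY day"
).fetchall()
for day, n in rows:
    print(f"{day}: {n:,} pieces expiring")
con.close()
```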

I had 200 TB of data before the TTL test data started, and now I am at 300 TB.

I try to get friends interested. But when I explain the vetting period… and the holdback amounts… and the holdback duration… and that they may not even know the system is working, because it may take months to hit the minimum payout if ETH fees are high… and that their preferred exchange may not support STORJ… and that normally 10 TB would take 2+ years to fill… …they decide they don’t want to even try it.

If we have 1,000 or so interested SNOs… I’d assume most expansion would come from those core believers. If Storj can’t get some coins into the hands of new users within a guaranteed 60 days at most… it’s going to be hard to attract fresh installs.

But I’m fine with that. More for me :wink:

Yes, but you already had a mammoth amount of data, so the TTL data from the last few weeks would have been diluted in your pool.
Most of us mere mortals likely have a higher proportion of recently uploaded TTL test data :slight_smile:

But this will hopefully change dramatically, won’t it? :wink:

2 Likes