Farming preparation

Since my test nodes are running well, I'm thinking of preparing 1 PB of hard disks to join. Could I get any suggestions from you guys?

Hold your horses, friend. I doubt you could get past 20 TB on a single IP address, ever.

7 Likes

You mean it's impossible to fill a node bigger than 20 TB?

Yes, currently it is. Storj wants to distribute data among node operators, so the data per node is spread thin (specifically, look up the /24 subnet rule).
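
Roughly, the satellite treats all nodes that share a /24 prefix as one location when choosing upload targets. Here's a minimal Python sketch of that idea (my own illustration, not Storj's actual node-selection code):

```python
import ipaddress
import random

def subnet_24(ip: str) -> str:
    """Return the /24 network an IPv4 address belongs to."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

def pick_nodes(node_ips: list[str], count: int) -> list[str]:
    """Pick upload targets, at most one node per /24 subnet."""
    by_subnet: dict[str, list[str]] = {}
    for ip in node_ips:
        by_subnet.setdefault(subnet_24(ip), []).append(ip)
    # One candidate per subnet, then sample from those candidates.
    candidates = [random.choice(nodes) for nodes in by_subnet.values()]
    return random.sample(candidates, min(count, len(candidates)))

# Three nodes, but two share 203.0.113.0/24 -> at most one of them is picked.
print(pick_nodes(["203.0.113.10", "203.0.113.20", "198.51.100.5"], 2))
```

So adding more nodes behind the same /24 only splits the same traffic between them.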

Or you could look at this: Understanding the Commercial Storage Node Operator Program - Storj Docs, though the requirements bar is too high for a homelab.

How do you want to fill 1 PB:

(screenshot: Storj network statistics dashboard, 2024-10-11)

https://storjstats.info/d/storj/storj-network-statistics?orgId=1

This is not mining. It depends on real customer data and Storj's ability to convince customers to actually use their distributed cloud storage and pay for it.

3 Likes

Yes, I noticed this, but I believe the data storage market is growing fast.

Yes, I guess everybody hopes for that, and of course you are free to set up as much storage space as you want. It just probably won't be filled soon, and if it gets filled at all, then later rather than sooner.

The fill rates we saw recently with all the test data are not the normal rates to expect, even more so when testing is done on the public network and the customer then uploads their real petabytes of data onto the Select network, as happened recently.

More profit will also bring more people, and the data will spread even thinner. Don't bet your farm on Storj yet; start with 10 TB first and expand later…

2 Likes

10 TB is not profitable at all; maybe I'll reduce the scale to 300 TB in the first phase.

How can 300 TB be profitable if it isn't filled? But it's your choice.
I don't know your setup, but keep in mind that more free space generally does not mean you will receive more uploaded data.

Instead of setting up a large amount of free space upfront, the general suggestion is to start with one node on one disk and scale as the node fills up. One by one: when one disk is full, start the next node, and so on.
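
If you want to automate the "is it time for the next disk" decision, something as simple as this works (the mount point and threshold are made-up examples):

```python
import shutil

def disk_nearly_full(path: str, threshold: float = 0.9) -> bool:
    """True when the disk holding `path` is past the fill threshold."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= threshold

# Hypothetical mount point for the current node's data disk.
if disk_nearly_full("/mnt/storagenode1"):
    print("Disk ~90% full: time to prepare the next disk and start node 2.")
```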

5 Likes

Just start one node for every different /24 subnet IP you have or can access.
And start with the biggest drive you can get at a good price. The price per TB has always been in favour of Seagate Exos drives; the 24 TB ones are the latest. Check the prices for the 20, 22, and 24 TB Exos drives and go with one of them.
Don't start more than 2 drives (just to share the load) on the same /24 subnet, because it will take years to fill them, and they can die before that time comes. The expected life of a drive is 5-10 years. It's not a rule set in stone, but you should calculate your ROI based on a 5-year drive lifetime.
My newest 2 nodes, on the same machine and 6 and 7 months old, store 8 TB in total.
My oldest 2-node machine stores 15 TB in total. It's been running since January 2021. :smiley:
So filling even 100 TB on the same IP will never happen before the drives die.
And if you think you can move 20 TB of data off a dying drive, think again. It takes about 1-2 days per TB with the node stopped, so roughly a month of downtime for 20 TB.
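
To put rough numbers on the ROI point, here's a back-of-the-envelope sketch in Python. The fill rate comes from my nodes above; the drive price and payout rate are placeholder assumptions, not official Storj figures, and egress income is ignored:

```python
# Back-of-the-envelope ROI sketch; all inputs are assumptions/placeholders.
DRIVE_TB = 24              # Seagate Exos class drive
DRIVE_COST = 400.0         # assumed price in USD; check the current market
FILL_TB_PER_MONTH = 1.2    # ~8 TB on one /24 in ~6.5 months (see above)
PAYOUT_PER_TB_MONTH = 1.5  # hypothetical $/TB-month, not an official rate

months_to_fill = DRIVE_TB / FILL_TB_PER_MONTH            # ~20 months
# While filling, the average amount stored is roughly half the drive.
earnings_filling = (DRIVE_TB / 2) * PAYOUT_PER_TB_MONTH * months_to_fill
months_full = 5 * 12 - months_to_fill                    # rest of a 5-year lifetime
earnings_full = DRIVE_TB * PAYOUT_PER_TB_MONTH * months_full

print(f"Months to fill {DRIVE_TB} TB: {months_to_fill:.0f}")
print(f"5-year storage earnings: ${earnings_filling + earnings_full:.0f} "
      f"vs drive cost ${DRIVE_COST:.0f}")
```

Swap in your own numbers; the point is that the fill time, not the drive size, dominates the result.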

Hello
I recently started my 3rd node in Docker on a Synology. I only have one internet subscription, so they will never fill up on the same /24 network? Does that require a new ISP subscription or a VPN?

I don't know; I run all my test nodes via VPS.

You cannot use a VPS to circumvent the /24 restriction. Read the ToS and search the forum for details. What you both are discussing has been covered already.

You seem to think running nodes is supposed to be “profitable”. The goal is different. And it's in no way farming; it actually has utility and purpose.

I would also recommend reading the whitepaper to understand the project before deciding to do anything.

3 Likes

I do, so each of my nodes can have a different IP.

I’ll repeat again:

What you do is against terms of service. You need to stop.

Just put 2 + 2 together, please. Why do you think the restriction is in place? For you to have fun circumventing it? Just think. Please.

2 Likes

Using a VPS or VPN is no problem; what the operator needs to do is provide reliable service.
Since I have IDC experience, I know how to deploy, set up, and maintain a scaled server network.
Why do I believe Storj has a good future? Because I used to maintain PBs of data on both local ZFS pools and S3 buckets.
Data is the future, bro. I know what I'm doing.

VPNs aside, current real node growth is ~50 GB/day per /24, so you're going to need a lot of subnets to fill even 300 TB in a reasonable time frame. Good luck with this.
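
A quick sanity check on that figure (the 50 GB/day rate is my rough estimate, not an official number):

```python
# How long does 300 TB take at ~50 GB/day per /24 subnet?
TARGET_GB = 300_000
FILL_GB_PER_DAY = 50  # rough estimate of current real growth per /24

for subnets in (1, 5, 10, 20):
    years = TARGET_GB / (FILL_GB_PER_DAY * subnets) / 365
    print(f"{subnets:>2} /24 subnet(s): ~{years:.1f} years to fill 300 TB")
```

That's ~16 years on a single subnet, and still over a year and a half even with 10 of them.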

If you've got a big deployment, you may be interested in Storj Select, provided you have the proper certifications.

1 Like

Yes, but my bottleneck is connection speed; I'm living in an isolated area and still testing how to reduce the latency.

In that case, please listen to the others: start small or do not start at all.
It can be profitable, but not from scaling up a local setup. Instead, you need several locations, as independent from each other as possible, to run nodes. Or join Storj Select, if you are eligible.

1 Like