Apologies in advance if the title is confusing; I’m not sure that’s the best way to phrase my question…
I have some servers running at a datacenter (Leaseweb New Jersey, if it helps). They all have dedicated IPv4s in the same IP block. Since the Storj node-selection algorithm favors geographical diversity, I assume any Storj nodes I add would be competing with each other to some degree.
In your opinion, how many Storj nodes can I run before the growth in overall utilization stops being linear? (Say a single node fills its hard drive in 6 months; I’d want to stop adding nodes once each node takes 9 months to fill its drive.) Are there any experiments I can do to find out?
The servers don’t have spare storage space at the moment, but they do have spare hard drive bays. I’d need to buy some second-hand drives first, so I can’t just try it and see what happens.
One tool you can check out to estimate how long it will take to fill up your space is this community-made estimator:
All nodes behind the same /24 subnet are treated as a single node for ingress. That means if you’ve got one node behind IP a.b.c.d getting, say, 4 GB/day, adding another node at a.b.c.(d+1) would split that traffic between the two (i.e. 2 GB/day each). Overall, you wouldn’t get any more data.
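To make the splitting concrete, here’s a minimal Python sketch of that rule, using the 4 GB/day figure from above. The drive size is just an illustrative number picked so that one node fills in ~6 months, matching the scenario in the question; real ingress varies over time, so treat this as a back-of-the-envelope model, not a forecast.

```python
# Sketch of the /24 ingress-splitting rule: the satellite treats all
# nodes in one /24 as a single location, so daily ingress for the
# subnet is divided evenly among them.

def per_node_ingress_gb(subnet_ingress_gb_per_day: float, nodes_in_subnet: int) -> float:
    """Daily ingress each node receives when sharing a /24."""
    return subnet_ingress_gb_per_day / nodes_in_subnet

def months_to_fill(drive_gb: float, subnet_ingress_gb_per_day: float, nodes_in_subnet: int) -> float:
    """Rough time for one node's drive to fill, ignoring deletes and churn."""
    daily = per_node_ingress_gb(subnet_ingress_gb_per_day, nodes_in_subnet)
    return drive_gb / daily / 30  # ~30 days per month

# 720 GB at 4 GB/day fills in ~6 months with 1 node; with 2 nodes in
# the same /24 each node takes ~12 months -- so the 9-month threshold
# from the question is already exceeded at the second node.
print(months_to_fill(drive_gb=720, subnet_ingress_gb_per_day=4, nodes_in_subnet=1))  # ~6.0
print(months_to_fill(drive_gb=720, subnet_ingress_gb_per_day=4, nodes_in_subnet=2))  # ~12.0
```

The takeaway is that the slowdown isn’t gradual: per-node fill time scales directly with the number of nodes sharing the /24.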
On the other hand, if you’re able to run each node on a different /24 subnet, then you’re very lucky: each one would get the full ingress traffic.
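If you want to check which of your dedicated IPs would actually be grouped together, Python’s `ipaddress` module makes the /24 comparison trivial. The addresses below are placeholders; substitute your servers’ real IPv4s.

```python
import ipaddress
from collections import defaultdict

# Placeholder IPs -- replace with your servers' dedicated IPv4 addresses.
server_ips = ["203.0.113.10", "203.0.113.77", "198.51.100.5"]

groups = defaultdict(list)
for ip in server_ips:
    # Nodes are grouped by their enclosing /24 network.
    subnet = ipaddress.ip_network(f"{ip}/24", strict=False)
    groups[subnet].append(ip)

for subnet, ips in groups.items():
    print(f"{subnet}: {len(ips)} node(s) sharing ingress -> {ips}")
```

In this example, the first two IPs land in 203.0.113.0/24 and would split ingress between them, while the third sits alone in its own /24 and gets full traffic.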
I reckon adding more nodes on different /24 subnets wouldn’t really impact your other nodes traffic-wise, unless you added, like, hundreds of nodes…