MAX allocated space per IP

Hello,

I can't find any information: is there a limit on the maximum allocated space per IP address?

I only found this very confusing line. It says 8 TB (what does 8 TB mean?), later it says a maximum of 24 TB per node, and then no maximum space per node :slight_smile: So is it 8 TB max per node? Or 24 TB max? Or is there no maximum limit per node?

From storj.io website:
“8 TB and a maximum of 24 TB of available space per node Minimum of 550 GB with no maximum of available space per node”

It is confusing when talking about nodes, but about IPs there is no information at all.

T.Y.

Hiya @node1

The larger your node grows, the more data it will lose per day, because it is much more likely to contain old data that customers delete when they no longer need it.

At around 20 TB, most people find that the amount of daily ingress matches the amount of daily deletions, and as such the node will not grow anymore.

There is no technical limit on the node size, but there are many practical ones.
I personally limit all my nodes to 10 TB (because it's a nice round number) and create additional nodes when I reach that number.
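
To make that concrete, here is a toy model of the equilibrium. The ingress and deletion rates below are made-up assumptions for illustration, not real network numbers:

```python
# Toy model: daily deletions are proportional to stored data, so the node
# converges to ingress / deletion_fraction. All rates here are assumptions.
INGRESS_TB_PER_DAY = 0.10   # assumed constant daily ingress
DELETION_FRACTION = 0.005   # assumed share of stored data deleted per day

size_tb = 0.0
for day in range(1, 3001):
    size_tb += INGRESS_TB_PER_DAY - DELETION_FRACTION * size_tb
    if day % 1000 == 0:
        print(f"day {day}: {size_tb:.2f} TB")

print(f"equilibrium: {INGRESS_TB_PER_DAY / DELETION_FRACTION:.0f} TB")
```

With these invented rates the node levels off around 20 TB no matter how much disk you allocate, which is why the limit is practical rather than technical.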

Thank you. But my question is about TB per IP.

How much TB can be stored under the same IP address? Is there any limit for that or no?

P.S. Where did you find the information that old data is kept on all nodes? If a customer deletes some data, I believe it is deleted from all nodes as well. There is no reason to keep deleted data forever.

As far as I am aware there is no limit per IP. The system is set up to distribute data across many separate subnets. This helps to ensure redundancy even if one location / IP range has a lot of data.

There is a practical limit where a node receives data at the same rate it is deleted.
Also, at some point the payout is capped by the upstream bandwidth of the connection.

Are you about to start a new node?

Don't forget that it's not recommended to buy anything for this; also, most earnings estimators are outdated, and the future behaviour of the network is not predictable.

Hi, if you mean how much data you can allocate to a single IP, hypothetically it is unlimited. But as other users have already suggested, the larger the node, the more data gets deleted.

Since Storj's rules give each /24 subnet the same amount of data, if there are 3 or more nodes under the same subnet, the data flowing into that subnet is the same as if there were only one node on a single IP. Per Storj's directives, all nodes under the same subnet are treated as a single node. So the limit isn't really there, but practically it is: if you had 2 or more nodes under the same IP in a subnet that already contains other nodes, the incoming data would be so low that filling the disks would not be advantageous for either you or the network, given Storj's data decentralization policy.

I hope I have been clear, but consider that there are hundreds of posts that talk about this topic.
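
For intuition, here is a tiny sketch of what grouping by /24 looks like (the grouping rule is as described above; the satellite's actual implementation may differ):

```python
import ipaddress

def subnet_24(ip: str) -> str:
    # Collapse an IPv4 address to its /24 network, the granularity
    # used for node selection as described above.
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

# Two nodes behind the same /24 count as one location for ingress:
print(subnet_24("203.0.113.10"))   # 203.0.113.0/24
print(subnet_24("203.0.113.200"))  # 203.0.113.0/24 (same group)
print(subnet_24("198.51.100.7"))   # 198.51.100.0/24 (different group)
```

Every address in the same /24 maps to one key, so all nodes behind it share the ingress of a single location.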

I'm not sure customer data is removed at the same speed that new data is added under normal circumstances. We have had a lot of test data removed from the network in recent months, so many nodes have seen significant declines in their overall stored data, and the data coming in doesn't always balance it out. This was a one-time purge of that test data, and it skews the amount of data we see getting removed.

Not all data is getting erased by customers; much of it is relatively permanent. As Storj continues to onboard customers, we should see data become more plentiful. However, that can be less apparent to node operators if new nodes continue to come online and thin out the data available to each existing node.

My point is just that much of this is very dynamic, and it is difficult to say with any assurance that reaching some high-water mark will stop you from gaining more new data than is deleted. In truth, it really all depends on a number of behaviors that are difficult to measure and impossible to predict.

Effectively, the question is exactly the same as “how much can you store per node,” because of the IP filtering (technically /24 subnet filtering, but that's the same thing in most circumstances). If one node “caps out” at 20 TB, then putting two nodes on that same IP will effectively divide that 20 TB between the two nodes.

To put it differently:

Deletion rate is, on average, proportional to how much data you have stored. Ingress rate is out of your control. Splitting across multiple nodes means your total deletion rate on a single IP is the same as for a single node (e.g. if a 20 TB node would lose 100 GB/day, then a 10 TB + 6 TB + 4 TB set of nodes would lose 50 GB + 30 GB + 20 GB per day), and the total ingress rate will be the same, because the filter means multiple nodes on the same IP receive the same ingress as one node with a unique IP.
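
A quick sketch of that arithmetic, using an assumed 0.5%/day deletion fraction chosen only to match the 100 GB/day example:

```python
# Deletions proportional to stored data: splitting one node across several
# disks on one IP doesn't change the totals. The deletion fraction is an
# assumption picked to reproduce the 100 GB/day example above.
DELETION_FRACTION = 0.005  # 0.5% of stored data deleted per day (assumed)

single = [20_000]               # one 20 TB node, sizes in GB
split = [10_000, 6_000, 4_000]  # same capacity split across three nodes

for nodes in (single, split):
    daily_loss = [size * DELETION_FRACTION for size in nodes]
    print(nodes, "->", daily_loss, "GB/day, total", sum(daily_loss))
# Both cases total 100.0 GB/day: 100 vs 50 + 30 + 20.
```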

The last known equilibrium point between ingress and deletions for the /24 subnet of public IPs:

However, we have had a lot of changes since then: test satellites have been shut down, prices have been cut, and new customers have been onboarded. So the numbers have likely changed. You may check the Realistic earnings estimator to get an idea.

What you are referring to is a recommendation for a new node; it's not a technical limit.

Thank you for your replies.
So it looks like people who have a lot of disks in JBOD, each 16 TB, on one IP are somewhat wasting electricity? Since all their nodes together fill up at the same speed as one node?

You mean AHCI mode?
JBOD will just make all nodes fail if one drive fails.
The normal way:
You fill one node and leave it as it is…

Or:
Get another drive somehow and start a second node.
Ingress then goes to the second node (minus the deletions on the first node, which is already full).

Note: filling 2 or more nodes on the same IP is no fun. It wastes electricity.

JBOD just means individual disks, no RAID. “AHCI” is a standard for SATA controllers, which would typically operate in JBOD mode.

No. It means just a bunch of disks. Literally.
No fault tolerance, and they are presented to the OS as one drive.

No… JBOD is a software or RAID controller function.
A normal SATA controller does not do JBOD.

JBOD can mean independent drives, or (and I really wish manufacturers would stop using the term “JBOD” for this when other terms are more descriptive) span/concatenation volumes. The other poster is referring to independent drives. You’re right that span/concat is pointless for Storj, but I don’t think anyone was suggesting that in the first place.

Depending on how they set up this JBOD, it could also be a waste of time.
As mentioned here, if it's a spanned volume, then a single disk failure means the whole node is lost, including the time spent filling it.
If they are separate disks, then starting nodes one after another makes sense: start the next node when the previous one is almost full, and the traffic will go only to the new node once the previous one is full. But if you run them in parallel while they are new, all traffic will be split between them and you will end up with a bunch of barely used disks, wasting electricity.
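
To put a rough number on the electricity point, here is a toy calculation; the ingress rate and disk capacities are illustrative assumptions:

```python
# Toy comparison of powered disk-days while filling three 10 TB disks,
# assuming a fixed 0.1 TB/day of ingress for the whole /24 subnet and
# ignoring deletions. All numbers are illustrative assumptions.
INGRESS_TB_PER_DAY = 0.1
CAPACITY_TB = 10.0
DISKS = 3

total_days = int(DISKS * CAPACITY_TB / INGRESS_TB_PER_DAY)  # 300 days either way

# Parallel: all disks spin from day one while splitting the same ingress.
parallel_disk_days = DISKS * total_days

# Sequential: each disk is only powered on once the previous one is full.
fill_days = int(CAPACITY_TB / INGRESS_TB_PER_DAY)  # 100 days per disk
sequential_disk_days = sum(total_days - i * fill_days for i in range(DISKS))

print("total fill time either way:", total_days, "days")
print("parallel disk-days powered:  ", parallel_disk_days)    # 900
print("sequential disk-days powered:", sequential_disk_days)  # 600
```

Both strategies take the same 300 days to fill everything, but the parallel setup keeps every disk spinning the whole time for no extra ingress.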
