Multiple nodes behind the same IP

I have 6 nodes behind my single, residential IP address. All of the nodes have their own dedicated HDDs.
I was wondering whether it matters from a traffic (ingress/egress) point of view if only one node has available storage space for ingress or all of them do?
All of them are fully vetted.
I’m curious to understand how the satellites guide the traffic in such a case. :slight_smile:

When a customer requests nodes to upload their files, the satellite selects one non-full node per /24 subnet from all available subnets, one for each of the 80 (110) pieces of the segment; in 5% of cases it also adds an unvetted node (likewise unique by subnet) to the list.
So, if all your nodes but one in the same /24 subnet of public IPs are full, that one node will be selected sooner or later.
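A rough sketch of that selection logic in Python (my own simplification for illustration; the names and structure are invented, not actual satellite code):

```python
import ipaddress
import random

def select_nodes(nodes, pieces_needed=80):
    """Simplified sketch of per-/24 node selection (hypothetical).
    `nodes` is a list of dicts: {"id": str, "ip": str, "full": bool}."""
    # Group candidate (non-full) nodes by their /24 subnet.
    by_subnet = {}
    for node in nodes:
        if node["full"]:
            continue
        subnet = ipaddress.ip_network(node["ip"] + "/24", strict=False)
        by_subnet.setdefault(str(subnet), []).append(node)

    # One "lottery ticket" per subnet: pick at most one node from each,
    # up to the number of pieces that need a home.
    subnets = list(by_subnet)
    random.shuffle(subnets)
    return [random.choice(by_subnet[s]) for s in subnets[:pieces_needed]]
```

The key point the sketch shows: a full node never receives a piece, and two nodes sharing a /24 can never both be selected for the same segment.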

So, because the selection is made by /24 subnet, it makes no difference whether I have 1 or 6 nodes with available space on the same subnet?
From another perspective: is it like a lottery where every node has a ticket, or does just the /24 subnet have a ticket (and then there is another lottery among the nodes within that /24 subnet)?

I’m alone in the /24 subnet, so please consider my question with that in mind.

All nodes behind a /24 are treated as a single node.
So you are not getting any more traffic by having more nodes behind a single /24.

But the win rate can be slightly better overall, as several nodes are less loaded by heavy ingress traffic and can write and respond faster than a single node in the /24.
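That win-rate effect can be illustrated with a toy race simulation (entirely my own sketch; the latency model and numbers are assumptions, not Storj internals): for each piece, every candidate node "races", and the one with the lowest effective response time wins.

```python
import random

def run_race(latencies_ms, trials=10_000, jitter_ms=20.0, seed=42):
    """Toy model: each trial, every node's response time is its base
    latency plus random network jitter; the fastest node wins the piece."""
    rng = random.Random(seed)
    wins = {name: 0 for name in latencies_ms}
    for _ in range(trials):
        times = {name: base + rng.uniform(0, jitter_ms)
                 for name, base in latencies_ms.items()}
        wins[min(times, key=times.get)] += 1
    return wins

# A busy node (high base latency) loses most races to an idle one.
print(run_race({"busy_node": 30.0, "idle_node": 15.0}))
```

With the load spread across several disks, each node's base latency stays lower, so the /24 as a whole loses fewer races against other subnets.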


Practically, if out of 6 nodes under the same /24 subnet only one has available space and the other 5 are full, a new piece will be stored only on the node with available space. If, on the other hand, a user fetches data from your nodes, it depends on where that data is stored, and the node that contains it is the one queried.

You must also consider that nodes are never full forever; there are deletions by users, so all 6 nodes could still receive data again, and the scenario above no longer applies. If space is available on several nodes in the same /24 subnet, the upload is decided by a "lottery" (as you call it): a piece goes to one node, but the node is chosen by speed, latency, and availability in that given millisecond, plus geographical proximity to the source data. This last condition keeps latencies low and routes short, which in turn guarantees speed and immediate availability to the end users.

I hope I was clear and concise; the whole system is much more complex than this brief description. I simplified a lot.


Thx, this is clear now.

Let me open up my question a bit more with an example. This is my own case, so these are the exact details.

I have 2 nodes running on an RPi, one with a 2TB drive and one with a 4TB drive. Both drives are full. I'm not a Linux guru, so I'm happy with them and don't want to touch them…

I have 4 nodes running on an HP Microserver N40L with Windows. This used to be my home server, but I upgraded to a better one, so gradually I started using it for Storj.
The N40L has 4 drive bays plus an external HDD, with these nodes:
node3 - 4TB, 1+ year old, full
node4 - 4TB, 6 months old, 1.5TB used
node5 - 4TB, 6 months old, 1.2TB used
node6 - 500GB on an external USB drive, 3 months old, full
bay 4 now holds an empty 2TB drive

When I started node6, my plan was to start it small and keep it small until it is at least 7 months old, so I could drive the ingress to the other nodes, which are more profitable for me since they are older and their withheld % is lower.

I got a 10TB drive recently, so I was thinking of moving node3 to this drive so it can grow further. At the same time, I moved node6 to the 4TB drive I freed up from node3. The 2TB drive goes to the shelf, and all nodes will be on internal drives.
The copy process is now finished, so the current situation is this:
node3 - 10TB, 1+ year old, 4TB used
node4 - 4TB, 6 months old, 1.5TB used
node5 - 4TB, 6 months old, 1.2TB used
node6 - 4TB, 3 months old, 500GB used

As I see it, I have around 65-70GB/day total ingress now. If I limit node4, node5, and node6 to their current sizes, all ingress will go to node3. If I don't limit them, each node gets 16-18GB/day.
It seems more beneficial to me to limit the younger nodes, and as soon as they no longer have any withheld %, I can open them up to their full capacity.
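The per-node figure above is consistent with the total ingress simply being split evenly among the non-full nodes in the /24; a quick sanity check (using the rough numbers from my own observation):

```python
# Rough numbers observed on my /24; the even split is an assumption.
daily_ingress_gb = (65 + 70) / 2          # ~65-70 GB/day total for the subnet
nodes_with_space = 4                      # node3..node6, none limited
per_node = daily_ingress_gb / nodes_with_space
print(f"{per_node:.1f} GB/day per node")  # ~16.9, within the observed 16-18
```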

As a side note: for all these nodes I used to receive around 25-30 USD in monthly payments, and I don't mind the electricity since the whole house runs on solar.

Hi, these "tricks", if you'll allow me the joke, can indeed be used to steer traffic toward some nodes rather than others. But since you're behind a single IP, I don't think it's that beneficial. As @Vadim pointed out, it's better to have multiple nodes handling egress and ingress at the same time. The only thing you could do to improve the situation and earn more revenue is to have your internet provider assign you additional IPs on different /24 subnets. That is of course an "illicit" practice according to Storj's rulebook, but a common one among SNOs here.

I don't know about that. A long time ago I asked @jtolio about this in one of the first AMA or Q&A sessions here, and he answered that we can have as many external IPs as we want, and that Storj will handle it if it ever becomes a problem. I think that was more than a year or two ago.