IP Filtering, Deep Dive

What if the ISP doesn’t use /24 subnets but a different subnetting scheme?

This is what Storj said would not happen; multiple nodes are supposed to be aggregated and receive the same amount of data as a single node.
The stated reason was that this prevents large datacenters from centralizing the data and avoids the hundreds of nodes everyone ran in v2 (where running more nodes did result in more traffic).

If this is not so, I might as well start a few more nodes on the free space in my array.

There is no way to know that from outside the ISP, but it’s a reasonable assumption.

@Pentium100 I believe that happened due to the $1.50 subsidies.

My reason was that if an HDD or DB fails, I lose only one HDD/node. And I don’t need to mess with RAID.


Well, if the amount of data and traffic was the same, I’d rather have one server instead of lots of regular PCs with a single drive each, because more machines would increase the amount of work I have to do to take care of them.

However, if multiple nodes get more traffic, then I might as well spin up more VMs in my server.

I have 3 PCs with Windows, running 3-6 nodes per PC using the GUI version, no VMs.
A node takes only 20-50 MB of RAM, while a VM takes 4-8 GB of RAM each?

A VM lets me use the physical server for something else as well. Still, let’s wait for an official answer about this.

The Storj team is being transparent here today and troubleshooting only real problems; it looks like they will make an update release soon?

The thousands of nearly empty nodes did. But I ran 4 nodes on V2 because my hardware could handle it and it got me 4x as much traffic. If the system doesn’t prevent you from getting an advantage that way, people will use it. And I started my V2 nodes when the end of the $1.50 incentive was already announced, so that was not the reason for me.

No need, just more containers would do.

Yeah, nobody is saying you’re doing anything wrong. You’re following the recommendations. You’re just seeing results we wouldn’t expect.

As I wrote in another thread, at some point I tried to maximize profit and used proxies for my nodes for about a month, so it is possible that I had more data at that time. But I ran into a big loss, as every proxy costs money, so I ended the experiment.

As @Vadim said

And this is the reason. He gamed the system by proxying the different IPs to his nodes.
But it’s expensive, as he agreed.

@Pentium100 you can try to run more nodes, but I don’t know what you would compare against to make sure that your multiple nodes behind the same /24 subnet of public IPs got more traffic than a single node would have over the same period.


@Alexey
Theoretically, each file upload needs 110 nodes, or more if the file is larger than 64 MB. If I have more nodes, do I have a bigger chance that one of my nodes will be among that count? Or is that handled somehow as well?

I can confirm that @Alexey must be correct that this was just because of the proxies used. I had a little dig through the code to find the actual code that’s responsible for selecting nodes.

You can find it here:

Bottom line: the code first picks one random node per /24 for all nodes with a known IP, and then picks as many nodes as needed for the transfer from that list. The result is no doubt that every /24 subnet will be picked at the same rate. I guess we got all excited over nothing. :wink:
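For illustration, here is a minimal Go sketch of the behaviour described above, assuming the selection works roughly as summarized: group the candidates by their /24, keep one random node per group, then draw as many nodes as the transfer needs from that reduced list. The `Node` type and the `subnetKey`/`selectNodes` helpers are hypothetical names for this sketch, not the satellite’s actual code.

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
)

type Node struct {
	ID string
	IP string
}

// subnetKey reduces an IPv4 address to its /24 network, e.g. "192.0.2.0/24".
func subnetKey(ip string) string {
	parsed := net.ParseIP(ip).To4()
	if parsed == nil {
		return ip // non-IPv4 addresses are kept as-is in this sketch
	}
	return parsed.Mask(net.CIDRMask(24, 32)).String() + "/24"
}

// selectNodes picks `count` nodes, at most one per /24 subnet.
func selectNodes(candidates []Node, count int) []Node {
	bySubnet := make(map[string][]Node)
	for _, n := range candidates {
		key := subnetKey(n.IP)
		bySubnet[key] = append(bySubnet[key], n)
	}

	// One random representative per /24.
	onePerSubnet := make([]Node, 0, len(bySubnet))
	for _, group := range bySubnet {
		onePerSubnet = append(onePerSubnet, group[rand.Intn(len(group))])
	}

	// Shuffle and take as many as the transfer needs.
	rand.Shuffle(len(onePerSubnet), func(i, j int) {
		onePerSubnet[i], onePerSubnet[j] = onePerSubnet[j], onePerSubnet[i]
	})
	if count > len(onePerSubnet) {
		count = len(onePerSubnet)
	}
	return onePerSubnet[:count]
}

func main() {
	nodes := []Node{
		{"A", "192.0.2.10"}, {"B", "192.0.2.20"}, // same /24: only one can be picked
		{"C", "198.51.100.5"},
	}
	fmt.Println(selectNodes(nodes, 2))
}
```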


Doesn’t matter. If the ISP has a /23 subnet, then it looks to Storj like two /24 subnets. Storj just looks at the first 3 octets of the IP.
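Continuing the sketch above, a tiny example of that point: two addresses an ISP might hand out from a single /23 block still reduce to different /24 keys, so the filter treats them as separate subnets (the addresses are illustrative).

```go
// Reuses the hypothetical subnetKey helper from the sketch above.
// 192.0.2.130 and 192.0.3.130 sit in the same /23 (192.0.2.0/23),
// but they map to different /24 keys.
fmt.Println(subnetKey("192.0.2.130")) // "192.0.2.0/24"
fmt.Println(subnetKey("192.0.3.130")) // "192.0.3.0/24"
```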

Great, now we know that it works as intended and the uneven traffic was due to the proxies.

I was thinking - is there a concurrent connection limit for a node during download? We already established that multiple nodes in one /24 are considered as one node when selecting where to store a piece.

How are nodes selected for download? 80 nodes store the pieces, but the uplink does not connect to all of them.

The Uplink client requests the pieces held by the 35 nodes, but stops after receiving 29 pieces from the fastest 29 nodes.
Reputation Matters When it Comes to Storage Nodes
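To make the quoted behaviour concrete, here is a minimal Go sketch of such a long-tail race, assuming the client simply requests pieces from 35 nodes in parallel and cancels whatever is still in flight once 29 pieces have arrived. The `fetchPiece` function and the simulated delays are made up for the illustration; this is not the real uplink code.

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// fetchPiece simulates downloading one piece from one node with a random delay.
func fetchPiece(ctx context.Context, node int, results chan<- int) {
	delay := time.Duration(rand.Intn(200)) * time.Millisecond
	select {
	case <-time.After(delay):
		results <- node
	case <-ctx.Done():
		// The download already has enough pieces; this transfer is abandoned.
	}
}

func main() {
	const requested, needed = 35, 29

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan int, requested)
	for node := 0; node < requested; node++ {
		go fetchPiece(ctx, node, results)
	}

	// Take the first `needed` pieces that arrive, then cancel the slower transfers.
	pieces := make([]int, 0, needed)
	for len(pieces) < needed {
		pieces = append(pieces, <-results)
	}
	cancel()

	fmt.Printf("reconstructing the segment from %d pieces\n", len(pieces))
}
```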

How are the 35 nodes selected? Does some kind of load balancing apply here (node A has served 50 requests in the last minute, so node B with the same piece will be on the list instead), or are they selected completely randomly out of the list of all nodes holding the piece?

Looking for info: I have nodes behind the same public IP (73.138.168.xx) and they are full of data. One of those nodes is located approximately 100 feet away in a building across from mine, connected via Ethernet to the gateway; the other full node is connected to my home computer via Wi-Fi using a Wi-Fi repeater, on a Mac running Docker, both with the same public IP address. So I have spun up a third node on my own ISP with a static IP (173.9.183.xxx), different from the other nodes, also using Docker on the same Mac as the other Docker node. 1) Is the third node still considered a single node with the others? If not, why? And if it is considered a separate node, why can’t I use -p 28967:28967 on this node with a different IP?

All nodes in the 73.138.168.0/24 subnet (IP Calculator / IP Subnetting) are filtered.
All nodes in the 173.9.183.0/24 subnet (IP Calculator / IP Subnetting) are filtered.

You should change the external port (the left part) to be able to forward it to that host. All ports on a host must be unique, so if one node on that host is already listening on port 28967, the next one can’t use the same port on the same host.

Even if the node is on IP 173.9.183.xxx/29 and is the only node on that IP, it will still be filtered and not be able to use the standard -p 28967:28967?

Those are independent events.
Filtered means that only one node from this subnet will be selected for the segment.
It has nothing to do with -p 28967:28967.

The host system can’t share the same port; this is a limitation of the TCP stack.
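A small Go demonstration of that limitation: the second attempt to bind the same TCP port on one host fails, which is why each additional node needs its own external port (the left-hand value of the -p mapping).

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// First node's listener claims port 28967 on this host.
	first, err := net.Listen("tcp", ":28967")
	if err != nil {
		fmt.Println("first listener failed:", err)
		return
	}
	defer first.Close()

	// Binding the same port again fails with "address already in use".
	_, err = net.Listen("tcp", ":28967")
	fmt.Println("second listener error:", err)
}
```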