Switching networks to an existing subnet


I started 2 Storj nodes around 2 weeks ago. One I set up behind a VPN to a VPS in order to avoid overlapping subnets; however, the VPS ran out of bandwidth, so I’m planning on taking that node off the VPN. This means that both nodes will now be sharing the same subnet.

I’m concerned that, since they might hold overlapping data from previously being on different subnets, this could result in less egress, because a download might now be served by only one of the nodes.

I was curious if my understanding is correct, and if I should instead just restart the previously VPN’d node on my own subnet.

I’m not sure if this has any impact on vetting either. In theory, would consolidating them speed up vetting, since they were initially being vetted on separate subnets?

You have 2 unique nodes, and they both hold unique data. They will, however, share ingress between them when on the same subnet, since the satellites treat nodes on the same /24 as a single node when it comes to ingress.
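As a rough illustration (this is not Storj’s actual selection code), “same /24” just means the first three octets of the public IPv4 address match:

```python
import ipaddress

def same_slash24(ip_a: str, ip_b: str) -> bool:
    """True if both IPv4 addresses fall inside the same /24 network."""
    net_a = ipaddress.ip_network(f"{ip_a}/24", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/24", strict=False)
    return net_a == net_b

# Example addresses are documentation ranges, not real node IPs:
print(same_slash24("203.0.113.10", "203.0.113.200"))  # True  -> ingress is shared
print(same_slash24("198.51.100.7", "203.0.113.10"))   # False -> ingress is independent
```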

It might slow down the vetting process slightly depending on how much data each node has. Vetting depends on each satellite completing 100 audits, and audits are chosen at random. So the more data each node stores, the higher the chance of an audit.
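To make that concrete, here is a toy model of how stored data affects audit frequency. The audit rate and data figures below are made up for illustration, not Storj’s real numbers:

```python
def expected_audits_per_day(node_bytes: float, network_bytes: float,
                            audits_per_day: float) -> float:
    """Expected audits hitting one node per day, assuming each audit picks a
    node with probability proportional to the data it stores."""
    return audits_per_day * node_bytes / network_bytes

# Hypothetical figures: 100 GB on the node, 10 TB visible to the satellite,
# 1000 audits issued per day:
rate = expected_audits_per_day(100e9, 10e12, 1000)
print(rate)        # audits per day for this node
print(100 / rate)  # days to reach the 100 audits needed for vetting
```

With these invented numbers the node would see about 10 audits a day, so vetting would take roughly 10 days; a node with less data would wait proportionally longer.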

Keep in mind that it is against the TOS to run a node behind a VPN. The VPN will also cause your node to lose more upload/download races due to the increased latency.

Typically, if they had both been hosted on the same subnet from the start, they would have no duplicate data, from my understanding. In this case, since they were initially on separate subnets, isn’t it very possible that the nodes hold duplicate data?

Also assuming they’re already vetted, they would remain vetted when changing IPs right? The latency is quite low, my ping time to the VPS that I was using was sub 2 ms.

They won’t need to go through vetting again, even with the ip change. If by duplicate data you mean 2 shards of the same segment, then yeah I suppose that it is possible both nodes might have shards belonging to the same segment.

Data stored by the network is erasure coded so the customer client only needs 29 shards of 80 total to reconstruct the original data. I don’t think this will have any meaningful effect on the egress. Someone, please tell me if I’m wrong.
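Plugging in the 29-of-80 figures quoted above gives the redundancy numbers. This is simple arithmetic on those quoted values, not a claim about the network’s current configuration:

```python
k, n = 29, 80  # pieces needed / pieces stored per segment, as quoted above

expansion_factor = n / k   # extra storage the network pays for safety
tolerated_losses = n - k   # pieces that can disappear before reconstruction fails

print(round(expansion_factor, 2))  # ~2.76x storage overhead
print(tolerated_losses)            # 51 pieces can be lost per segment
```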

I’m just worried that they have many shared shards, which would lead to egress decrease. If you’re right though then I think I’m fine. Thanks for the info.

Hello @ad329,
Welcome to the forum!

This limit exists for a reason: to avoid having multiple pieces of the same segment in the same physical place, so as not to risk the customers’ data. What you did damages the network and alters the default Storj network behavior (which, by the way, is forbidden by the Supplier Terms & Conditions). If your hardware or ISP went offline, those pieces would be lost, and the customer could end up in a situation where the remaining pieces are not enough to reconstruct the segment.
A lost file = a lost customer = no payments = no payouts to anyone.
Please stop doing this.
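The failure mode described above can be sketched with the 29-of-80 numbers mentioned later in the thread. This is a toy check, not actual repair logic:

```python
REQUIRED = 29  # pieces needed to reconstruct a segment

def segment_survives(pieces_reachable: int, pieces_at_lost_location: int) -> bool:
    """True if a segment can still be rebuilt after one location goes offline."""
    return (pieces_reachable - pieces_at_lost_location) >= REQUIRED

# A segment already worn down to 30 reachable pieces survives losing a
# location holding one piece, but not a location hoarding two:
print(segment_survives(30, 1))  # True
print(segment_survives(30, 2))  # False
```

This is why concentrating several pieces of one segment behind a single connection raises the risk for that segment even though each individual piece is unique.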

There are no shared shards. We do not use replication.

Each piece is unique. However, since you are circumventing the safety limit by pretending to be in two physical locations, both of your nodes could hold more than one piece of the same segment.
Egress depends on the customers, not on your hardware or software choices. Not all data is downloaded back by the customers, so egress is not predictable. However, if your node is close to a customer who wants to download their data, it may win races more often. Your VPN node likely has higher latency than the other one and is therefore probably slower for customers, so it should lose races more often.

Running both nodes in the same subnet is fine. Just forward a different port to each node on your router and you’re done. Yes, they’ll share ingress.

And although (as AussieNick mentioned) a VPN can add latency… the fact that you have a separate /24 IP means you’ll get way more data and fill your space much faster. Even if Alexey tells anyone using the letters V, P and N in a post to “Get off my lawn, you young whippersnapper!” 😜

Sorry if I sound like that. I’m very concerned that we have too many VPN/proxy nodes, which trick the node selector into placing more data in the same physical place. That could be acceptable if the place had a high-availability configuration, where pieces cannot be offline longer than the requirements allow and, of course, must not be lost.
However, those setups usually don’t even have a UPS, let alone redundant hardware and connection lines.

Using a VPN is OK if you are behind CGNAT; otherwise you wouldn’t be able to participate in the network at all. But when you have a public IP, a VPN is usually used to trick the node selector, and the author used it for exactly that.

That’s a fair point, thanks for the info. Will switch to having both nodes on the same subnet.

Thank you!
I would prefer that the protocol handled this properly; however, that is not the case right now.
So we are forced to remind everyone that breaking the ToS is not OK, even if I can understand the intention to get more money out of the available hardware (I’m an SNO too). But if that would destroy the network, what’s the point? I want to receive a reward for a long time, not only until we lose a file because some Operators decided to use a VPN to circumvent the limit.
I still hope that we can implement something that will make using a VPN obsolete in cases where you have a public IP.