I thought that having a different public IP in the same geographical area would be useless, according to this article:
Let’s talk hypothetically for a moment and say your entire file is stored in a single city. If the power goes out in that city, or a natural disaster strikes, your data will be lost.
Another hypothetical situation to think about: what if all of your data is stored in the same region? In this scenario, you could potentially lose access to your data in the event of an outage for any reason, whether it’s a utility outage, natural disaster, or state-sponsored “service interruption.”
With today’s v0.14.3 release, we’ve implemented a feature called IP filtering, which will ensure that no two pieces of the same file are stored in the same geographical area, based on logical subnets.
Taking this approach ensures the network (and the data stored on it) remains decentralized with a wide geographical distribution. On the previous network, nodes were selected for new data storage on a per-node basis. Selecting nodes based on logical subnets means having more or fewer nodes in the same location won’t cause more or less data to be stored. A single 40 TB node would receive the same amount of data as ten 4 TB nodes on the same IP address.
If you’re storing data on the V3 network, or working on an integration, this means you’re much less likely to lose data. If you’re a storage node operator, this means that you won’t receive any more (or less) data if you’re running one, two, or 100 nodes from a single location.
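The subnet-based selection described above groups nodes by their /24 prefix, i.e. the first three octets of the public IP. A minimal sketch (the IP addresses below are made-up documentation addresses, not real node IPs) showing how two nodes can collapse into one "location":

```shell
#!/bin/sh
# Hypothetical node IPs; addresses whose first three octets match
# fall into the same /24 subnet and are treated as one location.
ips="203.0.113.10
203.0.113.55
198.51.100.7"

# Derive the /24 prefix for each address and list the distinct subnets.
echo "$ips" | cut -d. -f1-3 | sort -u
```

Here three nodes yield only two distinct /24 prefixes (`198.51.100` and `203.0.113`), so for selection purposes the two `203.0.113.x` nodes count as a single location.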
From a practical point of view, you can have more than one node in the same /24 subnet of public IPs if:
you have a few empty drives but less than needed to build a RAID6/RAID10;
you do not have RAID6/RAID10 and do not want to build it or waste disks for redundancy;
your hardware is not very fast (for example, a Raspberry Pi 3).
In those cases the traffic will be distributed between your nodes. Together they receive only as much data as a single node would, so it works as a native kind of RAID: in case of a drive failure you will lose only that one node, not everything; all the others will keep working.
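The "same total, split between nodes" point can be shown with a quick back-of-the-envelope calculation. The ingress figure below is purely hypothetical, chosen only to illustrate the split:

```shell
#!/bin/sh
# Hypothetical daily ingress for one /24 subnet (made-up number).
# The subnet total stays the same no matter how many nodes share it;
# each node just receives a smaller slice.
subnet_ingress_gb=100

for nodes in 1 2 10; do
  echo "$nodes node(s): $((subnet_ingress_gb / nodes)) GB per node, ${subnet_ingress_gb} GB total"
done
```

So ten nodes behind one IP each see a tenth of the traffic, but if one drive dies you lose only that tenth of the data.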
You can read more here: RAID vs No RAID choice
Audits just check whether data is still there; they don’t test speed or anything else. If you get a higher-speed connection, you will immediately start winning more races to get pieces. Node selection currently doesn’t use any speed metrics to prefer faster nodes over others. Keep in mind that more speed doesn’t really help beyond about 50–100 Mbit/s; at that point you’re already getting pretty much the maximum amount of traffic. And besides raw speed, latency is also a factor: if your latency to the customer is high, more bandwidth is not going to help with that.
Ohhh boi!! This is really hardcore. I am stuck at this part as well. My QNAP doesn’t get identified and I am also unable to find the path. I am using “File Station” to upload my identity, and it is just as you guys are saying: I see one path in “File Station” and a different one in the “QNAP storj node setup interface”. Wooouuuu…
Then I can only suggest removing the QNAP app and using the CLI method instead: CLI Install - Node Operator
You already have docker (Container Station), a statically mounted disk (QNAP does this automatically), and an identity, so you can run the storagenode via ssh.
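As a rough sketch of what the CLI route looks like over ssh, following the flags from Storj’s documented CLI install: the wallet, email, address, storage size, and the two QNAP share paths below are all placeholders you must replace with your own values, and port numbers may differ in your setup.

```shell
#!/bin/sh
# Sketch of running the storagenode container on a QNAP via ssh.
# Every value in angle-bracket style (0x..., you@example.com, paths)
# is a placeholder, not a working configuration.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -p 127.0.0.1:14002:14002 \
  -e WALLET="0x..." \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.external.address:28967" \
  -e STORAGE="2TB" \
  --mount type=bind,source=/share/identity/storagenode,destination=/app/identity \
  --mount type=bind,source=/share/storagenode,destination=/app/config \
  --name storagenode storj/storagenode:latest
```

The two `--mount` sources are hypothetical QNAP share paths; point them at the folder where you uploaded your identity and at the statically mounted disk you want the node to fill.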