Put All the Hardware to Work

You can have as many nodes as you want; they will be used as one huge node. This is almost equivalent to RAID, but unlike RAID you spread both the load and the risk.
Regarding RAID, you can read here:

And take a look at the RAID vs. no RAID choice as well.

In v3 we do not have replication anymore; only erasure codes are used. You need any 29 pieces out of 80 to reconstruct the file, so replication logic is no longer relevant.
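
If you want to see that 29-of-80 property in action, here is a minimal sketch using the github.com/klauspost/reedsolomon library. This is just an illustration of Reed-Solomon erasure coding with the numbers from this post, not the code Storj actually runs:

```go
// erasure_sketch.go: a minimal illustration of 29-of-80 erasure coding
// using github.com/klauspost/reedsolomon. NOT Storj's implementation;
// it only demonstrates the "any 29 of 80 pieces" property.
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const dataShards, parityShards = 29, 51 // 29 + 51 = 80 pieces per segment

	enc, err := reedsolomon.New(dataShards, parityShards)
	if err != nil {
		log.Fatal(err)
	}

	segment := bytes.Repeat([]byte("example segment data "), 1000)

	// Split the segment into 29 data pieces, then compute 51 parity pieces.
	shards, err := enc.Split(segment)
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil {
		log.Fatal(err)
	}

	// Pretend 51 of the 80 pieces are lost, including every original data
	// piece; only 29 parity pieces remain.
	for i := 0; i < 51; i++ {
		shards[i] = nil
	}

	// Any 29 surviving pieces are enough to rebuild the missing ones.
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}

	var out bytes.Buffer
	if err := enc.Join(&out, shards, len(segment)); err != nil {
		log.Fatal(err)
	}
	fmt.Println("segment recovered:", bytes.Equal(out.Bytes(), segment))
}
```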

We want to be as decentralized as possible, so we select only one node from each /24 subnet of public IPs for every piece of a segment, to make sure that pieces of the same segment do not end up in the same physical place or with the same ISP. As a result, all your nodes behind the same /24 subnet of public IPs work as one node.
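
A toy sketch of that rule, assuming a made-up Node type and a simple random pick (the satellite's real selection logic is more involved): group candidates by the /24 of their public IP and keep at most one node per subnet.

```go
// subnet_sketch.go: a toy illustration of "one node per /24 subnet".
// The Node type and the random pick are placeholders, not satellite code.
package main

import (
	"fmt"
	"math/rand"
	"net"
)

type Node struct {
	ID string
	IP string
}

// subnet24 returns the /24 network of a public IPv4 address,
// e.g. "203.0.113.10" -> "203.0.113.0/24".
func subnet24(ip string) string {
	parsed := net.ParseIP(ip).To4()
	if parsed == nil {
		return ip // not IPv4; treat the raw address as its own group
	}
	return parsed.Mask(net.CIDRMask(24, 32)).String() + "/24"
}

// onePerSubnet keeps a single randomly chosen node from every /24 subnet.
func onePerSubnet(nodes []Node) []Node {
	bySubnet := make(map[string][]Node)
	for _, n := range nodes {
		key := subnet24(n.IP)
		bySubnet[key] = append(bySubnet[key], n)
	}
	selected := make([]Node, 0, len(bySubnet))
	for _, group := range bySubnet {
		selected = append(selected, group[rand.Intn(len(group))])
	}
	return selected
}

func main() {
	nodes := []Node{
		{"node-a", "203.0.113.10"},
		{"node-b", "203.0.113.77"}, // same /24 as node-a: only one is eligible
		{"node-c", "198.51.100.5"},
	}
	for _, n := range onePerSubnet(nodes) {
		fmt.Println(n.ID, subnet24(n.IP))
	}
}
```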

The problem of access and speed is solved differently: when a customer wants to upload a file, their uplink requests 110 random nodes and starts uploads in parallel; as soon as the first 80 finish, all remaining uploads are canceled. The same goes for downloads: the uplink requests 39 nodes, starts downloads in parallel, and cancels the rest once the first 29 are downloaded (the customer needs only 29 out of 80 pieces to reconstruct the file). As a result, files end up stored on the fastest and closest nodes to the customer's location. However, because node selection is random, they are still spread across the globe: Visualizing Decentralized Data Distribution with the Linkshare Object Map
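
A rough sketch of that long-tail cancellation, with a made-up transferPiece and random delays standing in for real network transfers:

```go
// longtail_sketch.go: a toy model of "start many, keep the fastest".
// transferPiece is a stand-in with random delays; the real uplink
// obviously talks to real nodes and does much more.
package main

import (
	"context"
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// transferPiece simulates uploading or downloading one piece; it aborts
// early if the shared context is canceled.
func transferPiece(ctx context.Context) error {
	delay := time.Duration(50+rand.Intn(200)) * time.Millisecond
	select {
	case <-time.After(delay):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// firstN starts one transfer per candidate node and returns once `needed`
// of them have succeeded; everything still in flight is canceled.
func firstN(candidates, needed int) int {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	var wg sync.WaitGroup
	results := make(chan error, candidates) // buffered so late finishers never block

	for i := 0; i < candidates; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- transferPiece(ctx)
		}()
	}

	succeeded, received := 0, 0
	for err := range results {
		received++
		if err == nil {
			succeeded++
		}
		if succeeded >= needed || received == candidates {
			cancel() // long-tail cancellation: drop the slow remainder
			break
		}
	}
	wg.Wait()
	return succeeded
}

func main() {
	fmt.Println("upload: pieces stored   =", firstN(110, 80)) // keep first 80 of 110
	fmt.Println("download: pieces fetched =", firstN(39, 29)) // keep first 29 of 39
}
```

The buffered channel is the important detail here: slow or canceled transfers can still report their result without blocking after the fast ones have already won the race.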
