An empirical latency-based geoseparation idea

Just had an idea and wanted to share it while I remember. Comments welcome!

Right now, storage of erasure shares is distributed by means of /24 IP blocks. We know node operators can work around this to some degree by, let's say, using multiple ISPs or setting up VPNs. However, this method has a crucial advantage: it is extremely simple and fast at node selection time.

I'm now imagining a network of, let's say, 40 «measurement nodes» controlled by the satellite owner and located in various parts of the world, preferably so that each one is in a different data center, different city, different geographic area. Each of them periodically probes all storage nodes and measures response latency (maybe as a side effect of audits, so these wouldn't actually be new queries?). For each storage node we then find the measurement node with the lowest latency and classify the storage node as belonging to that measurement node's geographic area.
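The classification step could be sketched as follows. This is a hypothetical illustration, not Storj's actual API; the node names and latency figures are made up.

```python
# Hypothetical sketch: classify each storage node by the measurement node
# that observes the lowest latency to it.
def classify(latencies):
    """latencies: {storage_node: {measurement_node: latency_ms}}.
    Returns {storage_node: area}, where the area is simply the name of
    the lowest-latency measurement node."""
    return {node: min(probes, key=probes.get)
            for node, probes in latencies.items()}

# Illustrative measurements for two storage nodes.
latencies = {
    "node-a": {"paris": 12.0, "frankfurt": 25.0, "madrid": 40.0},
    "node-b": {"paris": 80.0, "frankfurt": 70.0, "madrid": 15.0},
}
print(classify(latencies))  # {'node-a': 'paris', 'node-b': 'madrid'}
```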

Then, at storage node selection time, nodes are picked so that no geographic area is selected more than X times, for some tunable X. The area would be identified by a single small number, one byte, so selection could again be fast.
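A minimal sketch of that capped selection, assuming each node already carries a one-byte area id (the names and the cap value below are illustrative):

```python
import random

# Hypothetical sketch: pick upload candidates at random, but let no
# geographic area supply more than max_per_area of them.
def select(nodes, node_area, count, max_per_area):
    per_area = {}
    chosen = []
    for node in random.sample(nodes, len(nodes)):  # random candidate order
        area = node_area[node]
        if per_area.get(area, 0) < max_per_area:
            chosen.append(node)
            per_area[area] = per_area.get(area, 0) + 1
            if len(chosen) == count:
                break
    return chosen

# Three nodes in area 1, one each in areas 2 and 3.
node_area = {"n1": 1, "n2": 1, "n3": 1, "n4": 2, "n5": 3}
picked = select(list(node_area), node_area, count=3, max_per_area=1)
# With a cap of 1 per area, `picked` holds one node from each of the
# three areas, regardless of the random order.
```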

It would be much more difficult to cheat, as it's hard for a single location to have low latency to more than a small number of data centers. VPNs only add latency, and multiple ISPs will likely connect to similarly located backbone nodes anyway.

As for the feasibility of running so many measurement nodes, they could be the smallest VPSes at, let's say, Azure, which is already present in 54 regions. Alternatively, I suspect it should be possible to run a smaller number of measurement nodes and define a geographic area by the two lowest latencies. E.g. the area with the two lowest latencies to Paris and Frankfurt is likely different from the area with low latencies to Paris and Madrid.
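The two-lowest-latencies variant could look like this (again a hypothetical sketch; the city names and numbers are invented for illustration):

```python
# Hypothetical sketch: identify an area by the *pair* of measurement nodes
# with the two lowest latencies, instead of the single closest one.
def area_id(probes, k=2):
    """probes: {measurement_node: latency_ms} for one storage node.
    Returns a sorted tuple of the k closest measurement nodes."""
    return tuple(sorted(sorted(probes, key=probes.get)[:k]))

near_frankfurt = area_id({"paris": 10.0, "frankfurt": 14.0, "madrid": 60.0})
near_madrid = area_id({"paris": 10.0, "madrid": 14.0, "frankfurt": 60.0})
print(near_frankfurt)  # ('frankfurt', 'paris')
print(near_madrid)     # ('madrid', 'paris')
```

Both example nodes are closest to Paris, yet they land in different areas because their second-closest measurement nodes differ, which is exactly what lets fewer measurement nodes distinguish more regions.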

While I don't think the current approach is so bad that it must be changed, the above idea would probably allow for a fairer distribution of data and take into account the fact that tech hubs are likely to amass a large number of IP blocks.


I think this is an interesting idea which might give better incentives to operators.

Storj developers aren't set on any new approach yet, but we do recognize that the node selection algorithm possibly leaves some gains on the table in terms of resilience and fairness.


I definitely like this idea, or something very similar to it. Though with some setups involving VPN or VPS routing it may not be a perfect solution yet, as people could still use those to appear spread out. But it's probably a lot better than IP location alone.

It would also incentivize more node operators to pop up in geographically less crowded areas. Though it would kind of suck for me personally, as the Netherlands is one of the most node-saturated places on the planet.


Yeah, it’s a lot more difficult to fake being close to something in terms of latency.

Tech hubs are likely to also have a lot of customers wanting to store data. With this idea it would indeed be more difficult for nodes in tech hubs to be selected as upload candidates, but on the other hand, when they are selected, they'd be more likely to win races against nodes farther away from those local customers.

Absolutely :slight_smile: I like the idea; honestly, anything has to be better than the current method.

There would be a scalability concern with the list of nodes given for upload - the slow-tail cutoff isn't scaling well. As a customer, I would want to think I was only ever given the top-performing nodes - it's very easy for bad nodes to slow down and stall the network (it's very random, but it's annoying).

I really think that, in the long term, nodes will need to be ranked on many things. Latency is important but can be faked easily; combined with geolocation, availability, and performance over 30 days, though, maybe Storj could work towards a gold-level tier for the S3 gateway - the other, slower nodes would still be used, but perhaps more for libuplink or slower, glacier-type storage where performance isn't important.

Storj could then award prizes for best online node, largest node… smallest node with the largest egress - makes it fun :slight_smile:

Sorry, I believe you have misunderstood the context of the post. It’s not about ranking nodes better/worse, it’s about ensuring data is stored on nodes that won’t fail together.


Yep I understand that :slight_smile: I’m afraid you’ve not understood the angle I’m coming from, Storj will understand.

It is important to ensure the node characteristics are understood - geolocation and latency alone would not be enough to establish "a node that won't fail", is all I was saying; other characteristics need to be considered too.

With regard to latency, without posting something that would give too much detail: it is very easy, using technologies on AWS, to implement a Storj reverse proxy gateway that can authenticate on identity information and DRPC calls and pass requests for blocks through to a node over tunnels. It would be technically easy to craft a node response to the standard preamble that answers very quickly. It would also allow for edge caching, so storage nodes could pre-load shards.

One of the nice things about the random nature of node selection is that it is technically challenging to game it for an advantage [not impossible] - so as we get into more technical criteria, it opens up attack vectors for abuse.

But that's my view - all I was saying is that I like your concept in principle. No need to say I don't understand; that's just rude and aggressive - not that I care :stuck_out_tongue: I'll keep posting my views until Storj asks me not to - didn't think this was that sort of forum :slight_smile:


Storj's whitepaper strongly depends on the independence of failures, and one way to ensure that is making sure erasure shares are placed on nodes as far apart from each other as possible.

Which is why I suggested bundling audits with these queries.

Please be careful when talking about randomness in the presence of a potentially malicious adversary. I'm pretty sure that, let's say, Amazon is still capable of fielding an army of nodes on more class C blocks than the whole current Storj network.

In theory this should already be the case, but in practice the connection latency is only a tiny part of the chain; compared to the time to process and store data, it's ultimately irrelevant.

That said, if it's better for the network, this shouldn't stop anyone from implementing that feature. The incentive it creates is good for the network as well: the fewer nodes there are in a certain place, the more you can earn running a node there. Which promotes more decentralization.

You omitted the most important word in that quote. @Toyoo said "nodes that won't fail TOGETHER". Storj is built to deal with failures as long as they aren't large, correlated failures. A wide spread of data ensures that failures, if they occur, aren't correlated and are spread out over time.

It's ultimately irrelevant whether someone uses tricks like you described to fake a faster ping response: whether you do that or not, the system @Toyoo suggests would still see the lowest ping times from the same location. And nobody is suggesting using it to rank and judge node performance. If you wanted that, you'd need to base it on the time to respond with the data requested, which could be an extension of how audits work, for example. But that is not what is being suggested here.


I've noticed someone on reddit linked this paper, which seems quite relevant to the idea: Estimation of Internet Node Location by Latency Measurements — The Underestimation Problem (PDF). Linking it here for reference.


I think this idea should get more traction! ^^
The /24 subnet thing isn't really great from an SNO perspective, even though it probably gets the job done at the network level.

Why isn’t this “votable”? :slight_smile:

This is more a research idea than a single feature. Though, if it helps gauge interest in doing the research, let's make it votable…
