Distribution of erasure pieces of one segment

Hi Everyone,

I was just wondering: some of the previous thread discussions say that 95% of the ingress traffic goes to the vetted nodes of the network and the remaining 5% goes to the unvetted nodes. How does this percentage correlate to the erasure piece distribution? Since every segment has 80 pieces, how are these pieces distributed across the network? Does it mean that for every segment of file F, 95% of the erasure pieces go to vetted nodes and the remaining 5% go to unvetted nodes?

That’s exactly what it means. Though it initiates 110 uploads, so it’s probably 6 pieces to unvetted nodes and 104 to vetted nodes. If those 6 nodes all happen to be among the fastest 80, you could technically end up with at worst 7.5% of pieces on unvetted nodes for a segment. Though it could also be 0% if none of them are among the fastest 80.

Either way, on a per-segment basis it’s never higher than that 7.5%, which would be absolutely no problem even if all of those pieces were lost, and that’s unlikely to happen to begin with.
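
To make that arithmetic concrete, here is a rough sketch in Go (the 110, 80 and 5% figures come straight from this thread; the exact rounding behaviour is my assumption, not something the satellite guarantees):

package main

import (
    "fmt"
    "math"
)

func main() {
    const initiated = 110 // uploads started per segment
    const kept = 80       // pieces kept after long-tail cancelation

    // Roughly 5% of the initiated uploads go to unvetted nodes.
    unvetted := int(math.Round(0.05 * initiated)) // 5.5 -> 6
    vetted := initiated - unvetted
    fmt.Println(unvetted, "unvetted /", vetted, "vetted uploads started")

    // Worst case: all unvetted uploads are among the 80 fastest and get kept.
    worstCase := float64(unvetted) / float64(kept) * 100
    fmt.Printf("worst-case share on unvetted nodes: %.1f%%\n", worstCase) // 7.5%
}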

3 Likes

Thanks! That really helps!

@BrightSilence can you please explain what you actually mean by the fastest 80 nodes? What I understood is that if there are 110 erasure pieces per segment, then each of them will be sent to a distinct node, which can be either vetted or unvetted. So are 6 pieces out of the 110 always sent to unvetted nodes for every segment? What does that have to do with the fastest 80 nodes?

Storj uses long tail cancelation. Only 80 pieces are actually kept. Once 80 pieces are uploaded, the other transfers are canceled. This ensures that a few slow or unresponsive nodes don’t slow down the entire transfer.
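
Very roughly, the mechanism looks like the sketch below. This is not the actual uplink code, just a minimal illustration (with the thread's numbers) of starting 110 transfers and canceling whatever is still running once 80 have finished:

package main

import (
    "context"
    "fmt"
    "math/rand"
    "time"
)

// uploadPiece simulates sending one erasure piece to a storage node.
// It fails if the transfer is canceled before the node finishes.
func uploadPiece(ctx context.Context) error {
    nodeSpeed := time.Duration(rand.Intn(200)) * time.Millisecond
    select {
    case <-time.After(nodeSpeed):
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

func main() {
    const initiated = 110 // transfers started per segment
    const needed = 80     // success threshold: keep the first 80 that finish

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    done := make(chan struct{}, initiated)
    for i := 0; i < initiated; i++ {
        go func() {
            if uploadPiece(ctx) == nil {
                done <- struct{}{}
            }
        }()
    }

    for kept := 0; kept < needed; kept++ {
        <-done
    }
    cancel() // long-tail cancelation: the remaining ~30 transfers are aborted
    fmt.Println("kept", needed, "pieces, canceled the rest")
}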

1 Like

Okay. So what would happen to the remaining 30 pieces? From the forum discussion: Trash comparison thread
I understand that long-tail cancellation is different from zombie segments.
So what is the actual purpose of creating an extra 30 pieces, and where do the pieces go after long-tail cancellation?

Many of those never finish and don’t get stored on nodes at all. If the transfers do finish, garbage collection will clean them up later. The only purpose is to make sure that all transfers finish fast.

If you only initiated 80 transfers, a single slow node could slow your upload speed to a crawl. Or worse, an unresponsive node could make it fail altogether. By initiating 30 more transfers, there can be up to 30 slow or unresponsive nodes without compromising transfer speed.
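
A small illustration of why that helps (simulated numbers, not real measurements): with only 80 transfers you wait for the slowest of those 80, while with 110 transfers and long-tail cancelation you only wait for the 80th-fastest node.

package main

import (
    "fmt"
    "math/rand"
    "sort"
    "time"
)

func main() {
    // Hypothetical per-node upload times, including one unresponsive node.
    times := make([]time.Duration, 110)
    for i := range times {
        times[i] = time.Duration(50+rand.Intn(150)) * time.Millisecond
    }
    times[3] = 30 * time.Second // an unresponsive node among the first 80

    // Only 80 transfers initiated: you wait for the slowest of those 80.
    only80 := append([]time.Duration(nil), times[:80]...)
    sort.Slice(only80, func(i, j int) bool { return only80[i] < only80[j] })
    fmt.Println("80 transfers:  wait for", only80[79])

    // 110 transfers with long-tail cancelation: you wait for the 80th fastest.
    all110 := append([]time.Duration(nil), times...)
    sort.Slice(all110, func(i, j int) bool { return all110[i] < all110[j] })
    fmt.Println("110 transfers: wait for", all110[79])
}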

Zombie segments are different, and some things have probably changed since the post you linked. Zombie segments only exist if the first segments of a file finish but subsequent segments either fail or get canceled. Those too will be cleaned up by garbage collection.

1 Like

Okay. Thanks!
So for every segment upload, is the number of erasure pieces going to unvetted nodes fixed at 6 or more?

It’s 5% of those 110 initiated transfers, so depending on how it’s rounded, it’s 5 or 6. Of course, whether they get to keep a piece depends on whether they were among the 80 fastest to finish.

2 Likes

I have another follow-up question. It’s assumed that even if up to 30 nodes are unresponsive while uploading the erasure pieces of one segment, the transfer will still succeed. What would happen if more than 30 nodes were unresponsive while uploading the pieces of one segment? Would Storj have to fix anything, and how was the buffer of 30 extra pieces per segment chosen?

This doesn’t really happen. The issue is usually on the upload side, where the client is too slow uploading, so not enough pieces are saved to the network. In that case the uplink client will error as shown below, and the satellite has a similar check as well.

2022/04/09 08:03:01 ERROR : ...ring.gpg~: Failed to copy: uplink: stream: ecclient: successful puts (46) less than success threshold (80)
2022/04/09 08:03:01 ERROR : ...ring.gpg~: vfs cache: failed to upload try #1, will retry in 10s: vfs cache: failed to transfer file from cache to remote: uplink: stream: ecclient: successful puts (46) less than success threshold (80)

There’s nothing to fix. 110 pieces are never stored on the network, and they don’t need to be. The erasure coding means only 29 pieces of the 80 (or 110) are needed to recreate the segment.
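
Just to illustrate that 29-of-n property, here is a minimal sketch using the klauspost/reedsolomon Go library. This is not Storj’s own erasure coding code; the shard counts simply mirror the numbers from this thread:

package main

import (
    "fmt"

    "github.com/klauspost/reedsolomon"
)

func main() {
    // 29 data shards + 81 parity shards = 110 pieces, matching the thread's numbers.
    enc, err := reedsolomon.New(29, 81)
    if err != nil {
        panic(err)
    }

    // Split a dummy 64 KiB "segment" into 29 data shards and compute parity.
    shards, _ := enc.Split(make([]byte, 64*1024))
    if err := enc.Encode(shards); err != nil {
        panic(err)
    }

    // Simulate losing all but 29 of the 110 pieces.
    for i := 0; i < len(shards)-29; i++ {
        shards[i] = nil
    }

    // Any 29 surviving pieces are enough to rebuild the rest of the segment.
    if err := enc.Reconstruct(shards); err != nil {
        panic(err)
    }
    ok, _ := enc.Verify(shards)
    fmt.Println("reconstructed from 29 of 110 pieces:", ok)
}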

2 Likes