Storj as backbone for YouTube replacement?

Is Storj designed so that it could operate as the backbone platform for a YouTube replacement service like VOICE?

Both Vitalik and Blumer have recently expressed interest and are encouraging an end to tribalism within and between the ETH and EOS communities.

Has anyone on the Storj team reached out?

Tardigrade already supports native video streaming. I would suggest that the interested parties contact Storj Labs; we have a partner page they can consult, and if they decide they want to pursue a collaboration, they can submit a proposal on the supplied form for open source partners or technology partners.

How does Storj support that? When you watch a video you download it part by part, but if it is encrypted, don't you need to download the whole thing before you can watch it? If you are on 4G or another slow connection, it takes a long time to download a 1 GB video.

Data is streamed in parallel from a network of thousands of nodes. Data is striped at the bit level to support seeking and high-performance use cases, and encryption and erasure coding are applied to small stripes within each segment, so a client only needs to fetch and decrypt the ranges it is actually playing, not the whole file. You can find this described in the white paper.

[Diagram from the whitepaper: files → segments → stripes → erasure shares → pieces]
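
For anyone curious what this looks like from the client side, here is a rough sketch of a ranged (seek-style) download using the storj.io/uplink Go library; the bucket and object names are made up, and the exact API may have shifted since this was written:

```go
// Rough sketch: download a byte range of a video object, as a player
// would after a seek. Bucket/key names are invented for illustration.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"storj.io/uplink"
)

func main() {
	ctx := context.Background()

	// An access grant (serialized API key + encryption key) is assumed
	// to be available in the environment.
	access, err := uplink.ParseAccess(os.Getenv("STORJ_ACCESS"))
	if err != nil {
		log.Fatal(err)
	}

	project, err := uplink.OpenProject(ctx, access)
	if err != nil {
		log.Fatal(err)
	}
	defer project.Close()

	// Fetch a 4 MiB window starting 100 MiB into the file. Only the
	// stripes covering this range are requested, in parallel, from the
	// fastest nodes holding pieces of the relevant segments.
	download, err := project.DownloadObject(ctx, "videos", "talk.mp4",
		&uplink.DownloadOptions{Offset: 100 << 20, Length: 4 << 20})
	if err != nil {
		log.Fatal(err)
	}
	defer download.Close()

	if _, err := io.Copy(os.Stdout, download); err != nil {
		log.Fatal(err)
	}
}
```

A video player would simply issue these ranged reads as the user seeks around the file.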

You can stream videos live with pretty decent performance, and it will get even better as the network grows. The limiting factor right now is concurrent requests (if a video gets 'hot', its performance will degrade). This will be solved when we implement hot file autoscaling (described in the whitepaper).

I have streamed videos live in venues across the world - and haven’t had any problems.

Just put out a tweet about it here: https://twitter.com/storjproject/status/1210238130365882370?s=20

Hmm, it would be good to deploy a YouTube-style service on Storj.

Perhaps Storj reaching out to Vitalik and Blumer/Larimer and letting them know would be a good thing? :slightly_smiling_face: :grinning:

Vitalik has been aware of our project since it first started. Other interested parties have already been mentioned in our tweets.

It would be an awesome collaboration and an amazing opportunity for egress earnings for all SNOs. Let’s all hope something good happens! :grinning: :slightly_smiling_face:

RS settings during upload can already be tuned to support CDN-like scenarios, right? It’s just that right now you have to predict at upload time what kind of availability you will need, so it doesn’t yet handle fluctuations in popularity.

Exactly, RS can be tuned to support high-volume use cases today if high demand is consistent or anticipated. But what if you were serving an indie video game that gets posted on Reddit and unexpectedly goes viral? That would likely result in 502 errors if the demand spike was not planned for.
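
To make that trade-off concrete, here is a quick back-of-the-envelope sketch in Go; the (k, n) values are illustrative only, not Storj's production defaults:

```go
// Illustrative only: raising n at a fixed k means more nodes can serve
// the file (more headroom for demand spikes), at the cost of a larger
// expansion factor, i.e. more at-rest storage paid for up front.
package main

import "fmt"

func main() {
	type rs struct{ k, n int }
	const fileMiB = 1024.0 // a 1 GiB video

	for _, cfg := range []rs{{20, 40}, {20, 80}, {20, 110}} {
		expansion := float64(cfg.n) / float64(cfg.k)
		fmt.Printf("(k=%d, n=%d): %.1fx expansion, %.0f MiB at rest, any %d of %d pieces rebuild the file\n",
			cfg.k, cfg.n, expansion, fileMiB*expansion, cfg.k, cfg.n)
	}
}
```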

In this kind of scenario, Hot File Autoscaling will eventually make things much more efficient.

Here is the feature overview from Section 6.1 of the whitepaper.

Occasionally, users of our system may end up delivering files that are more popular than
anticipated. While storage node operators might welcome the opportunity to be paid for
more bandwidth usage for the data they already have, demand for these popular files
might outstrip available bandwidth capacity, and a form of dynamic scaling is needed.

Fortunately, Satellites already authorize all accesses to pieces, and can therefore meter
and rate limit access to popular files. If a file’s demand starts to grow more than current
resources can serve, the Satellite has an opportunity to temporarily pause accesses if necessary, increase the redundancy of the file over more storage nodes, and then continue
allowing access.

Reed-Solomon erasure coding has a very useful property. Assume a (k, n) encoding,
where any k pieces are needed of n total. For any non-negative integer number x, the first
n pieces of a (k, n + x) encoding are the exact same pieces as a (k, n) encoding. This means
that redundancy can easily be scaled with little overhead.
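
(Aside, not from the whitepaper: one way to see why this works is the textbook construction where each piece is the data polynomial evaluated at a fixed point,

```latex
c_i = p(\alpha_i), \qquad \deg p \le k - 1, \qquad i = 1, 2, \ldots, n
```

so each piece c_i depends only on the data and on its own point \alpha_i, never on n; going from (k, n) to (k, n + x) just evaluates p at x new points and leaves the first n pieces untouched.)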

As a practical example, suppose a file was encoded via a (k = 20, n = 40) scheme, and
a Satellite discovers that it needs to double bandwidth resources to meet demand. The
Satellite can download any 20 pieces of the 40, generate just the last 40 pieces of a new
(k = 20, n = 80) scheme, store the new pieces on 40 new nodes, and—without changing
any data on the original 40 nodes—store the file as a (k = 20, n = 80) scheme, where any
20 out of 80 pieces are needed. This allows all requests to adequately load balance across
the 80 pieces. If demand outstrips supply again, only 20 pieces are needed to generate
even more redundancy. In this manner, a Satellite could temporarily increase redundancy
to (20, 250), where requests are load balanced across 250 nodes, such that every piece of
all 250 are unique, and any 20 of those pieces are all that is required to regenerate the
original file.

On one hand, the Satellite will need to pay storage nodes for the increased redundancy,
so content delivery in this manner has increased at-rest costs during high demand, in
addition to bandwidth costs. On the other hand, content delivery is often desired to be
highly geographically redundant, which this scheme provides naturally.
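
To make that loop concrete, here is a purely conceptual sketch of the decision a Satellite would have to make; this is not actual Satellite code, and the names and thresholds are invented:

```go
// Conceptual sketch only: meter demand for a hot segment and, when it
// exceeds what the current n pieces can serve, widen the encoding from
// (k, n) to (k, n+x) by deriving x new pieces from any k existing ones.
package main

import "fmt"

type segment struct {
	k, n         int     // current erasure coding: any k of n pieces rebuild it
	demandMBps   float64 // egress demand metered by the Satellite
	perPieceMBps float64 // bandwidth one storage node comfortably serves
}

// extraPieces returns how many additional pieces (x) are needed so that
// n+x nodes can jointly serve the observed demand.
func extraPieces(s segment) int {
	capacity := float64(s.n) * s.perPieceMBps
	if s.demandMBps <= capacity {
		return 0
	}
	return int((s.demandMBps-capacity)/s.perPieceMBps) + 1
}

func main() {
	s := segment{k: 20, n: 40, demandMBps: 900, perPieceMBps: 10}
	if x := extraPieces(s); x > 0 {
		fmt.Printf("demand %.0f MB/s > %d nodes x %.0f MB/s: download any %d pieces, derive %d new ones, store as a (%d, %d) scheme\n",
			s.demandMBps, s.n, s.perPieceMBps, s.k, x, s.k, s.n+x)
	}
}
```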
