Does Storj even work for VIDEO STREAMING?

I think this is unrealistic to expect. Even locally it takes this long to pull a video from my network; everything would need to be cached to expect less than 1 sec load times…


I tried the 1080p video, it works fine, but there is a delay of 1.5 seconds, which may be too much for some.

However, there is another problem - the video is provided as a single large file, not HLS. It works just fine when the internet connection is good, but it would fail if the internet connection was marginal (say, 4G). HLS is better in this regard since the small segments get downloaded in separate connections, so it is less likely for the download to stop because of packet loss.

Of course, the video could be uploaded as HLS segments.
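
For reference, a minimal sketch of that segmenting step with ffmpeg (filenames and segment length are just placeholders, not tuned for Storj specifically):

./ffmpeg -i bbb.mp4 -c copy -f hls -hls_time 6 -hls_playlist_type vod -hls_segment_filename 'bbb_%03d.ts' bbb.m3u8

That remuxes without re-encoding and writes a VOD playlist plus roughly 6 second .ts segments, which could then be uploaded to the bucket and played from the .m3u8 URL.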

Storj also needs to update their node T&C, so I guess both are on par in terms of updates of legalese (-: Thank you for pointing out this note. I’m rather conservative in terms of legalese, so I’ll wait until CloudFlare actually publishes the terms explicitly, but it’s good to know that they at least attempt to.

I wonder how much of a difference the way the uploader uploads the contents makes, along with the location differences between the uploader, the gateway and the downloader.

In the scenario where the uploader uses an MT gateway, I assume the chunks will mostly be stored close to the MT gateway, and hence users far from the gateway will experience bigger latency.

In the scenario where the uploader uses libuplink or a local gateway, I assume the chunks will mostly be stored close to the uploader, and hence users far from the uploader will experience bigger latency.

A true CDN that aims to be fast for downloads around the world would probably actually try to mitigate the effect of upload races?

BTW: I only note what I see, people can make their own conclusions on whether response times are important for their video delivery or not. Generally I just follow the data released by Netflix, Youtube or Twitch as to what their metrics are for performance/watch time.

It’s absolutely not unrealistic to expect; a CDN for video can get first response time under 100ms. YouTube, when I test it, can most of the time be around 30ms on its best days, which is standard for CDN response times. https://www.cdnperf.com/

That is server response time, not video start time. Video start time will normally be 2-3x the server response time; if a server can respond in 30ms, then 120ms is generally when the first fragment will start.

Such as with this Storj reply, where the 2.5 seconds become 8-9 seconds. The “waiting for server response” is the most important part.
[screenshot: browser timing, 2023-01-14 at 9.51.48 am]

I am going to upload an HLS test of Big Buck Bunny to Google Cloud CDN as well; I'm just having a CORS issue with it.
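
The usual fix is a CORS policy on the bucket via gsutil, something like the sketch below (bucket name and origins are placeholders), which I'm still fiddling with:

cat > cors.json <<'EOF'
[
  {
    "origin": ["*"],
    "method": ["GET", "HEAD"],
    "responseHeader": ["Content-Type", "Content-Range", "Accept-Ranges"],
    "maxAgeSeconds": 3600
  }
]
EOF
gsutil cors set cors.json gs://YOUR_BUCKET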

Here is just an MP4 on google cloud CDN.
[Direct from a Google Bucket without CDN](https://storage.googleapis.com/testbbbvideo/bbb/bbb.mp4) ← currently busted, thanks google…

Response times are faster for HLS/DASH as the payload doesn't carry the whole container's overhead and each segment is only a few MB.

I wouldn’t recommend Google CDN unless you like burning money, cause it’s extremely expensive.

But you can't really compare Storj to companies that have massive servers around the world that are equally powerful and optimized to do that exact thing. Netflix has been working on this for more than 10 years; for streaming they used to use a CDN, which wasn't fast enough, then moved to Open Connect, which they designed for Netflix… Storj wasn't going for streaming when they first released, and while it's capable of streaming it's not the fastest, because there's much work to do before it can even compare to Netflix or YouTube as a streaming service.
But the thing is, YouTube and Netflix do not use the biggest video files for streaming. I host my full 80 GB rips on Storj and then stream them through my Plex, which is transcoded by my server, and it works very well.

But I do find it very interesting finding ways to actually speed up the streaming process. If you have better ways to do it, I'm very curious how you can re-encode movies to make them even faster to stream.


For personal use I think you would be fine; when paying as a business that makes money on its watch time or response times, that's different.

How Storj Built the Fastest and Lowest Cost Cloud Video Sharing Option

TBH, I can, because of the articles written by Storj about being the fastest (which is where I came from, out of curiosity) & because if performance isn't the benchmark for video then what is? A CDN is a CDN.

Storj Has the Fastest Video File Sharing and the Lowest Cost

Maybe it's a misunderstanding or it's miswritten on their part, because when I look at this it isn't framed as a streaming service but as a video sharing ability.
They're not saying they're the fastest streaming service, but the fastest video sharing for the price.
Because from my experience it could be a lot faster if you were going to use it for streaming. It's not easy to get the fastest speeds when using user-friendly ways.

I'd better not answer that with dollar facts about which companies would beat that, because if I was to use another service that I know is faster, it would cost me $1.2 USD to deliver that without an egress limit. The same on Storj would be $7/TB.

I honestly think, given that others have noted performance times, and I know I can prove it (I mean, you've seen Google Cloud, Cloudflare & Bunny in here, and I noted how to play HLS via Storj when the team didn't know), that rather than making statements about being fastest/cheapest, it would be a better idea to change that terminology to most distributed, or most decentralised. Because fastest/cheapest it is not, and being the cheapest isn't a metric; being fastest is generally always the metric for a CDN.

:face_with_monocle:

I don't disagree with you; they really shouldn't claim something unless they have 100% proof to back it up. I can't even upload to Storj half the time at the fastest speeds my internet can handle, because it's a lot more complicated.
When I first joined Storj there was no backing data about a streaming service. One thing I do know is it can be fast if used a different way than I use it. The thing is, you just can't become a streaming service without a lot of powerful hardware. I know half of the people hosting nodes on Storj don't have high-end hardware, me included, because I'm cheap and I like Raspberry Pis and SOC hardware.
If there was a second storagenode that people could run only on high-end hardware/internet speeds, they could easily match what they claim for streaming.

Yeah, if Storj had a few distributed main nodes, and then handed off the rest of the delivery to the community nodes, that might be a way to make the time to first byte faster.

I am still playing with it, including delivering the first few TS files from a CDN and then using Storj to pull the rest. But I dunno, that's generally not a great idea in practice, sorta like loading a website with heaps of different JS files from different CDN URLs.
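
Rough sketch of what that hybrid playlist looks like, for anyone curious (hostnames, the Storj access key and segment durations are placeholders):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
https://cdn.example.com/bbb/seg_000.ts
#EXTINF:6.0,
https://cdn.example.com/bbb/seg_001.ts
#EXTINF:6.0,
https://link.storjshare.io/raw/ACCESS_GRANT/bucket/bbb/seg_002.ts
#EXTINF:6.0,
https://link.storjshare.io/raw/ACCESS_GRANT/bucket/bbb/seg_003.ts
#EXT-X-ENDLIST

The first couple of segments come from the CDN so the player starts fast, then the rest pull straight from the Storj linkshare URLs.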

Time will tell, though Storj are making strides to improve the service, and they have: I wasn't able to stream movies from them 2 years ago and now I am. It's been a ride and a journey with them, and hopefully someone with good ideas such as yourself can help improve on what it is today.


Absolutely, I think partly centralised + decentralised is a good option.

For the future, my benchmark tests are below & if I can make any difference I will update:

h264
CDN delivered 1min of Big Buck Bunny in 4k h264
Storj file of the same 1 minute Big Buck Bunny in 4k h264

h265 (1/3rd the size)
4K 30 seconds Big Buck Bunny H265 CDN
4K 30 seconds Big Buck Bunny H265 Storj

Goal is to get Storj (grey) to match the CDN (green). Lower is better.
h264

h265 (requires Chrome, or VLC)

Storj files deliver from a computer's HDD; the CDN delivers from a cached server at an interconnect exchange.

Encoding H265 preset

./ffmpeg -i input.mp4 -c:v libx265 -preset ultrafast -crf 28 -c:a aac -b:a 250k output.mp4

Storj may have some properties of a CDN, but it’s not a CDN in the traditional sense. Their marketing also doesn’t claim this. Compared to other storage solutions without a CDN, the performance is great. Now you could call that creative language and I would agree, but it’s not necessarily wrong.

Though some of the comparisons with large established platforms just seem unrealistic to reach for. Companies who pay massive amounts for peering agreements or even hosting cache servers at ISPs are always going to be a step ahead. But also costly. The upside is that Storj doesn’t necessarily need peering agreements because the decentralized nature avoids bottlenecks already. But they can’t possibly beat cache servers hosted at ISPs.


Servers are hosted at interchanges or exchanges at data centres for a reason, as there are only meters between all the connected servers of the world via fibre uplinks @ 100GB+.

Let’s make sure we don’t make misrepresentations of the product that go against the wording of the product.

Are you a developer? Or a holder?

Try to stay objective and follow the title. Video uses CDNs; your own server cache demo made using NGINX + a few servers would have better performance than 2+ seconds TTFB.
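
To be concrete, the kind of NGINX cache I mean is only a handful of lines; a sketch below (hostnames, paths and sizes are placeholders, not anything Storj ships):

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=video_cache:10m max_size=10g inactive=7d use_temp_path=off;

server {
    listen 80;
    server_name video.example.com;

    location / {
        # pull from the slow origin once, serve from local cache after that
        proxy_pass https://origin.example.com;
        proxy_cache video_cache;
        proxy_cache_valid 200 24h;
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}

Once a segment is warm in that cache, TTFB is basically disk + network to the viewer, which is the comparison any CDN-style claim is up against.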

Here's what I found in 1-2 minutes of looking around the website.



Node operator and enthusiast developer. I wouldn’t call myself a holder. I get paid in STORJ, but I see that as a means to an end, rather than an investment.

I guess they do mention CDN-like performance, which I agree is a stretch.

To be honest, I bother little with reading the marketing language. I’m more interested in the white papers and design docs. There are properties of CDNs that the network has (mostly when used natively), but TTFB isn’t one of them. Distribution and throughput though, yes.


@what you might want to have a look at the improvements discussed here: Regarding the «Noise over TCP (uplink to storage node)» document

Edit: new link with request for feedback here: Two new blueprints/design drafts seeking feedback: Replacing TLS with Noise and TCP_FASTOPEN

Both of those could significantly improve TTFB in the future.


A true CDN that aims to be fast for downloads around the world would probably actually try to mitigate the effect of upload races?

I agree, and suggested this in Oct of '21:


It would actually be cheaper for Storj to use R2. Which is why it’s ultra important to reduce latency for video, as R2 could not compete on TTFB if Storj were tuned for extremely fast load-from-cache times. But I guess CF CDN (free with R2) + R2 for $0.015/GB is still probably faster for TTFB and cheaper.

I’d trust the SLA & terms of storj more though, good god do CF love to boot people randomly.

I believe there would be more money in distributed CDN as a service than in storage, and Storj has a huge head start. But latency & load times for media are extremely important.

Just for example, if you’re serious about video, and you’re profitable or a media company that will be profitable, you’re thinking Akamai (Binge, Disney+, Foxtel, HBO Max, maybe Netflix in certain regions), CloudFront (integrated with S3; Netflix uses S3), etc. Probably less so Bunny, Cloudflare or Google. Maybe R2 will change that. But Disney surely is not going to move their libraries from S3 to R2 just because the egress is free; these companies are hyper-tuned for delivery, and every ms counts in tens of millions of dollars.

R2 is definitely not meant for the scale of anything close to the video platforms you mentioned. They’re really cagey about their actual limits, but you can expect to be booted if egress consistently outpaces the stored amount of data. Their free egress is also kind of shady, as they still charge per operation. And in a streaming video context, I really wonder whether that ticker goes off with every new section of video requested by the client.

Now I’m not gonna claim Storj is ready to deal with customers of that scale either. But I think there is potential to get there. Check out page 63 of the white paper. https://www.storj.io/storjv3.pdf
It describes how automatic demand-based scaling could be implemented. Since this whitepaper was written, Storj has already created functionality to direct pieces to a certain geographical area, and this could in theory be used to do location-based on-demand scaling, which could be really powerful for TTFB improvements. Especially if they are able to also start tiering nodes based on response time and put time-sensitive data like video streams on the fastest nodes in the area. That would further incentivize node operators to speed up their systems as well.

I haven’t heard them mention this feature much lately, but I’m still hoping they plan to implement it to expand the types of use cases Storj can serve. Features like this could basically give the network native CDN-like capabilities. (BTW, I sure hope the part about pausing access temporarily can be avoided. That doesn’t really sound acceptable.)


There are no egress limits under the developer terms, because the locations are 100GB+ links; they are clear about this in their Discord. Reads are Class B operations and cost next to nothing per read. It's insignificantly small; it would be dollars, maybe max hundreds of dollars at PB scale.

The caching to the CDN (also free) is where the terminology from legal might be up for interpretation, but the R2 product is not under the same CDN caching terms; it's under the developer terms. It's free without an egress limit, they (CF) say, in their terms & in their Discord, and from their CTO on Product Hunt.

But if you really push it, you might get an enterprise call. Which, to be fair, is common at PB+ scale. That's a lot of data moving.

I do hope to see Storj answer for TTFB, they’re obviously working on it.