Bandwidth utilization comparison thread

Problem solved

mic drop

I find it amusing that the chart spikes right before it drops off. It’s almost like the test data machine pushed itself so hard that it died.

2 Likes

Well, there was also someone testing some of the functionality of the platform:

Those tests wouldn’t even be a blip on your node’s graph. They’re insignificant compared to total network traffic.

Almost twice as much repair ingress as usage. I wonder if the ratio of ingress repair to egress repair could somehow be used to estimate the average amount of data stored per node on the network.

1 Like

Yesterday’s numbers: my egress did a little better, but this amount of egress vs. stored data is still really low…
Maybe creating a VOD service could bump up network usage for Storj…
And it would create more “stickiness” if it were priced as, let’s say, a monthly fee…
It was probably already suggested in the ideas thread.

| Date | Ingress [GB] | Egress [GB] | Stored [GB] | Egress [‰] | Egress [kB/s] | Egress [kB/s/TB] |
|------------|-------|------|-------|------|-------|-------|
| 25.07.2020 | 9.40  | 3.67 | 1 842 | 1.99 | 42.49 | 23.07 |
| 26.07.2020 | 14.43 | 4.00 | 1 850 | 2.16 | 46.35 | 25.05 |
| 27.07.2020 | 13.41 | 4.03 | 1 870 | 2.15 | 46.60 | 24.92 |
| 28.07.2020 | 10.02 | 4.12 | 1 880 | 2.19 | 47.71 | 25.38 |
| 29.07.2020 | 9.85  | 4.40 | 1 890 | 2.33 | 50.91 | 26.94 |
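The last two columns of the table can be sanity-checked with a few lines of Python (assuming 1 GB = 10^9 bytes and 86 400 seconds per day; small differences from the table are rounding):

```python
def egress_rate_kbps(egress_gb_per_day: float) -> float:
    """Average egress rate in kB/s for one day's worth of egress."""
    return egress_gb_per_day * 1e9 / 86_400 / 1e3

def egress_rate_per_tb(egress_gb_per_day: float, stored_gb: float) -> float:
    """Average egress rate normalized per TB stored (kB/s/TB)."""
    return egress_rate_kbps(egress_gb_per_day) / (stored_gb / 1e3)

# The 25.07.2020 row: 3.67 GB egress against 1842 GB stored
print(round(egress_rate_kbps(3.67), 2))          # ~42.48 kB/s
print(round(egress_rate_per_tb(3.67, 1842), 2))  # ~23.06 kB/s/TB
```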
1 Like

Your egress is at 39 kB/s/TB; mine is hovering around 25. Nice… though still far from the desired 70, or even 50 :frowning:

This would require more CDN-like distribution of data. I recommend looking at chapter 6.1 of the whitepaper: https://storj.io/storjv3.pdf
There is a short description of future work that would be needed to serve such high-volume use cases, and to implement something that would automatically scale with demand.

In general the white paper is a pretty good read if you’re interested in how things work. (Though it may need some updates)

3 Likes

It would end up being way more expensive because of the cost of egress.

I don’t know if it’s doable but one could create a satellite dedicated to that with adjusted costs for data stored & egress so that the service makes more sense economically.

Let’s say that an hour’s worth of movie is about 5 GB in 1080p resolution. With pricing at 20 USD per TB of egress, that would be $0.10 per hour of video, quite a bit pricier than other streaming services (depending on how many movies you watch per month).
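The arithmetic, spelled out (both figures are the rough assumptions above, not official prices):

```python
# Rough per-hour streaming cost from the assumed figures above
GB_PER_HOUR = 5          # ~1080p, no HEVC
USD_PER_TB_EGRESS = 20   # assumed egress price

cost_per_hour = GB_PER_HOUR / 1000 * USD_PER_TB_EGRESS
print(f"${cost_per_hour:.2f} per hour of video")  # $0.10 per hour of video
```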

SNOs could then decide if they want to add that satellite to their bandwidth.
I personally wouldn’t be opposed to dropping the price per TB of egress if that leads to increased egress, but one would need to crunch some math to see if it’s economically viable. (Of course, if we assume that everyone is using free space on their NAS, anything would be viable for SNOs, but that’s not really the case.)

EDIT: my math was wrong…

I think you lost one “0” in the calculation: 1 TB is 1000 GB (not 100 GB). Also, 5 GB is more like 720p; unless it’s HEVC compression, a 1080p film is ~8-10 GB, so the cost would be ~20 cents per film. Pretty feasible, if we don’t factor in the licence costs.

1 Like

Well… what caught my eye is being able to serve such high volumes: do the least amount of work for as much reward as possible; be greedy and lazy :wink: .
I think (from my newbie perspective) that spreading films as short bursts of files (which is how streaming is done, to my green eyes) would require some kind of client for the viewer. And Storj is already spreading files by chopping them up across many nodes, isn’t it? (Not sure exactly how it’s done; will read and see.)
Thanks for the read… will check it out.

You can already upload video to https://alpha.transfer.sh and stream it from there as well. The problem is not the streaming; that was actually taken into account from day one. (Keep in mind this is still an alpha implementation.)

The challenge is in streaming it to thousands of people at the same time. If you wanted to do that, it would overwhelm the nodes storing that data. The architecture actually makes it possible to raise the number of RS pieces that are stored on the network, but as of now there is no ability to scale that with demand. Right now the RS settings for minimum/repair/success/max are, I believe, 29/52/80/110, but let’s focus on 29/80: 29 pieces are needed out of 80 available. If you could raise those 80 available pieces to 1000, you could spread the download across more nodes and serve such high-demand cases as well. Satellites could use a process similar to repair to create more pieces when demand goes up. But of course, sudden spikes would be really hard to deal with, as that expansion in availability takes some time.
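A back-of-the-envelope model of why raising the “available” count helps: each viewer needs 29 pieces from distinct nodes, so more available pieces means more disjoint sets of nodes to download from. (The `streams_per_node` capacity below is a made-up illustrative number; the 29 comes from the 29/80 settings mentioned above.)

```python
def max_concurrent_streams(pieces_available: int,
                           pieces_needed: int = 29,
                           streams_per_node: int = 5) -> int:
    """Each viewer pulls `pieces_needed` pieces from distinct nodes, so a
    file with `pieces_available` pieces can fan out across roughly
    pieces_available // pieces_needed disjoint node sets."""
    return pieces_available // pieces_needed * streams_per_node

print(max_concurrent_streams(80))    # today's default availability
print(max_concurrent_streams(1000))  # the hypothetical expansion above
```

Even in this crude model, going from 80 to 1000 available pieces multiplies the serving capacity for a hot file by more than an order of magnitude.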

1 Like

It would probably need some kind of attribute describing a file’s maximum number of copies (rather than a global value),
and some statistics/trend-tracking that can react to increased requests from clients.

Well… Storj could become a YouTube competitor :scream:, or some kind of educational video store where people do VOD (smaller initial demand for flexibility and availability).

It could, especially if you could build such a platform to predict demand and preemptively upload many more pieces to account for it. I mean, YouTube may be aiming a bit high to start out with :wink: . But yeah, those kinds of use cases could definitely be supported with some tweaks to the network. It does require some changes to be implemented, but it seems to me the core functionality is there.

I guess I’ll throw my two cents in, as I used to work for a company that did kind of a Plex/Kodi-like thing, before Plex/Kodi got big, and had a “we do everything” ecosystem to it (storage, video ingest, A/V management & tagging, smart transcode, responsive-design web app for tablet & phone).

VoD is a fun, high-throughput thing that actually harks back to the days of dedicated hardware at someone’s house doing all of that.

One of the biggest things you’ll find is that when you want to stream a video, you’d like to stream it in a natively playable format if possible, and within 75-85% of the total possible bandwidth window or less for a given resolution and encoding. This is to ensure maximum compatibility with the playing device while neither degrading the quality too badly nor pushing unnecessary bits down the wire. Netflix does a lot of this by storing many copies of the same movie, one in each respective native format.

For example, for an iOS device I would encode a TS stream, mp4, and probably shoot for 750K-1.5M chunked files, as these can be played natively and are not overly large to push out through a transcoder with modern hardware. For Android, I might do WebM/MKV, or if it’s pre-transcoded, a “dead” mp4. You might be thinking this takes a ton of hardware, but we were able to do it on an earlier-generation Atom processor (be that the Cedarview or Pineview arch) per stream or two. The server handled some of the storage, if there was no NAS as part of the setup, plus the UIs and management tools, but the players would take care of the streams, as they were much more capable.

So… OK, you store the top 3-4 most common formats&transcodes and you call it a day there- next is the egress bandwidth that was touched on.

Examples will do best on this point: on a yacht, the most we saw was just below 1 Gbps total aggregate throughput, sustained, due to the number of active players going (>10 players, 720p24/1080p24 mixed). The low side is about 1.5-2.4 Mbps for a single stream being played locally at 720p24.

So let’s do some napkin math here: we’ve got 8k plays per month (~267 plays per day), 4 pre-transcoded versions (1080p@4Mb | 720p@2.5Mb | 480p@1.5Mb | 480p@0.9Mb), and say about 45% of 200 different uploads are going to be viewed per month, with 720p being the most commonly viewed format… you’re looking at about 3 TiB of storage and 6-7 TiB in total bandwidth, which at Storj prices would be about $21.90 in storage and $315 for egress through Tardigrade. So for about $337/m you might be able to spin up a micro VoD service, sure.
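The egress side of that napkin math can be re-run in parameterized form. The average watch time and the egress price below are assumptions I reverse-engineered to roughly reproduce the 6-7 TiB / $315 figures, not official numbers:

```python
# Parameterized version of the egress napkin math above (all assumptions)
PLAYS_PER_MONTH = 8_000
AVG_BITRATE_MBPS = 2.5   # 720p, the most commonly viewed format
AVG_WATCH_MINUTES = 45   # assumed effective watch time per play
EGRESS_USD_PER_TB = 45   # assumed egress price

bytes_per_play = AVG_BITRATE_MBPS * 1e6 / 8 * AVG_WATCH_MINUTES * 60
egress_tb = PLAYS_PER_MONTH * bytes_per_play / 1e12
print(f"{egress_tb:.2f} TB egress, ~${egress_tb * EGRESS_USD_PER_TB:.0f}/month")
```

Swap in your own watch time, bitrate mix, and prices to see how quickly the egress bill dominates the storage bill.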

4 Likes

My nodes are back up after the ISP switch (finally). I’ll start posting my stats again tomorrow after I have a full day’s worth.

3 Likes

Looks like this little node will earn about $4.87 USD for the month.

1 Like

$45 here, but that just about covers the electricity costs… :smiley: Then again, the node will only be 5 months old in about a week… and of course half of the 45 is held back,
so only half is actually paid out for now… so still very much in the hole…

And that’s not even counting wear, internet usage, my time… but let’s call it an experiment. Because the number of drives is still very small, it won’t give a proper payout… after all, this setup should be able to run hundreds of drives. But maybe I’ll replace the server, because I’m pretty sure it’s actually my insane server fans running full tilt that are pulling 2/3 of the power…

And really, if I get like 100 drives, then I might be down to 3 W per HDD. Realistically I could do a series of disk shelves, maybe 10 × 36 bays, so 360 drives… then the electricity overhead would be 1/10th of the electricity cost per HDD…

But of course at that point we run into the whole “the hardware costs more than the electricity” issue, because hard drives are fairly expensive to buy and fairly cheap to run…

Say an HDD can hold out 6 years running 24/7 before it’s basically dead in most cases.
That’s about 7 W × ~8 760 hours in a year, so ~61 kWh,
and the price here is about $0.33/kWh, so ~$20 a year and ~$120 over the HDD’s full lifetime…
So yeah, electricity isn’t the expensive part when doing Storj…
in theory…
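That HDD electricity arithmetic, spelled out (all figures are the rough assumptions above):

```python
# Lifetime electricity cost of one always-on HDD (assumed figures)
WATTS_PER_HDD = 7
HOURS_PER_YEAR = 8_760   # 24/7 operation
USD_PER_KWH = 0.33
LIFETIME_YEARS = 6

kwh_per_year = WATTS_PER_HDD * HOURS_PER_YEAR / 1000
usd_per_year = kwh_per_year * USD_PER_KWH
print(f"{kwh_per_year:.1f} kWh/yr, ${usd_per_year:.2f}/yr, "
      f"${usd_per_year * LIFETIME_YEARS:.2f} over {LIFETIME_YEARS} years")
```

At these rates the lifetime electricity (~$121) is in the same ballpark as the purchase price of a mid-size drive, which is the point being made above.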

1 Like

Unless you have a server with screaming fans like you hahaha

1 Like