How the network works

An alpha test is internal testing; a beta test is a test with your potential customers, with no guarantee of service. I doubt you are targeting customers who use 25GB up/down per month, they can get free service from anybody else. So open it to your potential customers like Netflix, Steam and other streaming networks, and validate the concept with them.

Thank you for your suggestion. I can assure you that we will be targeting such customers when our network is ready to support that level of traffic.

Then build a system to support them. We are no longer in early 2010, when bandwidth and storage were scarce. Look forward, not backward, unless you want to rival Nokia.

I'm not sure what exactly you base your assumptions on. How did you conclude we have no plans to support streaming services? We cannot scale our network up all at once. I invite you to check our roadmap to see which features are planned for which stage. That concludes my comments on this subject, as I do not see how continuing this discussion would be constructive.

Well, Netflix streams at 15 Mbps, which Storj may sustain for 1 or 2 customers, but Steam uploads games to my PC at 800 Mbps, and it's still quite annoying to have to wait 20 minutes to get an 80GB game. So far I have seen Storj nodes going as fast as 30 Mbps. Are they merging 30 servers in real time to supply one user? I have my doubts about that.

They actually are! When a file is uploaded to the network, it’s split into 64MB segments. Those segments are erasure encoded and turned into 130 different pieces, of which any 29 can recreate the original segment. After the first 80 of them have been successfully uploaded to nodes, the other 50 transfers are interrupted. (This is when you see an upload failed line in your node’s log.)
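
To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Go. It only uses the 64MB / 29 / 80 / 130 figures from this post; the resulting piece size and expansion factor are my own arithmetic, not official numbers.

```go
package main

import "fmt"

func main() {
	// Numbers as described above (taken from this thread, not from the
	// Storj source code): any 29 pieces rebuild a segment, the uplink
	// keeps the first 80 uploads, and 130 pieces are generated in total.
	const (
		segmentMB = 64.0
		needed    = 29  // pieces required to reconstruct a segment
		success   = 80  // uploads kept before the long tail is cancelled
		total     = 130 // pieces generated by erasure encoding
	)

	pieceMB := segmentMB / needed     // size of each piece
	storedMB := pieceMB * success     // data actually kept on nodes per segment
	expansion := storedMB / segmentMB // network-side expansion factor

	fmt.Printf("piece size:      %.2f MB\n", pieceMB)  // ~2.21 MB
	fmt.Printf("stored per seg:  %.1f MB\n", storedMB) // ~176.6 MB
	fmt.Printf("expansion:       %.2fx\n", expansion)  // ~2.76x
}
```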

On download of a segment, the customer gets a list of 35 nodes to download pieces from, and as soon as 29 finish, the other transfers are stopped (download failed in your node logs). So your guess of 30 downloads at the same time is actually surprisingly accurate.

However, there is nothing limiting the customer from downloading more than one segment at the same time. And since the next segment is likely stored on different nodes, you could raise that concurrency basically as high as you like.
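
For anyone curious what that “first 29 win, cancel the rest” pattern looks like, here’s a minimal Go sketch. The fetchPiece helper and its timings are hypothetical stand-ins, not the real uplink code:

```go
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// fetchPiece stands in for downloading one piece from one node. It is a
// hypothetical placeholder with simulated timing, not the real uplink API.
func fetchPiece(ctx context.Context, node int) error {
	select {
	case <-time.After(time.Duration(rand.Intn(500)) * time.Millisecond):
		return nil // piece arrived
	case <-ctx.Done():
		return ctx.Err() // long-tail transfer cancelled
	}
}

func main() {
	const offered, needed = 35, 29 // nodes offered vs. pieces required

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	results := make(chan error, offered) // buffered so cancelled workers never block
	for node := 0; node < offered; node++ {
		go func(n int) { results <- fetchPiece(ctx, n) }(node)
	}

	got := 0
	for i := 0; i < offered && got < needed; i++ {
		if <-results == nil {
			got++
		}
	}
	cancel() // remaining transfers show up as "download failed" on those nodes

	fmt.Println("segment reconstructed from", got, "pieces")
}
```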

Based on what I’ve seen so far, hosting game files and offering those downloads may actually be a great use case for the Tardigrade network. The whitepaper even outlines possible future upgrades to improve the CDN-like behavior of the network by increasing the number of erasure encoded pieces on the network, basically spreading the stored data over more nodes.

More info on how files are uploaded and stored on the network can be found here. I skipped a few steps in this post to keep it simple.

5 Likes

Yes, I see it makes a lot of sense on paper. In real life I’d like to see Storj feeding me a steady 800 Mbps stream on my 1 Gbps link. I’m already amazed that Steam can do it, from a Silicon Valley based company to my EU based computer. Storj has to recover data from a heterogeneous network and feed it to a limited-bandwidth user. Quite a feat.

But your very interesting answer made me look into how Storj works. The heart of the system is the satellites, which are basically disk-less file servers where the SAS interface has been replaced by an internet interface. They save on HDD cost but spend a lot more on bandwidth: a 1TB upload to a centralized server generates 1TB of bandwidth usage, while on a Storj satellite it would generate 3.7TB of traffic, 1TB incoming and 2.7TB outgoing to preserve redundancy. And since uploading to Storj is free, it’s not generating any revenue, while bandwidth and redundant high-power front ends are anything but free.

It looks to the SNOs like a distributed file storage solution, while we are just storing data for a very centralized set of satellites, currently 4 of them worldwide, about whose reliability, either hardware or financial, we have no information. It is very limited in many ways.

Don’t get me wrong, so far no one is proposing a better distributed file storage solution. I’ve been with them for 2 years and I would love to share in the profit with them. As we are both SNOs, I very much appreciate your opinions.

I think you overlooked something essential in your research. The traffic doesn’t go through the satellite. The satellite merely keeps track of where things are stored, but data is uploaded directly to storage nodes by the uplink. So there is no centralization of bandwidth like you’re suggesting. The satellite is more like a traffic controller: it tells the uplink on which nodes to store data and where to get it back, but it’s not part of the actual transfer.
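
To make that split concrete, here’s a purely conceptual sketch. The names are mine, not the actual Storj interfaces; the point is that the satellite call only returns addresses, while the bytes move directly between the uplink and the nodes:

```go
package overview

// Purely conceptual sketch of the control/data split described above.
// The type and method names are illustrative, not the real Storj/uplink
// interfaces.

// NodeAddr identifies a storage node the uplink can dial directly.
type NodeAddr string

// Satellite handles only metadata: which nodes hold which pieces of a
// segment. No customer data flows through these calls.
type Satellite interface {
	ListPieces(path string) ([]NodeAddr, error)
}

// StorageNode serves the actual bytes directly to the uplink, so
// transfer bandwidth scales with the nodes, not with the satellite.
type StorageNode interface {
	DownloadPiece(pieceID string) ([]byte, error)
}
```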

2 Likes

I think this blog post may be of interest regarding the satellite trust issue.

1 Like

Does that mean that when a customer uploads a 1TB file, they will actually upload 2.7TB to the Storj network? In that case they should be warned about it; it might increase their quota usage and reduce the upload speed.

As far as I can tell, it does. It might even be a bit more since other transfers that will eventually be interrupted will also finish at least part of the transfer in most cases.

I’d love to see Tardigrade opening up to unlimited free customers. We are in beta test; it’s the time to test worst-case scenarios. The 25GB/month limit applied to Tardigrade users is completely meaningless, and it makes me wonder whether Storj wants to discover the major flaws with paying customers.

Yes @BrightSilence, I still don’t understand how the satellites work. Are they some kind of very smart DNS giving you all the addresses of where your file could reside (then good luck recovering 800 Mbps from them), or are they, as you said, aggregating a bunch of nodes to supply your stream?

If you want to have more nodes for your file, you can configure your uplink to do so.
And when you, or users of your shared link, download the file with the same options, you will have more parallel downloads than by default.
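
Roughly speaking, what would be tuned are the erasure-coding counts discussed above. Here is a small illustrative Go sketch of the trade-off between parallelism and expansion; the field names and the “tuned” values are made-up examples, not an official uplink configuration:

```go
package main

import "fmt"

// rsNumbers holds the erasure-coding counts discussed in this thread.
// The field names and the "tuned" values below are illustrative only.
type rsNumbers struct {
	min     int // pieces needed to rebuild a segment (= parallel piece downloads)
	success int // pieces kept per segment (how many nodes end up holding it)
	total   int // pieces generated before the long tail is cancelled
}

func main() {
	defaults := rsNumbers{min: 29, success: 80, total: 130}
	tuned := rsNumbers{min: 35, success: 110, total: 130} // hypothetical example

	// Raising min increases download parallelism; raising success spreads
	// pieces over more nodes but also raises the expansion factor.
	for _, cfg := range []rsNumbers{defaults, tuned} {
		expansion := float64(cfg.success) / float64(cfg.min)
		fmt.Printf("min=%d success=%d -> parallel piece downloads=%d, expansion=%.2fx\n",
			cfg.min, cfg.success, cfg.min, expansion)
	}
}
```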

2 Likes

It’s kind of like a registry of all your files. The uplink (customer) asks the satellite for a specific file, and the satellite responds with a list of nodes that have pieces of that file. This is all very small and merely an exchange of metadata. The uplink then downloads the required number of pieces from nodes directly. Remember that for any segment, any 29 pieces will do to recreate it, assuming default settings. As @Alexey mentioned, these settings can be tuned for specific use cases. But in any case, the customer downloads the data directly from nodes in a parallel fashion and in that way is able to get much higher transfer speeds by aggregating node transfer speeds, compared to sequential downloading from individual nodes.
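
To tie that back to the 800 Mbps question earlier in the thread, here’s the rough aggregation math as a tiny Go sketch. The ~30 Mbps per node figure is just the observation quoted above, used as an assumption rather than a measurement of the live network:

```go
package main

import "fmt"

func main() {
	// Rough aggregation math using numbers from earlier in this thread:
	// 29 parallel piece downloads and roughly 30 Mbps per node.
	const (
		parallelPieces = 29
		perNodeMbps    = 30.0
	)
	fmt.Printf("aggregate per segment: ~%.0f Mbps\n", parallelPieces*perNodeMbps) // ~870 Mbps
	fmt.Println("downloading several segments concurrently multiplies this further")
}
```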

1 Like

I’m not sure you are answering me. I only want to retrieve a file quickly and not upload my files 2.7 times, like on AWS.

If you want to download your file quickly, you should store your file on more nodes than the default. 29 is the default number of pieces/nodes. The erasure settings are stored in the metadata for your file; I’m not sure whether it is possible to change this metadata without re-uploading.

Maybe Starlink should open up its system while we are still in beta. If I can download at 800 Mbps like from Steams or from the newsgroups, I would gladly report the statistics; currently I’m very doubtful.

Please, can you clarify, who is “Starlink”?

And “Steams”?

I have a feeling that we are talking about different networks.

1 Like

Sorry, I meant the customer side of Storj.