Nexus Mods is dealing with degraded performance due to demand spikes from the sudden surge in Fallout popularity. Can Storj be their solution?

*When downloading segments in parallel.
That’s the big caveat for that statement. If the gateway currently doesn’t do that, we have the culprit. And I still think this is a lot more likely than the connection between gateway and client being the issue.

Whether it’s the end user, download managers, or internal browser features.

Are the gateways hosted on google cloud?

Well, there’s the Bandwidth Alliance; maybe Storj can work out a deal with its members.

None. This is a client-side option, unfortunately. You need to request several segments in parallel to increase speed, because the S3 gateway takes data from the network segment by segment; otherwise it would be hard to seek inside a video file, for example. When a client downloads segments in parallel, it will likely saturate either the gateway or your connection.
So it seems parallel downloads may conflict with that feature, if you want to be able to seek inside the file without a full download.
Thus I suspect this may never be implemented, to avoid breaking the other popular functionality, so I guess using a downloader is the best solution for big-file downloads so far.
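For illustration, here is a minimal sketch (my own naming, not uplink or gateway code) of what such a parallel downloader does, assuming the endpoint honors HTTP `Range` requests for plain GETs:

```python
# Sketch of a parallel "downloader": split the object into byte ranges and
# fetch each range concurrently with an HTTP Range request, then reassemble.
# The URL, part size, and worker count here are placeholders.
import concurrent.futures
import urllib.request


def split_ranges(total_size, part_size):
    """Yield (start, end) inclusive byte ranges covering the whole object."""
    for start in range(0, total_size, part_size):
        end = min(start + part_size, total_size) - 1
        yield (start, end)


def fetch_range(url, start, end):
    """Fetch one byte range of the object."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()


def parallel_download(url, total_size, part_size=64 * 1024 * 1024, workers=8):
    """Download the object with several ranges in flight at once."""
    buf = bytearray(total_size)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch_range, url, s, e)
                   for s, e in split_ranges(total_size, part_size)]
        for fut in concurrent.futures.as_completed(futures):
            start, data = fut.result()
            buf[start:start + len(data)] = data
    return bytes(buf)
```

This is essentially what download managers do under the hood: many concurrent range requests instead of one sequential stream.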

I found this improvement on our roadmap though:

It should reduce or remove the gaps between segment requests, which in turn should speed up single-threaded downloads too.

Perhaps iXsystems could use a downloader in TrueNAS for updates. However, even now it’s faster than it was before they used Storj.

As I said: with downloaders. Any of them.
By “downloading several segments in parallel” to increase speed I mean exactly this, and it is the answer to your question of how to saturate your connection using the linkshare link (or a presigned URL directly from the gateway; it doesn’t matter). Even using uplink without parallelism will not give you much more speed than the gateway.
See the proof:

If you are also able to increase the segment size and/or the number of parallel segment downloads, speed will be limited only by your hardware.

You need to use a downloader which can request regions of the file (segments) in parallel, then combine them into the whole file on your filesystem.

I’m not following the logic of it.

A file is made up of pieces 1-100, which are in turn made up of segments 1-100. (let’s go with a round number just for argument’s sake).

I as a client start requesting my file starting with piece 1. I’m expecting the gateway to request the 100 segments in parallel to “build” that piece that I’m requesting before sending it back to me (this translation is the actual role of the gateway, is it not?). What I suggested we could improve is that instead of waiting for me to request piece 2 before actually going out, fetching the 100 segments associated with it, build it, and start sending, we could instead anticipate that “if the client requested piece 1, he is very likely to request piece 2 as well”. Prefetch the necessary segments to build piece 2, build it and have it ready when the client requests it.

If I’m misunderstanding something, please correct me.

Choose a server in Australia or South America. Will it still do so?
These CDNs put their servers near you, maybe in your town; Storj is distributed globally. A single-threaded download would likely have the same speed from any point in the world, and if you can increase parallelism, it will be fast anywhere too, without additional costs.

If you compare with a BitTorrent download, then you must use a downloader on the link to get comparable speed.
And a native connection will not help if you use only one thread, as shown above.

But how did transfer.sh do it then?
It seems to be offline now, but I remember it having very good download speed. I even received feedback from other people I referred to the service that they were impressed by the speed. AFAIK they did store on Storj.

The gateway will not request segments 2-100 in parallel; that’s the client’s responsibility. If your client does not request them in parallel, the gateway will not either.

What you are suggesting is likely possible to implement, but it would make the gateway incredibly expensive: it would need at least enough storage to keep all that data locally, plus the added load on CPU and RAM. Now multiply that by hundreds of thousands of requests. I do not believe it would be implemented.

Any site may do so, but they need to implement a multipart download on their side.
If you set up a CNAME to our linksharing instead of your own hosted server, you will not get the same result unless you use a multipart downloader.

You misunderstood me. I’m not saying prefetch the entire file (all pieces of it 100x100 segments). I’m saying prefetch only piece #2 (= 100 segments associated with that piece only) in anticipation of the client requesting it. If the client cancels a download, the next piece is wasted. If not, the piece is ready to be sent without any further delays. When piece 2 is starting to send, prefetch piece 3 (the next 100 segments) and so on.
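A toy sketch of that one-piece-ahead idea (with `fetch_piece` standing in for whatever assembles a piece from its segments; none of these names come from the gateway code):

```python
# Prefetch-one-ahead: while the current piece streams to the client,
# the next one is already being fetched in the background.
import concurrent.futures


def read_ahead(fetch_piece, piece_ids):
    """Yield pieces in order, always keeping the next one in flight."""
    ids = list(piece_ids)
    if not ids:
        return
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch_piece, ids[0])
        for next_id in ids[1:]:
            current = pending
            pending = pool.submit(fetch_piece, next_id)  # prefetch the next piece
            yield current.result()  # serve current while the next one downloads
        yield pending.result()  # the last piece has nothing after it
```

If the client cancels, at most the one prefetched piece is wasted, which is the trade-off being proposed.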

It could be considered as a future improvement, and the roadmap item mentioned above is likely about exactly that.

My point is that to increase download speed you need to either use a downloader with parallelism or set up a server which does the multipart download for you, proxying the resulting file over a network close to your browser.
That server would be your personal CDN.

If you want a less costly solution, the downloader sounds like the simpler and cheaper option.

Not sure why setting a browser flag wouldn’t be an option here… the user can do that.

I think it’s the opposite, i.e., a segment is up to 64 MB in size and consists of pieces of roughly 2.3 MB each. But let’s go with your naming…

…and then you get HTTP requests cancelled midway, while the satellite has already started downloading all parts of the file.

This is then a perfect opportunity for an amplification attack: a “client” uploads a large file, say hundreds of gigabytes, then pretends to fetch it many times, cancelling after a few megabytes. The Storj satellite effectively works as an amplifier, triggering many times the attacker’s own bandwidth usage.

Now there’s an additional question: will Storj pay for these downloads if the client cancelled after the first few megabytes? If yes, then Storj is back to paying more for bandwidth than they can charge for. If no, node operators strike.

The customer is charged for any usage confirmed and signed by libuplink and by the node, even if the download of the whole object is not finished.
libuplink allocates some bandwidth, then starts the download. If it’s canceled, it’s settled for what was actually used; the node will have an order with that settlement, sign it too, and send it to the satellite. The satellite will charge the customer for the settled order and pay the node for the used egress.
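In byte terms, the flow described above boils down to something like this toy calculation (the numbers and variable names are mine, not real protocol structures):

```python
# Toy illustration of allocated vs. settled bandwidth for a canceled download.
# Bandwidth is allocated up front, but the order is settled for actual usage only.
ALLOCATED = 64 * 1024 * 1024       # bytes allocated for the segment up front
transferred = 5 * 1024 * 1024      # client canceled after ~5 MiB

settled = min(transferred, ALLOCATED)   # the node's signed order covers this much
charged_egress = settled                # satellite bills the customer these bytes
paid_egress = settled                   # ...and pays the node for the same bytes
```

So the cancelled-download “amplification” still gets billed: the attacker pays for every settled byte, even if the allocation was much larger.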

Thank you for your suggestion. This thread has been passed to the team.

That will be decided by the sales team. So, thanks for your thought.

There are a lot of solutions for every problem, so I would hope it won’t be missed.

Good Lord it’s good to have dissenting voices like yours but blimey, you come across angry all the time!

He has a green avatar, and it always forces me to think of another thing that is green, big, and should not be fed. He also has something against Storj (I do not know what or why), but sometimes he passes along great ideas, so I force myself not to think about that other “green, big, and…” thing.

Sorry!!!