Upload/Download speeds

I was wondering if there has been any official or unofficial benchmarking of the standard Uplink CLI vs. S3 upload and download to Storj.

For download, is the added value of Fastly that clients aren't limited to the S3 gateways provided by Storj? Are there any plans to add a client-side JS library so that users can access Storj downloads from the web?

Thanks!

Here is a thread about benchmarking:

You can use the Linksharing Service or create share URLs with the Uplink CLI.
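
For example, a share URL can be created straight from the CLI; the bucket and object below are placeholders, and the expiry is optional:

```
# Placeholder bucket/object; substitute your own.
# Registers the access with the edge services and prints a Linksharing URL.
uplink share --url --readonly --not-after=+24h sj://my-bucket/video.mp4
```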

Specifically on this topic: if you are asking for individual client browsers to download directly from storage nodes, that has been discussed before - JS library for the browser - #7 by jtolio

TL;DR - there are currently technical limitations due to the browser, the storage nodes, and trust/certificates.

This is helpful from a system performance perspective.

How about bandwidth speed? My files are small, < 50 MB each. I am currently uploading two files in parallel to S3. Is there an S3 proxy speed degradation compared to connecting to the satellites directly with Uplink?

I would expect an S3 proxy speed degradation, since it's an extra layer on top of Uplink. It might be negligible, though. @Dominick might have more experience with that.
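
If you want a quick read on the difference for your own workload, a rough A/B timing is easy to put together (paths and the rclone remote storj-s3 are placeholders; this assumes an uplink access and an S3 remote are already configured):

```
# Same ~40 MB file, native Uplink vs. the hosted S3 gateway via rclone.
time uplink cp ./sample.bin sj://my-bucket/sample.bin
time rclone copyto ./sample.bin storj-s3:my-bucket/sample.bin
```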

Only if the user is not bottlenecked by having to upload all stripes.

Performance depends first and foremost on what you are transferring. Today there is no perfect tool covering all situations.

Uploading
I tend to use rclone (native or hosted) for uploading most of the time.

  • 1000x <64 MB file upload. If you want to upload 1000 small files very fast, we have two options. The best option would be rclone via gateway-mt (option 4, then option 13, any other S3 compatible provider). Tune --transfers up until you saturate your upload bandwidth; do not increase past this point, as that just creates overhead. See the sketch after this list.

  • 1000x <64 MB file upload, native alternative. As above, you can upload natively if you have the compute and bandwidth to cover the 2.68x data expansion from erasure encoding. Select option 34, Tardigrade Decentralized Cloud Storage. This will be slower unless you have a lot of bandwidth and CPU cores. The big benefit is that you are doing the encryption client-side.

  • For big files, tune --concurrency as noted in the hot-rodding article and limit --transfers to as few as 1. Be sure to set the chunk size: --s3-chunk-size 64M.
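
A minimal sketch of those three cases, assuming two preconfigured rclone remotes called storj-s3 (hosted gateway-mt) and storj-native (native integration); all names, paths, and numbers are placeholders to tune for your own link and CPU:

```
# 1) Many small files via gateway-mt: raise --transfers until your upload
#    bandwidth is saturated, then stop.
rclone copy ./small-files/ storj-s3:my-bucket/ --transfers 32 --progress

# 2) Native alternative: client-side encryption, but ~2.68x upstream
#    traffic from erasure encoding, so it needs bandwidth and CPU.
rclone copy ./small-files/ storj-native:my-bucket/ --transfers 8 --progress

# 3) A few big files via the gateway: one transfer at a time, 64M chunks,
#    and per-file upload concurrency (the knob the hot-rodding article tunes).
rclone copy ./big-files/ storj-s3:my-bucket/ \
  --transfers 1 --s3-chunk-size 64M --s3-upload-concurrency 16 --progress
```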

Downloading

For downloading I use Uplink when it's a single large file or a few large files, and rclone when it's a bunch of files. With Uplink you will again be CPU limited, so scale --concurrency to 1.5-2x your CPU core count (not threads). If you have a big server you might be able to be slightly faster than our hosted S3 endpoint. We are talking multi-Gb/s, no issues.
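
For example (object names and the remote storj-s3 are placeholders; 16 assumes roughly 8 physical cores, and depending on your uplink version the per-file flag may be spelled --parallelism rather than --concurrency):

```
# Single large file over native Uplink, parallelized across segments.
uplink cp --parallelism 16 sj://my-bucket/big-file.tar ./big-file.tar

# A bunch of files: let rclone fan out across objects instead.
rclone copy storj-s3:my-bucket/ ./downloads/ --transfers 32 --progress
```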
