Upload/Download speeds

I was wondering if there has been any official or unofficial benchmarking of the standard Uplink CLI vs. S3 upload and download to Storj.

For download, is the added value of Fastly that clients aren't limited to the S3 gateways provided by Storj? Are there any plans to add a client-side JS library so that users can access Storj downloads from the web?

Thanks!

Here is a thread about benchmarking:

You can use the Linksharing Service or create share URLs with the Uplink CLI.

On this topic specifically: if you are asking about individual client browsers downloading directly from storage nodes, it has been discussed before - JS library for the browser - #7 by jtolio

TL;DR - there are currently technical limitations involving the browser, the storage nodes, and trust/certificates.

This is helpful from a system performance perspective.

How about bandwidth speed? My files are small, <50 MB each. I am currently uploading two files in parallel to S3. Is there speed degradation from the S3 proxy compared to connecting to the satellites directly with Uplink?

I would expect some S3 proxy speed degradation, since it's an extra layer on top of Uplink. It might be negligible, though. @Dominick might have more experience with that.

Only if the user is not bottlenecked by having to upload all stripes.

Performance depends on what you are transferring first and foremost. Today there is no perfect tool covering all situations.

Uploading
I tend to use rclone (native or hosted) for uploading most of the time.

  • 1000x <64 MB file upload. If you want to upload 1000 small files very fast, we have two options. The best option is rclone via Gateway-MT (option 4 and option 13, any other S3-compatible provider). Tune --transfers up until you saturate your upload bandwidth; do not increase it past that point, as it just creates overhead.

  • 1000x <64 MB file upload, native alternative. As above, you can upload natively if you have the compute and bandwidth to cover the 2.68x data expansion from erasure encoding. Select option 34, Tardigrade Decentralized Cloud Storage. This will be slower unless you have a lot of bandwidth and CPU cores. The big benefit is that you are doing the encryption client-side.

  • For big files, tune --concurrency as noted in the hotrodding article and limit --transfers to as low as 1. Be sure to set the chunk size: --s3-chunk-size 64M.
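As a rough sketch of the two gateway upload shapes described above (the remote name `storj-s3:` and the flag values are placeholders to tune against your own link, not recommendations):

```shell
# Many small files: raise --transfers until the uplink is saturated.
#   rclone copy ./photos storj-s3:photos --transfers 16

# Few big files: low --transfers, higher per-file concurrency, and a
# chunk size matching Storj's 64 MB segments.
#   rclone copy ./video storj-s3:video --transfers 1 \
#     --s3-upload-concurrency 8 --s3-chunk-size 64M

# With 64 MB chunks, a 1 GiB file is split into this many parts:
FILE_MB=1024
CHUNK_MB=64
echo "$(( FILE_MB / CHUNK_MB )) chunks per 1 GiB file"
```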

Downloading

For downloading I use Uplink when it's a single large file or a few large files, and rclone when it's a bunch of files. With Uplink you will again be CPU-limited, so scale --concurrency to 1.5-2x your CPU core count (not threads). If you have a big server you might even be slightly faster than our hosted S3 endpoint. We are talking multi-Gb/s, no issues.
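A back-of-the-envelope sizing for that download concurrency, using the 1.5-2x multiplier quoted above (the core count here is just an example value, not a measured one):

```shell
CORES=8                     # physical cores, not hardware threads
LOW=$(( CORES * 3 / 2 ))    # 1.5x core count
HIGH=$(( CORES * 2 ))       # 2x core count
echo "with ${CORES} cores, try --concurrency between ${LOW} and ${HIGH}"
```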


For interest's sake, I thought I'd post the relative speed differences (based on time used) that I experience between Backblaze B2, rsync.net, and Storj (using the S3-compatible gateway). The transfers all start at the same time on different evenings. My uplink is only 10 Mbps.

B2:

Cron <root@nas1> /root/backup_scripts/rclone_B2.sh (root <root@nas1.localdomain>)

****===== Starting rclone sync of home_bkp to BackBlaze B2 =====******
            Mon Feb 28 03:20:01 UTC 2022
****===== Starting rclone sync of pictures to BackBlaze B2 =====******
            Mon Feb 28 04:17:15 UTC 2022
****===== Starting rclone sync of archive to BackBlaze B2 =====******
            Mon Feb 28 04:37:58 UTC 2022
****===== Starting rclone sync of documents to BackBlaze B2 =====******
            Mon Feb 28 04:38:01 UTC 2022

Rsync.net:

Cron <root@nas1> /root/backup_scripts/rclone_RSYNCNET.sh (root <root@nas1.localdomain>)
****===== Starting rclone sync of home_bkp to RSYNCNET-encrypted =====******
            Wed Mar  2 03:20:01 UTC 2022
****===== Starting rclone sync of pictures to RSYNCNET-encrypted =====******
            Wed Mar  2 04:29:50 UTC 2022
****===== Starting rclone sync of archive to RSYNCNET-encrypted  =====******
            Wed Mar  2 04:31:26 UTC 2022
****===== Starting rclone sync of documents to RSYNCNET-encrypted =====******
            Wed Mar  2 04:31:35 UTC 2022

Storj (using S3 compatible):

Cron <root@nas1> /root/backup_scripts/rclone_STORJ-S3.sh (root <root@nas1.localdomain>)
****===== Starting rclone sync of home_bkp to Storj.io =====******
            Fri Mar  4 03:20:01 UTC 2022
****===== Starting rclone sync of pictures to Storj.io =====******
            Fri Mar  4 05:56:44 UTC 2022
****===== Starting rclone sync of archive to Storj.io =====******
            Fri Mar  4 09:10:52 UTC 2022
****===== Starting rclone sync of documents to Storj.io =====******
            Fri Mar  4 10:48:31 UTC 2022

How should your results be read?

Is rsync.net the fastest (except for “pictures”), then B2, with Storj by far the slowest of the three?

That’s correct. B2 and rsync.net usually take about the same duration; Storj consistently takes much longer to upload. The rclone options used do vary slightly between Storj and the others, as I was given some tips on optimizing Storj uploads. For example:

B2:

rclone sync --quiet --syslog --bwlimit 512k:off --transfers 4 --fast-list --b2-hard-delete --delete-during /tank/personal/pictures B2-encrypted:/pictures

Storj:

rclone sync --quiet --syslog --bwlimit 512k:off --transfers 4 --s3-upload-concurrency 1 --s3-chunk-size 64M --delete-after /tank/personal/pictures STORJ-S3-encrypted:pictures
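One practical consequence of those flags: rclone's S3 backend buffers each in-flight chunk in memory, so peak upload buffer usage is roughly chunk size x upload concurrency x transfers. A rough estimate for the Storj command above:

```shell
CHUNK_MB=64      # --s3-chunk-size 64M
CONCURRENCY=1    # --s3-upload-concurrency 1
TRANSFERS=4      # --transfers 4
echo "~$(( CHUNK_MB * CONCURRENCY * TRANSFERS )) MB of upload buffers"
```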

Is that enough to fully use your 10 Mbps upload? Just out of curiosity.

I’m limiting the rclone bandwidth to 50% of the ISP-provided 10 Mbps so that the other applications continue to function (including a Storj node), and based on my monitoring it fully utilizes the 5 Mbps.
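To put that 5 Mbps cap in perspective: here is the per-file transfer time for the <50 MB files mentioned earlier, via the gateway and natively with the 2.68x erasure-coding expansion (both figures taken from earlier in the thread; awk handles the floating-point math):

```shell
FILE_MB=50
RATE_MBPS=5
awk -v mb="$FILE_MB" -v rate="$RATE_MBPS" 'BEGIN {
  secs   = mb * 8 / rate   # via the S3 gateway: bytes sent as-is
  native = secs * 2.68     # native uplink: erasure-coded expansion
  printf "gateway: ~%d s, native: ~%d s per 50 MB file\n", secs, native
}'
```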


If you want to check your upload and download speed, you can easily check online at speedtest.net.

The speed in the network depends on the speed between the customer and the particular node; it's almost unrelated to a generic speed test.
You also cannot be close to every customer in the world, so that measurement is not useful for Storj.


Which gateway were you using?

I was using https://gateway.us1.storjshare.io

Just as a side thought: I tried uploading a 4K video file to Storj, and it's considerably slower than Google Docs or Cloudflare Stream. Incredibly slow, in fact. Because of course it would be: when you upload to Google or Cloudflare, you're using a hyper-tuned server built for ingesting data.

I would prefer the marketing team here didn't make the website seem like this was an S3 + CloudFront killer for video.

It’s better to use rclone or the Uplink CLI instead of the browser.
See also Hotrodding Decentralized Storage
