US2 Beta Test Request (Gateway MT)

We greatly appreciate the community members who have been testing our US2 Beta. We wanted to reach out and define a specific use case we would love to have further testing focused on.

S3 Backwards Compatibility
The US2 Beta includes a hosted S3-compatible gateway, so you should be able to use it with any service that supports S3. When using the hosted gateway, the erasure coding occurs server side, so a 1GB upload is only 1GB of egress from your computer, rather than the 2.78GB transferred when the erasure coding is done locally. This means you can upload to more services than before, and at a much faster rate.
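For a rough sense of the difference, Storj's published Reed-Solomon scheme needs 29 of 80 pieces to reconstruct a segment, so client-side erasure coding expands an upload by roughly 80/29 ≈ 2.76x (close to the ~2.78GB figure above, which may include small per-piece overhead). A quick sketch, assuming upload size scales with the piece ratio:

```python
# Sketch of client-side upload expansion when erasure coding locally,
# based on the 29-of-80 Reed-Solomon scheme. Assumption: bytes on the
# wire scale with the total/minimum piece ratio (per-piece framing
# overhead is ignored).
TOTAL_PIECES = 80      # pieces generated per segment
MINIMUM_PIECES = 29    # pieces needed to reconstruct a segment

def client_side_upload_gb(object_gb: float) -> float:
    """Approximate GB leaving the client when erasure coding locally."""
    return object_gb * TOTAL_PIECES / MINIMUM_PIECES

print(round(client_side_upload_gb(1.0), 2))  # ~2.76 GB vs 1 GB via the hosted gateway
```

With the hosted gateway, that expansion happens server side, so the client only sends the original 1GB.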

Please connect to services that are S3 compatible and report your findings. We are curious about any incompatibilities.

Examples

Instructions for AWS CLI
AWS CLI

Signup for US2 Beta
US2 Registration Form

2 Likes

Got a good test case? If you want more than 50GB, please reach out and I will assist.

We’d also appreciate feedback on compatibility with the Amazon language bindings. If it’s S3 compatible, it should work with our gateway.

We’re really interested in early feedback. This has been a consistent request from customers and partners who haven’t wanted to run their own gateway. This new service will dramatically increase the number of use cases we can serve with the platform and will make it a lot easier to use.

Please let us know how it works for you; all feedback is wanted and welcome!

Thanks!

2 Likes

Veeam Backup and Replication - Enterprise Plus - V11.0.0.837

Swapped out an S3 bucket for the Storj - amazonS3://gateway.tardigradeshare.io

Using Veeam Scale-out Repositories - Can only select Storj as the Capacity Tier (which is expected)

Browsing the S3 bucket, I can create directories under the buckets - which is cool, as it allows re-use in multiple backup repositories.

Ran an active backup, and it says the backup worked :slight_smile: That said, Veeam seems to be having issues with the S3 bucket capacity - it's reporting that its capacity is 1024GB, but I thought we were only given 50GB in the US2 Beta?

Backup job ran with no errors - the only comment is that the throughput is very spiky, as shown below on the graph - I would usually expect this to be smoother and flatter.

Going to leave it running hourly dev incrementals, with synthetic fulls daily, to get some stats over time, then will try doing an instant restore from S3 - if anything is going to break, that will do it :slight_smile:

Looking good though :+1:

4 Likes

Fantastic, thank you so much for testing with Veeam. Do you know what block size you are using? Ideally, use the largest size offered (Local target (large blocks), 4096KB).

AWS CLI with gateway.tardigradeshare.io

I uploaded 47.91GB and after that downloaded back 6.4 GiB = 6.87 GB.

Upload and download were fast, with no problems :+1:

But on the dashboard I see this:
(dashboard screenshot)
I didn't download 9.21GB; I downloaded 6.4 GiB = 6.87 GB.

Is there a “problem” with the download/bandwidth calculation?
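A side note on the units in that comparison: backup tools often report sizes in GiB (binary, 1024³ bytes) while billing dashboards typically use GB (decimal, 10⁹ bytes). A minimal conversion sketch:

```python
# GiB (binary prefix) vs GB (decimal prefix): the same byte count reads
# differently depending on which unit a tool reports.
def gib_to_gb(gib: float) -> float:
    """Convert binary gibibytes (1024**3 bytes) to decimal gigabytes (10**9 bytes)."""
    return gib * 1024**3 / 1e9

print(round(gib_to_gb(6.4), 2))  # 6.4 GiB ≈ 6.87 GB
```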

1 Like

Job was set to “Local Target” - I'll change it to “large blocks” and see if that changes anything. I de-dupe the jobs locally before uploading to S3 storage to save $$ on space usage, and the default local target seems to be the sweet spot for CPU load on the Veeam boxes - I could live with a spiky upload if the backup job is 50% smaller.

#edit - so increasing the block size actually made throughput lower - that's probably on my end, as I've never used 4096KB blocks. I've gone back to 1024KB, as that's what I use for S3.

1 Like

Hello,
I would like to see support for Synology Hyper Backup as it is S3 compatible.

If you want to test it, I can give you access to my Synology NAS.

Otherwise, it’s easy to do as they have a DSM online demo.

Go to http://sy.to/demositebeta and click on “Try DSM 6.2”.

Open Hyper Backup.

Select “S3-Speicher” (S3 Storage).

Select “Benutzerdefinierte Server-URL” (Custom Server URL).

Follow the instructions from BETA - Using Gateway MT with the AWS CLI - Tardigrade

Enter information in the fields “Server Address”, “Access Key”, “Secret Key”.

Then click on “Bucket-Name”. If you see your bucket names, it worked. However, it does not work for me.

Thanks for your time.

1 Like

Please specify the Server Address (Server-Adresse) as gateway.tardigradeshare.io, Signature Version (Signaturversion) v4, and your Access Key and Secret Key, then try to select a bucket name.

3 Likes

Synology Hyper Backup Feedback:

Synology DS918+ (DSM 6.2.3-25426 Update 3)
Multipart Upload Size: 512MB

I tested a folder with large files (images) and a folder with many small files.

Large files (4 files, 8.66GB):
Uploaded: capped my upload speed at 40Mbit/s.
Downloaded some files: capped my download speed at 250Mbit/s.
Integrity checks were run and worked fine, so no data loss or corruption.
Deletion of versions: this took far too long. Deleting 4 files (in chunks) took 18 min 45 sec. I tested this twice with similar results.

Small files (8241 files, 218.1MB):
Uploaded: capped my upload speed at 40Mbit/s.
Downloaded all files: worked fine again.
Integrity checks were run and worked fine, so no data loss or corruption.
Deletion of versions: 1 minute 40 secs.

Bandwidth in the Dashboard:

I only have 1 bucket.

I left the Dashboard for about 15 minutes and did nothing.

The bandwidth shown for the project and for the bucket is not the same.

Wish list:

  • Delete buckets in dashboard.

  • File explorer in dashboard.

1 Like

Currently uploading a large dataset (around 4TB) with FileZilla Pro.
Large files, 30GB or larger.
Uploads are consistently saturating my upstream bandwidth and seem to be progressing well.
Will report back in a few days when I start the downloads but first impressions are very good with none of the bursty uploads I was seeing with the native Tardigrade integration.

1 Like

But you can take a look at Vortex:

What about this problem?
Can you reproduce it?

This is working as designed.

Bandwidth Fee

Download bandwidth, also referred to as egress bandwidth, is priced per GB and metered in bytes downloaded. The price per byte is derived by dividing the per-GB price by the base-10 conversion of GB to bytes (1,000,000,000). The metered number of bytes is then multiplied by the per-byte price.

When an object is downloaded, there are a number of factors that can impact the actual amount of bandwidth used. The download process includes requests for pieces from more than the minimum number of storage nodes required. While only 29 pieces out of 80 are required to reconstitute an object, in order to avoid potential long-tail performance lag from a single storage node, an Uplink will try to retrieve an object from 39 storage nodes. The Uplink will terminate all incomplete downloads in process once 29 pieces are successfully downloaded and the object can be re-encoded. In addition, if a user terminates a download before completion, the amount of data that is transferred might exceed the amount of data that the customer’s application receives. This discrepancy can occur because a transfer termination request cannot be executed instantaneously, and some amount of data might be in transit pending execution of the termination request. This data that was transferred is billed as data download bandwidth.
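The overhead described above can be bounded with a quick calculation: if an Uplink requests 39 pieces and cancels the stragglers once 29 complete, the worst case (every requested piece transfers fully before cancellation) is 39/29 ≈ 1.34x the object size, roughly in line with the "up to 1.3 TB" figure used in the example. A sketch under that assumption:

```python
# Upper bound on billed download bandwidth from long-tail elimination.
# Assumption: all requested pieces transfer fully before the extra ones
# are cancelled (in practice cancellation lands earlier, so real
# overhead is usually lower).
REQUESTED_PIECES = 39  # pieces an Uplink requests per segment
REQUIRED_PIECES = 29   # pieces needed to reconstruct the segment

def max_download_gb(object_gb: float) -> float:
    """Worst-case GB transferred to download an object of the given size."""
    return object_gb * REQUESTED_PIECES / REQUIRED_PIECES

print(round(max_download_gb(1000)))  # a 1 TB object can bill up to ~1345 GB
```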

Example

A user downloads one 1 TB file. Due to long-tail elimination, up to 1.3 TB of download bandwidth may be used. The 1.3 TB of download bandwidth is accounted for as 1,300,000,000,000 bytes. The price per GB is $0.045, so the price per byte is $0.000000000045. The total amount charged for the egress is $58.50.

https://documentation.tardigrade.io/pricing/billing-and-payment

3 Likes

Thanks for your reply, now it's ok :slightly_smiling_face::+1:

Were you using the hosted gateway or the native integration? Appreciate you testing the beta.

Hello @Dominick,
I used the hosted gateway.

If you want to replicate the results, feel free to dm me.

1 Like