A service professional SNOs could provide to customers who want to transfer huge amounts of data onto Tardigrade

What are these repair costs? Is there a way to quantify them? I understand normal repair costs to mean that pieces have to be downloaded from SNOs, which get paid for that.
But that is not the case here: the upload agent already has all pieces locally, and uploading to Tardigrade should not come with costs.
Only reconstruction comes to mind, which probably means CPU resources plus disk space.
Am I overlooking something?

Unfortunately, the upload to the network is not free in most cloud computing environments.

If you mean that the service provider should act as a repair worker, that is not desirable, because the worker requires direct access to the satellite’s database and thus must be trusted by the Tardigrade satellite (and hold its keys).
The other way is to act as an uplink on behalf of the customer; in that case the service provider must have the customer’s keys (or an access grant).
Neither way is good: both require trust, and both become much more complicated once all the documents related to data security are involved. It’s better to avoid any trust requirements between the parties.

The simplest workable solution would be to upload 35 pieces to the network and let the repair workers do their job for repair costs, or to upload all required 80 pieces and not pay that fee.
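
For a rough idea of what those two options mean in bandwidth terms, here is a minimal sketch using the commonly cited Reed-Solomon defaults (29 pieces needed to reconstruct a segment, 35 as the repair threshold, 80 as the success threshold) and the 64 MiB default segment size; the actual satellite settings may differ.

```go
package main

import "fmt"

func main() {
	const (
		segmentMiB = 64.0             // default max segment size
		pieceMiB   = segmentMiB / 29  // each piece is 1/29 of the segment
		repairMin  = 35               // repair threshold
		success    = 80               // success threshold
	)

	fmt.Printf("piece size:               %.2f MiB\n", pieceMiB)
	fmt.Printf("upload 35 pieces:         %.1f MiB per segment (repair fills the rest)\n", repairMin*pieceMiB)
	fmt.Printf("upload 80 pieces:         %.1f MiB per segment (no repair needed)\n", success*pieceMiB)
	fmt.Printf("expansion factor (80/29): %.2fx\n", float64(success)/29)
}
```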

I think that’s a very good summary. Easy to understand, straightforward (even cost-wise).
I think it’d be a nice thing too :slight_smile:

Only one problem remains: the path to the bucket and the account (API key). The pieces themselves do not contain such info.
So we can’t avoid giving the agent an access grant to upload those pieces to the bucket. But in the case of pre-encrypted pieces we can give it only temporary write access, without exposing even the derived encryption key.
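
As a sketch of what such a restricted grant could look like with the storj.io/uplink Go library (the bucket name, prefix and 7-day expiry below are made-up example values; the exact restrictions would have to be tuned to the workflow):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"storj.io/uplink"
)

func main() {
	// Placeholder: the customer's root access grant (never handed to the agent).
	rootGrant := "1A...serializedRootAccessGrant"

	access, err := uplink.ParseAccess(rootGrant)
	if err != nil {
		log.Fatal(err)
	}

	// Derive an upload-only grant, limited to one bucket/prefix and expiring
	// after 7 days; no download, list or delete permissions.
	restricted, err := access.Share(
		uplink.Permission{
			AllowUpload: true,
			NotAfter:    time.Now().Add(7 * 24 * time.Hour),
		},
		uplink.SharePrefix{Bucket: "ingest", Prefix: "lto-migration/"},
	)
	if err != nil {
		log.Fatal(err)
	}

	serialized, err := restricted.Serialize()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("grant for the upload agent:", serialized)
}
```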

Here is a nice AWS case study for such a service (which really seems to be needed to get big data into the cloud):

As the DAM system was being implemented, the Rock Hall was undertaking a project to modernize its aging LTO storage and offsite backup for the preservation of large digital files, by moving the files to the cloud. Many of the LTO tapes were not easily accessible due to hardware and software failures and onsite storage limitations. Parnin says, “As the tech landscape progressed from the non-digital to digital, our archival storage system had become unmanageable and unsustainable.”

Using Amazon S3 and S3 Glacier Deep Archive provided the Rock Hall with the confidence that its digital media would be preserved and easily accessible at an affordable price. However, the Rock Hall still needed to recover the data on the LTO tapes. Working with AWS, the project team ingested the files into S3 Glacier Deep Archive via six AWS Snowball Edge Storage Optimized devices. Using AWS Snowball Edge helped to address common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns.

The Rock Hall worked with Tape Ark and its strategic partner Seagate Technology (Seagate Powered by Tape Ark) to extract all the data from the LTO tapes and load it onto the AWS Snowball Edge devices. Tape Ark then sent the Snowball devices loaded with data back to AWS, and the data went right into the Rock Hall’s Amazon S3 bucket. Once the Rock Hall’s digital media was in Amazon S3 on the AWS Cloud, Amazon S3 lifecycle policies were set up to automatically move the media files into S3 Glacier Deep Archive to help optimize storage costs for files that were rarely accessed. For the Rock Hall, the process was effortless, with Tape Ark managing the end-to-end migration.

@Alexey: I don’t know if the new Gateway MT could help with that. But if you read the above case study, I believe a solution is really needed to get big data on board. Nobody in their right mind would even try to upload 300 TB x 2.7 = 810 TB over a normal internet connection. Even with an SDSL fibre connection with 1 Gbit upload speed, which is among the fastest you can get here, it would take 75 days.
From my point of view there needs to be a way to prepare everything locally on disks, send them to a Storj Labs approved data center, and have them uploaded within a couple of days or even hours.
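
For reference, a quick back-of-the-envelope check of that 75-day figure, assuming a fully saturated 1 Gbit/s uplink and ignoring protocol overhead:

```go
package main

import "fmt"

func main() {
	const (
		dataTB    = 300.0 // customer data
		expansion = 2.7   // client-side erasure-coding expansion
		linkGbps  = 1.0   // upload link speed
	)
	totalBits := dataTB * expansion * 1e12 * 8 // TB -> bytes -> bits
	seconds := totalBits / (linkGbps * 1e9)

	fmt.Printf("total to send: %.0f TB\n", dataTB*expansion) // 810 TB
	fmt.Printf("transfer time: %.0f days\n", seconds/86400)  // ~75 days
}
```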

Yes, Gateway MT may help here, but only for the service provider, since their bandwidth would be utilized only 1x instead of 2.7x.
Everything else remains the same. The only problem there is unencrypted data:
with Gateway MT the data can only be server-side encrypted at the moment; client-side encryption for Gateway MT is not implemented yet.
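
To illustrate that difference for the 300 TB example above (assuming the ~2.7x expansion happens on the client with native uplink, and on the gateway side with Gateway MT):

```go
package main

import "fmt"

func main() {
	const (
		dataTB    = 300.0
		expansion = 2.7
	)
	// Native uplink: erasure coding happens on the client, so the client
	// uploads the expanded amount itself.
	fmt.Printf("native uplink, client uploads: %.0f TB\n", dataTB*expansion)
	// Gateway MT: the client uploads 1x; the gateway does the expansion and
	// pushes the expanded amount to the storage nodes from its side.
	fmt.Printf("Gateway MT, client uploads:    %.0f TB\n", dataTB)
	fmt.Printf("Gateway MT, gateway egress:    %.0f TB\n", dataTB*expansion)
}
```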

Just noticed that in the meantime Wasabi also offers a hardware solution to help customers transfer large amounts of data onto their storage:

Disappointed that their ball isn’t in the shape of a ball.

Balls are overrated.

It seems that for some reason such devices must be named ‘*ball’:

https://www.backblaze.com/b2/solutions/datatransfer/fireball.html
