How to speed up upload/download file time?

Hello,
I am developing an Android app using storj sdk where people can upload and download images.
I followed the documentation, but the upload always takes at least around 1.7–1.8 seconds per image, which is a lot for a responsive mobile application.

Does anyone have an idea of how to speed up this process? Here is the code I have tried, all with the same results.

  1. The solution shown at GitHub - storj/uplink-android: Storj network Android library
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
final byte[] data = baos.toByteArray();
uplink = new Uplink(UplinkOption.tempDir(getCacheDir().getPath()));
long startTime = System.nanoTime();
try (Project project = uplink.openProject(access);
     ObjectOutputStream out = project.uploadObject("demo-bucket", user.getUid() + "/" + nomeFile);
     InputStream in = new ByteArrayInputStream(data)) {
    byte[] buffer = new byte[8 * 1024];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead);
    }
    out.commit();
    long endTime = System.nanoTime();
    Log.e("Measure", "------TASK took: " + ((endTime - startTime) / 1000000) + " ms\n");
    Intent intent = new Intent(UploadActivity.this, MainActivity.class);
    startActivity(intent);
} catch (StorjException | IOException e) {
    throw new RuntimeException(e);
}
  2. Writing it all at once with an UploadTask
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
final byte[] data = baos.toByteArray();
UploadTask uploadTask = new UploadTask(access, "demo-bucket", user.getUid() + "/" + nomeFile, getCacheDir().getPath(), getApplicationContext(), data);
uploadTask.execute();
Intent intent = new Intent(UploadActivity.this, MainActivity.class);
startActivity(intent);
  3. Using an AsyncTask as suggested at GitHub - storj/uplink-android: Storj network Android library
protected Exception doInBackground(Void... params) {
    Uplink uplink = new Uplink(UplinkOption.tempDir(mTempDir));
    try (Project project = uplink.openProject(mAccess);
         InputStream in = new ByteArrayInputStream(data);
         ObjectOutputStream out = project.uploadObject(mBucket, mObjectKey)) {
        byte[] buffer = new byte[128 * 1024];
        int len;
        while ((len = in.read(buffer)) != -1) {
            if (isCancelled()) {
                // exiting the try-with-resources block without commit aborts the upload
                return null;
            }
            out.write(buffer, 0, len);
            if (isCancelled()) {
                return null;
            }
            publishProgress((long) len);
        }
        out.commit();
    } catch (StorjException | IOException e) {
        return e;
    }

    return null;
}

None of the above attempts uploaded the images in an acceptable time, and I'm not talking about many megabytes: each image weighs between 0.5 and 1 MB.

As for the download time, with the following code it takes around 1 second to download each image (same sizes as above), and I would be happy to decrease this too.

try (Project project = uplink.openProject(access);
     InputStream in = project.downloadObject("demo-bucket", user.getUid() + "/" + link)) {
    ImageView imageView = (ImageView) v.findViewById(R.id.singleImage);
    imageView.setImageBitmap(BitmapFactory.decodeStream(in));
    ((LinearLayoutCompat) activity.findViewById(R.id.allImagesLinearLayout)).addView(v);
    long endtime = System.nanoTime();
    Log.e("Measure", "------TASK took: " + ((endtime - startTime) / 1000000) + " ms\n");
}
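For scale, sequential fetches like the loop above pay the ~1-second latency once per image. Below is a minimal, Storj-free sketch of overlapping several downloads with an `ExecutorService`; `fetchImage` is a hypothetical stand-in for `project.downloadObject(...)` that simulates latency with a sleep:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetch {
    // Hypothetical stand-in for project.downloadObject(...): simulates
    // ~200 ms of network latency and returns fake image bytes.
    static byte[] fetchImage(String key) throws InterruptedException {
        Thread.sleep(200);
        return new byte[]{(byte) key.hashCode()};
    }

    public static void main(String[] args) throws Exception {
        List<String> keys = List.of("a.png", "b.png", "c.png", "d.png");
        ExecutorService pool = Executors.newFixedThreadPool(keys.size());
        long start = System.nanoTime();

        List<Future<byte[]>> pending = new ArrayList<>();
        for (String key : keys) {
            pending.add(pool.submit(() -> fetchImage(key)));
        }
        int fetched = 0;
        for (Future<byte[]> f : pending) {
            f.get(); // on Android, post the decoded bitmap back to the UI thread here
            fetched++;
        }
        pool.shutdown();

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // the four simulated 200 ms fetches overlap instead of summing to ~800 ms
        System.out.println("fetched " + fetched + " images in ~" + elapsedMs + " ms");
    }
}
```

On Android the same idea applies with whatever threading primitive the app already uses (AsyncTask in this thread, or an executor), as long as the decode and the network call stay off the UI thread.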

Any suggestions?
Thanks

Uploads go to multiple nodes using 110 connections per segment (64 MB or less); once the first 80 uploads finish, the remaining ones are cancelled. Uploaded data also has an expansion factor due to erasure codes and encryption (~x2.76), plus long-tail cancellation (110 pieces instead of 80, ~x1.38 in the unhappy case), so you need enough upstream bandwidth for that. If your upstream is not maxed out, you may increase the number of simultaneous transfers (files). Increasing parallelism (how many chunks of the same file are transferring) does not make much sense for files smaller than 64 MB.
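A back-of-the-envelope sketch of those factors, using a 1 MB image as an example (the 2.76x and 110/80 figures are the ones quoted above):

```java
import java.util.Locale;

public class UploadOverhead {
    public static void main(String[] args) {
        double fileMb = 1.0;            // a ~1 MB image, as in the question
        double expansion = 2.76;        // erasure coding + encryption overhead
        double longTail = 110.0 / 80.0; // worst case: all 110 pieces complete (~1.38)

        // Bytes actually sent upstream, best and worst case:
        double happy = fileMb * expansion;
        double unhappy = fileMb * expansion * longTail;
        System.out.printf(Locale.ROOT, "happy case: %.2f MB sent upstream%n", happy);
        System.out.printf(Locale.ROOT, "unhappy case: %.2f MB sent upstream%n", unhappy);
    }
}
```

So even a 1 MB image can mean roughly 2.8–3.8 MB on the wire, which is why upstream bandwidth dominates the upload time.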

The other solution could be to use the Storj-hosted S3-compatible Gateway instead - it uses one connection per transfer and there is no expansion factor involved, but it uses server-side encryption.

But isn't that a compatibility layer meant for migrating files from Amazon S3 to Storj? Wouldn't I then have to upload files to two different clouds?

By the way, in the docs I only see it used in the web interface, in the CLI, and in a Go repository.
I would like to adopt a solution that works for Android, Java, Node.js, and Swift.

Yes, you can migrate it using https://www.cloudflyer.io/ (for large amounts of data) or simply with rclone: configure two remotes, one for your current storage provider and the second for Storj DCS (native or S3), then transfer your data as simply as

rclone sync -P aws: storj:

but it will use your upstream/downstream bandwidth to migrate your data.

Ok, but am I missing something, or are these operations I can only run from the CLI, and not dynamically from my Android app with the Storj SDK linked?

In addition, it seems like I would have to pay storage and bandwidth for both storage accounts.

Our developers are working on some speed improvements when it comes to fetching the first segment, which in your case sounds like the majority of your files. They are currently in testing/dev with those changes so I don’t have an ETA on it. But it’s actively being worked on right now.


Yes, that's my case. Thank you, I hope to hear about this improvement soon.

I guess you think that our S3-compatible gateway uses Amazon? The answer is no. It still uses Storj DCS, just the S3 protocol instead of the native one, and all encryption happens on the S3 gateway; the pieces are then distributed across the network as usual.

To use the S3 protocol you may generate S3 credentials in the satellite UI, with the uplink CLI, or with the bindings, and use them to configure S3 access in your application using the AWS S3 SDK or any other S3-compatible SDK; you just need to provide the generated Access Key, Secret Key, and Endpoint. The Endpoint tells the library to contact our S3 gateway, not Amazon's.
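As an illustration, here is a sketch of that configuration using the AWS SDK for Java v1 (the builder-style client is my choice here; the region string is a placeholder, since the endpoint is what actually routes the traffic):

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class StorjS3Client {
    public static AmazonS3 build(String accessKey, String secretKey) {
        // The endpoint is what makes requests go to Storj's gateway instead
        // of Amazon. The region value is required by the builder but is not
        // meaningful for the Storj gateway.
        return AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://gateway.storjshare.io", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(accessKey, secretKey)))
                .build();
    }
}
```

A client built this way can be passed to TransferUtility or used directly with putObject, and every call will go to the Storj gateway rather than AWS.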


That's clearer now, thanks. Over the last few days I have tried several approaches, but, for example, looking at How to upload files from an Android app to aws s3 - Blog, how can I generate the sessionToken needed to build the S3 credentials? I saw it is used in many libraries, so I think it is used in every SDK.

And what about the region (when building S3BucketData): which should I select? The closest to me?

This is what I tried, but it doesn't work: neither log is ever printed.
(I am using the access key and secret key generated with uplink share --public --register sj://my-bucket/.)

AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3Client s3 = new AmazonS3Client(credentials);
s3.setRegion(Region.getRegion(Regions.EU_CENTRAL_1));
TransferUtility transferUtility = TransferUtility.builder().context(activity).s3Client(s3).build();
TransferObserver transferObserver = transferUtility.upload(myBucket, myObjectKey, tempFile, CannedAccessControlList.PublicRead);
transferObserver.setTransferListener(new TransferListener() {
    @Override
    public void onStateChanged(int id, TransferState state) {
        Log.e("UPP", "ENTERS");
        if (state == TransferState.COMPLETED) {
            Log.e("UPP", "CARICATO CON S3");
....

while the file is generated from the byte array containing the image data:

File tempFile = null;
try {
    tempFile = File.createTempFile("temp", null);
    tempFile.deleteOnExit();
    // try-with-resources so the stream is closed and the data is flushed to disk
    try (FileOutputStream out = new FileOutputStream(tempFile)) {
        IOUtils.copy(new ByteArrayInputStream(data), out);
    }
} catch (IOException e) {
    throw new RuntimeException(e);
}

The region is not used by Storj DCS; our Storj-hosted S3-compatible gateway is available globally, so you can provide anything there. However, you must set the endpoint to https://gateway.storjshare.io, otherwise the SDK will try to contact Amazon instead of Storj DCS.
I do not see an endpoint provided anywhere in your code, so it will not work.
You need to set the endpoint explicitly.


Thank you, downloading with a direct URL and uploading with S3 sped up the process.
It was indeed the missing endpoint and, in case someone else needs it, I think it also depended on declaring the right services in the application manifest.
In addition, I changed the method to s3.putObject(…) using an InputStream instead of the file, and it worked.
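For anyone following along, a sketch of what that stream-based upload might look like with the AWS SDK v1 overload putObject(bucket, key, InputStream, ObjectMetadata); the content type here is an assumption:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import java.io.ByteArrayInputStream;

public class S3StreamUpload {
    // 'data' is the PNG byte array from bitmap.compress(...); 's3' is a client
    // configured with the gateway.storjshare.io endpoint.
    static void upload(AmazonS3 s3, String bucket, String key, byte[] data) {
        ObjectMetadata meta = new ObjectMetadata();
        // With an InputStream the SDK cannot infer the length, so set it
        // explicitly to avoid the SDK buffering the whole stream in memory.
        meta.setContentLength(data.length);
        meta.setContentType("image/png");
        s3.putObject(bucket, key, new ByteArrayInputStream(data), meta);
    }
}
```

This skips the temp-file step entirely, since the bytes are already in memory.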

Thank you again
