Can't upload large files

Hi, I am quite new to Storj but not to S3. I am using s3-client, a Python library, to upload a large file (> 3 GB) that I could not upload via Uplink (the upload starts fast and slows down until it stops completely).

This is the command I ran:

s3-client -r europe-west-1 -e upload realestate -f '/home/aureliano/eclipse-workspace/real_estate/data/DVFPlus_4-0_SQL_LAMB93_R084-ED201/1_DONNEES_LIVRAISON/dvf_departements.sql'

Uploading file /home/aureliano/eclipse-workspace/real_estate/data/DVFPlus_4-0_SQL_LAMB93_R084-ED201/1_DONNEES_LIVRAISON/dvf_departements.sql with object name /home/aureliano/eclipse-workspace/real_estate/data/DVFPlus_4-0_SQL_LAMB93_R084-ED201/1_DONNEES_LIVRAISON/dvf_departements.sql
data transferred:   0%|                                                                                          | 0.00/3.79G [00:00<?, ?B/s]
Traceback (most recent call last):
  File "/home/aureliano/.local/lib/python3.8/site-packages/boto3/s3/", line 279, in upload_file
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3transfer/", line 106, in result
    return self._coordinator.result()
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3transfer/", line 265, in result
    raise self._exception
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3transfer/", line 126, in __call__
    return self._execute_main(kwargs)
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3transfer/", line 150, in _execute_main
    return_value = self._main(**kwargs)
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3transfer/", line 332, in _main
    response = client.create_multipart_upload(
  File "/home/aureliano/.local/lib/python3.8/site-packages/botocore/", line 357, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/aureliano/.local/lib/python3.8/site-packages/botocore/", line 676, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (301) when calling the CreateMultipartUpload operation: Moved Permanently

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/aureliano/.local/bin/s3-client", line 8, in <module>
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3_client/", line 878, in main
    args.func(s3, args)
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3_client/", line 804, in cmd_upload
    upload_single_file(s3, args.bucket, args.filename, args.nokeepdir)
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3_client/", line 783, in upload_single_file
    s3.upload_file(bucket_name, file_name, key_name)
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3_client/", line 310, in wrapped_f
    result = func(*args, **kwargs)
  File "/home/aureliano/.local/lib/python3.8/site-packages/s3_client/", line 539, in upload_file
  File "/home/aureliano/.local/lib/python3.8/site-packages/boto3/s3/", line 207, in bucket_upload_file
    return self.meta.client.upload_file(
  File "/home/aureliano/.local/lib/python3.8/site-packages/boto3/s3/", line 129, in upload_file
    return transfer.upload_file(
  File "/home/aureliano/.local/lib/python3.8/site-packages/boto3/s3/", line 285, in upload_file
    raise S3UploadFailedError(
boto3.exceptions.S3UploadFailedError: Failed to upload /home/aureliano/eclipse-workspace/real_estate/data/DVFPlus_4-0_SQL_LAMB93_R084-ED201/1_DONNEES_LIVRAISON/dvf_departements.sql to realestate//home/aureliano/eclipse-workspace/real_estate/data/DVFPlus_4-0_SQL_LAMB93_R084-ED201/1_DONNEES_LIVRAISON/dvf_departements.sql: An error occurred (301) when calling the CreateMultipartUpload operation: Moved Permanently

I am quite lost in the discussions: has this issue been fixed in the beta release?

Are there any suggestions for clients with a verified history of successful uploads?

Thanks in advance for the support.
DevOps at

Below is a simple lab I ran using the AWS CLI. I also love using rclone.

# setup aws cli
aws configure
# entered access key and secret key, and left region and output at the defaults
# display buckets
aws s3 ls --endpoint-url=
# Output
# Returns nothing but seems to work
# Making a bucket
aws s3 mb s3://testbucket --endpoint-url=
# Output (It works!)
make_bucket: testbucket
# Copy test 2gb archive to bucket
aws s3 cp /Users/dominickmarino/Desktop/ s3://testbucket --endpoint-url=
# Observation during (completed 99.8 MiB/1.9 GiB (1.1 MiB/s) with 1 file(s) remaining) (around 20Mbps)
# Output
upload: Desktop/ to s3://testbucket/
# List the file we uploaded
aws s3 ls s3://testbucket --endpoint-url=
# Output
2021-01-22 11:18:51 1996783599
# Copy file back
aws s3 cp s3://testbucket/ /Users/dominickmarino/Desktop/ --endpoint-url=
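The AWS CLI switches to multipart uploads automatically for large files; if the defaults need tuning (e.g. for a slow link), the thresholds can be set in `~/.aws/config`. A sketch, where the 64 MB values are just an example:

```ini
[default]
s3 =
    multipart_threshold = 64MB
    multipart_chunksize = 64MB
    max_concurrent_requests = 4
```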

Rclone Demo
# setup rclone
rclone config
# select n (New Remote)
# name
# select 4 (4 / Amazon S3 Compliant Storage Provider)
# select 13 (13 / Any other S3 compatible provider)
# select 1 (1 / Enter AWS credentials in the next step \ “false”)
# enter access key
# enter secret key
# select 1 ( 1 / Use this if unsure. Will use v4 signatures and an empty region.\ “”)
# enter endpoint
# use default location_constraint
# use default ACL
# edit advanced config
# review config and select default
# quit config
# make bucket and path
rclone mkdir s3rctest:testpathforvideo
# list bucket:path
rclone lsf s3rctest:
# copy video over
rclone copy --progress /Users/dominickmarino/Desktop/Screen\ Recording\ 2021-03-12\ at\ 10.48.10\ s3rctest:testpathforvideo/videos
# list file uploaded
rclone ls s3rctest:testpathforvideo
# output (40998657 videos/Screen Recording 2021-03-12 at 10.48.10)
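For reference, the interactive `rclone config` steps above end up writing a remote section like the following to `~/.config/rclone/rclone.conf` (the keys and endpoint here are placeholders, not real values):

```ini
[s3rctest]
type = s3
provider = Other
env_auth = false
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
endpoint = gateway.example.invalid
```

Editing this file directly is equivalent to re-running the interactive config.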

Hi Dominic,
thanks for the support. Indeed, using the AWS CLI solved the problem. I've set up a tutorial that makes use of Storj as the storage source; for info