I just ran an rclone test on my small 512 MB Vultr VM. It uploaded ten 64 MB files to S3 in 7.36 seconds:
[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir s3:hashbackup-us-east-2/rcdir --no-traverse --s3-max-upload-parts 1
Command being timed: "rclone copy rcdir s3:hashbackup-us-east-2/rcdir --no-traverse --s3-max-upload-parts 1"
User time (seconds): 3.90
System time (seconds): 2.49
Percent of CPU this job got: 86%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.36
Maximum resident set size (kbytes): 95720
Minor (reclaiming a frame) page faults: 25401
Voluntary context switches: 1469
Involuntary context switches: 530
File system inputs: 2302808
Page size (bytes): 4096
Exit status: 0
[root@hbtest ~]# ls -l rcdir
total 625036
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f1
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f10
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f2
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f3
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f4
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f5
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f6
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f7
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f8
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f9
[root@hbtest ~]# rclone lsl s3:hashbackup-us-east-2/rcdir
64000000 2021-10-12 20:48:46.442223548 f1
64000000 2021-10-12 20:49:08.357387339 f10
64000000 2021-10-12 20:48:55.334290008 f2
64000000 2021-10-12 20:48:56.662299933 f3
64000000 2021-10-12 20:48:57.800308439 f4
64000000 2021-10-12 20:48:58.836316182 f5
64000000 2021-10-12 20:49:00.015324992 f6
64000000 2021-10-12 20:49:01.120333252 f7
64000000 2021-10-12 20:49:02.342342386 f8
64000000 2021-10-12 20:49:04.109355592 f9
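For reference, here's a sketch of how test files like these can be generated. The sizes and names match the listing above, but the exact commands are my reconstruction, not a transcript:

# create 10 files of 64,000,000 random bytes each (hypothetical reconstruction)
mkdir -p rcdir
for i in $(seq 1 10); do
    head -c 64000000 /dev/urandom > rcdir/f$i
done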
I deleted that copy from S3 and tried rclone with no options. It still ran in 7 seconds:
[root@hbtest ~]# rclone purge s3:hashbackup-us-east-2/rcdir
[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir s3:hashbackup-us-east-2/rcdir
Command being timed: "rclone copy rcdir s3:hashbackup-us-east-2/rcdir"
User time (seconds): 3.64
System time (seconds): 2.42
Percent of CPU this job got: 86%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.04
Maximum resident set size (kbytes): 95396
Minor (reclaiming a frame) page faults: 25103
Voluntary context switches: 1242
Involuntary context switches: 513
File system inputs: 2494368
File system outputs: 0
Page size (bytes): 4096
Exit status: 0
Not sure why you’re seeing 20 seconds. Do you have a lot of objects in your S3 bucket? rclone lists the remote contents when it starts up because it is normally designed for syncing directories, so if you have a bunch of stuff on your remote, it can take a while to list it all before the transfers even begin. That's what the --no-traverse option in my first test skips.
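If you want to check whether listing is the bottleneck, something like this should show it (the bucket name is a placeholder):

# time a full recursive listing of the bucket; a big object count here means slow startup
time rclone size s3:your-bucket

# skip the destination listing entirely when copying brand-new files
rclone copy rcdir s3:your-bucket/rcdir --no-traverse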
I tried to copy the same rcdir to Storj using the native uplink (Tardigrade) rclone remote. There was nothing in my Storj bucket before the copy. It ran for 5m24s before being killed by signal 9, I guess because this VM doesn’t have much RAM and only 1 CPU, and the native remote does its erasure coding on the client, which is hard on both:
[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir sj:hbtest
Command terminated by signal 9
Command being timed: "rclone copy rcdir sj:hbtest"
User time (seconds): 8.91
System time (seconds): 57.22
Percent of CPU this job got: 20%
Elapsed (wall clock) time (h:mm:ss or m:ss): 5:24.44
Maximum resident set size (kbytes): 326004
Major (requiring I/O) page faults: 43381
Minor (reclaiming a frame) page faults: 176513
Voluntary context switches: 101565
Involuntary context switches: 1041
File system inputs: 223476872
File system outputs: 117560
Page size (bytes): 4096
Exit status: 0
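If anyone wants to retry the native remote on a box this small, cutting rclone's concurrency might keep it under the memory limit. These are standard rclone flags, but whether they're enough here is a guess:

# erasure-code and upload one file at a time instead of four in parallel
rclone copy rcdir sj:hbtest --transfers 1 --checkers 1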
Then I did the same thing with the Storj multi-tenant (MT) S3 gateway, where the erasure coding happens server-side. The copy took 23 seconds:
[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir sjs3:hbtest/rcdir
Command being timed: "rclone copy rcdir sjs3:hbtest/rcdir"
User time (seconds): 4.76
System time (seconds): 2.60
Percent of CPU this job got: 31%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:23.11
Maximum resident set size (kbytes): 100288
Minor (reclaiming a frame) page faults: 25795
Voluntary context switches: 13814
Involuntary context switches: 309
File system inputs: 2472952
Page size (bytes): 4096
Exit status: 0
[root@hbtest ~]# rclone lsl sjs3:hbtest/rcdir
64000000 2021-10-12 20:49:08.357387339 f10
64000000 2021-10-12 20:49:01.120333252 f7
64000000 2021-10-12 20:49:04.109355592 f9
64000000 2021-10-12 20:48:58.836316182 f5
64000000 2021-10-12 20:48:57.800308439 f4
64000000 2021-10-12 20:49:02.342342386 f8
64000000 2021-10-12 20:49:00.015324992 f6
64000000 2021-10-12 20:48:56.662299933 f3
64000000 2021-10-12 20:48:46.442223548 f1
64000000 2021-10-12 20:48:55.334290008 f2
[root@hbtest ~]#
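For context, the sjs3 remote is just a plain rclone S3 remote pointed at Storj's hosted gateway. A remote like it would be configured along these lines in rclone.conf (the endpoint shown is Storj's public gateway address; the credentials are placeholders):

[sjs3]
type = s3
provider = Other
access_key_id = <gateway access key>
secret_access_key = <gateway secret key>
endpoint = gateway.storjshare.io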
I re-ran the S3 tests with -P since I noticed you used that option:
[root@hbtest ~]# rclone purge s3:hashbackup-us-east-2/rcdir
[root@hbtest ~]# /usr/bin/time -v rclone -P copy rcdir s3:hashbackup-us-east-2/rcdir
Transferred: 610.352Mi / 610.352 MiByte, 100%, 85.382 MiByte/s, ETA 0s
Transferred: 10 / 10, 100%
Elapsed time: 7.7s
Command being timed: "rclone -P copy rcdir s3:hashbackup-us-east-2/rcdir"
User time (seconds): 3.69
System time (seconds): 2.37
Percent of CPU this job got: 77%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.81
Maximum resident set size (kbytes): 95796
Minor (reclaiming a frame) page faults: 25377
Voluntary context switches: 1547
Involuntary context switches: 469
File system inputs: 2489792
Page size (bytes): 4096
Exit status: 0