Performance Testing

I’ve been doing some reading and ran across the Electric Cars page. It says:

We are really proud of our 99.99999999% durability, 99.95% availability, S3 compatibility, and upload/download speeds that are on par (or better) than the big three cloud providers.

Are there actual performance tests that confirm Storj’s upload & download speeds are on par or better than S3, Azure, and Google Cloud Storage?

Here are results from a recent HashBackup test.

I also ran a few uploads and downloads of a 10MB file with uplink to try to reproduce the performance numbers in the Measuring Production Readiness post. For uploads, today I saw 6.96s, 3.7s, 7.19s, 5.72s, and 6.92s for 10MB. According to that post, I should have seen only 2-3 uploads slower than 2.31s. For downloads I saw 3.43s, 2.92s, 3.62s, 3.12s, and 2.23s, but should have seen only 2-3 slower than 1.73s. This test and the HashBackup test above were run on a VPS with 20ms ping times to us1.storj.io.
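For reference, this kind of test is just a timed loop of uplink copies; a minimal sketch (the bucket name and loop are placeholders, not the exact commands used):

# create a 10MB random test file
dd if=/dev/urandom of=test10mb bs=1M count=10
# time 5 uploads and 5 downloads with uplink
for i in 1 2 3 4 5; do
    /usr/bin/time -f "up %es" uplink cp test10mb sj://perftest/test10mb-$i
    /usr/bin/time -f "down %es" uplink cp sj://perftest/test10mb-$i /tmp/test10mb-$i
done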

The performance numbers aren’t terrible, but I have read in several places in Storj literature that performance is “on par or better” than the top storage services. Can anyone demonstrate that?

2 Likes

Have you checked the Hotrodding Decentralized Storage post?
Can you implement parallelism in HashBackup too?

I just re-ran the uplink tests from a 4-CPU Vultr VM in Chicago. Maybe it’s only 2 cores with hyperthreading.

uplink upload results with a 10MB file: 4.19s, 4.32s, 4.39s, 8.66s, 5.62s, 8.05s, 3.83s, 4.08s, 4.34s, 6.2s

uplink download results with a 10MB file: 3.39s, 1.71s, 1.70s, 2.17s, 1.98s, 1.82s, 2.16s, 2.45s, 1.9s, 1.57s

The download test results are close to the times in the “Readiness” post, but the upload times seem pretty far off; the median in that post was 1.94s.

I’ll post some HashBackup test results here later.

HashBackup implements parallelism for all destination types including Storj native (uplink) and Storj S3 MT Gateway.

Here is a comparison of Storj S3, Storj native uplink, Amazon S3, GCS, and B2 for backups, restores, and clears (deletes), with 1 destination worker and with 4, backing up and restoring /usr on CentOS 7. The backup covers ~1.5GB of data in ~34K files and creates ~630MB of backup files - about 10 of 64MB each. I didn’t use multipart transfers because the files transferred are right at 64MB, GCS & B2 don’t support it (a HashBackup limitation), and multipart is slower on S3 for this file size.

Here’s the test script. Run it with the destination name as argument:

# point the backup at destination $1 and clear any previous backup data
hb dest -c hb setid $1 --force; hb clear -c hb --force; hb dest -c hb setid $1 --force # for shell destination
# back up /usr
hb config -c hb cache-size-limit -1; /usr/bin/time -v hb backup -c hb -v1 /usr
# remove local arc files and the restored tree, then restore /usr from the destination (--no-local forces a download)
hb config -c hb cache-size-limit 0; rm -rf hb/arc.* usr; /usr/bin/time -v hb get -c hb -v1 /usr --no-local
# delete the backup from the destination
/usr/bin/time -v hb clear -c hb --force
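Saved as, say, perftest.sh (the name here is arbitrary), it is run once per destination defined in dest.conf, e.g.:

./perftest.sh sjs3    # Storj S3 MT gateway destination
./perftest.sh s3      # Amazon S3 destination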

Test results:

╔═══════════════════════╦════════════╦═══════════════╗
║          Job          ║   Seconds  ║     MB Ram    ║
╠═══════════════════════╩════════════╩═══════════════╣
║     One thread, 34K files/1.5G, 10x63MB backup     ║
╠═══════════════════════╦════════════╦═══════════════╣
║ Backup, SJ Uplink, 1  ║ 54, 55, 52 ║ 190, 203, 195 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, SJ S3 MT, 1   ║ 55, 49, 49 ║ 109, 105, 108 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, S3, 1         ║ 28, 28, 29 ║ 114, 106, 111 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, GS, 1         ║ 29, 29, 28 ║ 107, 108, 115 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, B2, 1         ║ 32, 41, 40 ║ 115, 115, 114 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, SJ Uplink, 1 ║ 50, 53, 50 ║ 119, 120, 120 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, SJ S3 MT, 1  ║ 48, 43, 44 ║ 122, 119, 121 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, S3, 1        ║ 42, 41, 41 ║ 123, 120, 122 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, GS, 1        ║ 42, 43, 40 ║ 120, 118, 120 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, B2, 1        ║ 40, 41, 40 ║ 129, 128, 124 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, SJ Uplink, 1   ║ 28, 48, 30 ║ 39, 39, 39    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, SJ S3 MT, 1    ║ 36, 30, 49 ║ 44, 42, 42    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, S3, 1          ║ 2, 1, 2    ║ 42, 42, 42    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, GS, 1          ║ 3, 4, 3    ║ 42, 42, 42    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, B2, 1          ║ 4, 3, 3    ║ 46, 46, 46    ║
╠═══════════════════════╩════════════╩═══════════════╣
║             Same backup with 4 threads             ║
╠═══════════════════════╦════════════╦═══════════════╣
║ Backup, SJ Uplink, 4  ║ 42, 44, 43 ║ 198, 199, 209 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, SJ S3 MT, 4   ║ 39, 36, 36 ║ 107, 113, 114 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, S3, 4         ║ 28, 29, 30 ║ 115, 113, 114 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, GS, 4         ║ 30, 29, 29 ║ 109, 110, 114 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, B2, 4         ║ 32, 33, 33 ║ 119, 123, 125 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, SJ Uplink, 4 ║ 47, 45, 44 ║ 117, 121, 118 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, SJ S3 MT, 4  ║ 43, 43, 44 ║ 126, 127, 122 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, S3, 4        ║ 41, 43, 42 ║ 126, 127, 127 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, GS, 4        ║ 42, 43, 42 ║ 124, 128, 127 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, B2, 4        ║ 49, 44, 47 ║ 137, 138, 136 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, SJ Uplink, 4   ║ 11, 12, 10 ║ 40, 40, 42    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, SJ S3 MT, 4    ║ 9, 14, 13  ║ 45, 47, 47    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, S3, 4          ║ 1, 2, 2    ║ 45, 45, 45    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, GS, 4          ║ 2, 2, 1    ║ 45, 45, 45    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Clear, B2, 4          ║ 6, 2, 4    ║ 50, 52, 49    ║
╠═══════════════════════╩════════════╩═══════════════╣
║    4 threads, single 630MB file, 10x63MB backup    ║
╠═══════════════════════╦════════════╦═══════════════╣
║ Backup, SJ Uplink, 4  ║ 28, 25, 28 ║ 208, 220, 213 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, SJ S3 MT, 4   ║ 24, 21, 22 ║ 153, 155, 152 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, S3, 4         ║ 15, 10, 10 ║ 151, 152, 156 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, GS, 4         ║ 9, 10, 10  ║ 156, 155, 153 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Backup, B2, 4         ║ 15, 14, 14 ║ 162, 161, 159 ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, SJ Uplink, 4 ║ 28, 31, 30 ║ 74, 75, 74    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, SJ S3 MT, 4  ║ 17, 18, 15 ║ 63, 63, 61    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, S3, 4        ║ 11, 13, 14 ║ 61, 61, 63    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, GS, 4        ║ 10, 10, 10 ║ 64, 62, 63    ║
╠═══════════════════════╬════════════╬═══════════════╣
║ Restore, B2, 4        ║ 10, 13, 11 ║ 70, 71, 72    ║
╚═══════════════════════╩════════════╩═══════════════╝
Created with https://www.tablesgenerator.com

The restore test of 34K files is limited by HashBackup rather than network I/O, because it has to issue many file system calls for each file to set permissions, attributes, timestamps, etc. The restore test of a single large file shows network differences better.

4 Likes

Thanks for sharing the results, @hashbackup. It’s always good to have more data about performance, especially when multiple options are compared.

I tried to check what HashBackup does, but as far as I can see it’s not open source, so I couldn’t check exactly what it does.

For this reason I switched back to rclone and ran a similar test (10 * 64MB upload, 4 threads).
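The test was roughly of this shape (a sketch; the remote names and bucket path are placeholders):

# 10 x 64MB of random data
mkdir -p testdir
for i in $(seq 1 10); do dd if=/dev/urandom of=testdir/f$i bs=1M count=64; done
# native backend, 4 parallel transfers
time rclone copy --transfers 4 testdir storj:perftest/testdir
# s3 backend (gateway), 4 parallel transfers
time rclone copy --transfers 4 testdir storjs3:perftest/testdir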

Here are my results:

+-------------------------------------------------------+------------+
| Job                                                   | Seconds    |
+-------------------------------------------------------+------------+
| Backup, rclone, native backend (4 threads)            | 31, 31, 32 |
+-------------------------------------------------------+------------+
| Backup, rclone, s3 backend (4 * 4 threads)            | 21, 19, 18 |
+-------------------------------------------------------+------------+
| Backup, uplink, xargs 4 threads                       | 32, 31, 34 |
+-------------------------------------------------------+------------+
| Backup, uplink, xargs 1 thread + 4 parallelism        | 50, 48     |
+-------------------------------------------------------+------------+
| Backup, 4clone, s3 backend (4 * 1 thread, 64MB chunk) | 22, 20, 20 |
+-------------------------------------------------------+------------+

These numbers are very similar to your GS/B2/S3 results (~30 seconds). The s3 backend is significantly faster, but I assume that’s because of rclone’s in-memory buffering / pre-read, so GS/B2/Amazon S3 might also be faster with rclone (I didn’t check).

BTW I used the eu1 gateway, which runs on newer hardware than us1 (AFAIK).
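For reference, an s3-backend remote pointed at the eu1 gateway looks roughly like this in rclone.conf (remote name and keys are placeholders):

[storjs3]
type = s3
provider = Other
access_key_id = xxx
secret_access_key = xxx
endpoint = gateway.eu1.storjshare.io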

2 Likes

I think that for a similar comparison, you would need to run the same rclone tests with Amazon S3, GCS, and B2.

HashBackup is doing a lot of work that rclone is not doing, like reading 35K files in /usr, compressing them, deduplicating data, computing SHA1 hashes for every data block and every file, etc. That’s what takes 30s.
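As a rough illustration of how much of that 30s is pure client-side work, just reading and SHA1-hashing the same tree gives a lower bound (this is illustrative, not something from the test above):

# read and hash every file under /usr
time find /usr -type f -print0 | xargs -0 sha1sum > /dev/null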

The last set of test results, where a single 630MB file is split into 10 files, is more comparable to your rclone test. I added that test because I could see that HashBackup was the limiting factor for S3 and GS.

1 Like

Good point. It would be better to have a comparison without any significant overhead on the client side.

I just executed the same test (with the default chunk size and thread count, which is 4 * 4) against AWS S3.

Got these results: 20.1s / 19.7s / 18.5s

Seems to be the same as what I got with Storj + the s3 backend.

time rclone -P copy storj s3:zxstorjtest/test4
time rclone -P copy storj s3:zxstorjtest/test2
2021/10/12 18:41:21 NOTICE: S3 bucket zxstorjtest: Switched region to "us-west-2" from "us-east-1"
Transferred:   	     640Mi / 640 MiByte, 100%, 35.350 MiByte/s, ETA 0s
Transferred:           10 / 10, 100%
Elapsed time:        20.1s

real	0m20.128s
user	0m4.704s
sys	0m1.445s

Hmm, interesting. I don’t understand how HashBackup could generate and upload 10x64MB files to S3 and GS twice as fast as rclone can copy existing files, especially considering that HashBackup is doing a bunch of backup-related work (hashing, deduping, encrypting, etc) that rclone isn’t.

I can try some rclone tests later on the same VM setup I was using before. The only thing I can think of is that I wasn’t using multipart, for reasons I mentioned in the post.

I just ran an rclone test on my small 512MB Vultr VM. It uploaded the 10x64MB files to S3 in 7.36 seconds:

[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir s3:hashbackup-us-east-2/rcdir --no-traverse --s3-max-upload-parts 1
	Command being timed: "rclone copy rcdir s3:hashbackup-us-east-2/rcdir --no-traverse --s3-max-upload-parts 1"
	User time (seconds): 3.90
	System time (seconds): 2.49
	Percent of CPU this job got: 86%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.36
	Maximum resident set size (kbytes): 95720
	Minor (reclaiming a frame) page faults: 25401
	Voluntary context switches: 1469
	Involuntary context switches: 530
	File system inputs: 2302808
	Page size (bytes): 4096
	Exit status: 0
[root@hbtest ~]# ls -l rcdir
total 625036
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f1
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f10
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f2
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f3
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f4
-rw-r--r-- 1 root root 64000000 Oct 12 20:48 f5
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f6
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f7
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f8
-rw-r--r-- 1 root root 64000000 Oct 12 20:49 f9
[root@hbtest ~]# rclone lsl s3:hashbackup-us-east-2/rcdir
 64000000 2021-10-12 20:48:46.442223548 f1
 64000000 2021-10-12 20:49:08.357387339 f10
 64000000 2021-10-12 20:48:55.334290008 f2
 64000000 2021-10-12 20:48:56.662299933 f3
 64000000 2021-10-12 20:48:57.800308439 f4
 64000000 2021-10-12 20:48:58.836316182 f5
 64000000 2021-10-12 20:49:00.015324992 f6
 64000000 2021-10-12 20:49:01.120333252 f7
 64000000 2021-10-12 20:49:02.342342386 f8
 64000000 2021-10-12 20:49:04.109355592 f9

I deleted that copy from s3 and tried rclone with no options. It still ran in 7 seconds:

[root@hbtest ~]# rclone purge s3:hashbackup-us-east-2/rcdir
[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir s3:hashbackup-us-east-2/rcdir
	Command being timed: "rclone copy rcdir s3:hashbackup-us-east-2/rcdir"
	User time (seconds): 3.64
	System time (seconds): 2.42
	Percent of CPU this job got: 86%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.04
	Maximum resident set size (kbytes): 95396
	Minor (reclaiming a frame) page faults: 25103
	Voluntary context switches: 1242
	Involuntary context switches: 513
	File system inputs: 2494368
	File system outputs: 0
	Page size (bytes): 4096
	Exit status: 0

Not sure why you’re seeing 20 seconds. Do you have a lot of stuff in your S3 bucket? rclone likes to list remote contents when it starts up because it is normally designed for syncing directories. If you have a bunch of stuff on your remote, it might be taking a while for rclone to list it before it does the transfers.
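If that’s the issue, rclone’s --no-traverse flag (the one in my run above) avoids listing the destination and might be worth trying (bucket path is a placeholder):

rclone copy rcdir s3:your-bucket/rcdir --no-traverse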

I tried to copy the same rcdir to Storj using the native uplink (Tardigrade) rclone remote. There was nothing in my Storj bucket before the copy. It ran for 5m24s before being killed, I guess because this VM doesn’t have much RAM and only 1 CPU, so trying to do erasure coding on it is difficult:

[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir sj:hbtest
Command terminated by signal 9
	Command being timed: "rclone copy rcdir sj:hbtest"
	User time (seconds): 8.91
	System time (seconds): 57.22
	Percent of CPU this job got: 20%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 5:24.44
	Maximum resident set size (kbytes): 326004
	Major (requiring I/O) page faults: 43381
	Minor (reclaiming a frame) page faults: 176513
	Voluntary context switches: 101565
	Involuntary context switches: 1041
	File system inputs: 223476872
	File system outputs: 117560
	Page size (bytes): 4096
	Exit status: 0

Then I did the same thing with the Storj MT S3 Gateway. The copy took 23s:

[root@hbtest ~]# /usr/bin/time -v rclone copy rcdir sjs3:hbtest/rcdir
	Command being timed: "rclone copy rcdir sjs3:hbtest/rcdir"
	User time (seconds): 4.76
	System time (seconds): 2.60
	Percent of CPU this job got: 31%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:23.11
	Maximum resident set size (kbytes): 100288
	Minor (reclaiming a frame) page faults: 25795
	Voluntary context switches: 13814
	Involuntary context switches: 309
	File system inputs: 2472952
	Page size (bytes): 4096
	Exit status: 0
[root@hbtest ~]# rclone lsl sjs3:hbtest/rcdir
 64000000 2021-10-12 20:49:08.357387339 f10
 64000000 2021-10-12 20:49:01.120333252 f7
 64000000 2021-10-12 20:49:04.109355592 f9
 64000000 2021-10-12 20:48:58.836316182 f5
 64000000 2021-10-12 20:48:57.800308439 f4
 64000000 2021-10-12 20:49:02.342342386 f8
 64000000 2021-10-12 20:49:00.015324992 f6
 64000000 2021-10-12 20:48:56.662299933 f3
 64000000 2021-10-12 20:48:46.442223548 f1
 64000000 2021-10-12 20:48:55.334290008 f2
[root@hbtest ~]# 

I re-ran the S3 tests with -P since I noticed you used that option:

[root@hbtest ~]# rclone purge s3:hashbackup-us-east-2/rcdir
[root@hbtest ~]# /usr/bin/time -v rclone -P copy rcdir s3:hashbackup-us-east-2/rcdir
Transferred:   	  610.352Mi / 610.352 MiByte, 100%, 85.382 MiByte/s, ETA 0s
Transferred:           10 / 10, 100%
Elapsed time:         7.7s
	Command being timed: "rclone -P copy rcdir s3:hashbackup-us-east-2/rcdir"
	User time (seconds): 3.69
	System time (seconds): 2.37
	Percent of CPU this job got: 77%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:07.81
	Maximum resident set size (kbytes): 95796
	Minor (reclaiming a frame) page faults: 25377
	Voluntary context switches: 1547
	Involuntary context switches: 469
	File system inputs: 2489792
	Page size (bytes): 4096
	Exit status: 0

For completeness, here’s the same rclone upload to B2:

[root@hbtest ~]# /usr/bin/time -v rclone -P copy rcdir b2:hashbackup-test/rcdir
Transferred:   	  610.352Mi / 610.352 MiByte, 100%, 67.816 MiByte/s, ETA 0s
Transferred:           10 / 10, 100%
Elapsed time:        10.2s
	Command being timed: "rclone -P copy rcdir b2:hashbackup-test/rcdir"
	User time (seconds): 3.39
	System time (seconds): 1.77
	Percent of CPU this job got: 50%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:10.27
	Maximum resident set size (kbytes): 94812
	Minor (reclaiming a frame) page faults: 25489
	Voluntary context switches: 2220
	Involuntary context switches: 340
	File system inputs: 2473136
	Page size (bytes): 4096
	Exit status: 0

I couldn’t get Google Cloud Storage to work with rclone; it kept giving me 403 Forbidden errors. I was trying to use the S3 compatibility interface, which works fine with HashBackup but not with rclone. Or maybe I don’t have it set up right - very likely.

More likely encryption: it requires at least 1GB of RAM and 1 vCPU.
The erasure coding happens after that.

I added a new command to HashBackup to perform upload, download, and delete testing on destinations. To get the new version do: hb upgrade -p

To run the test, configure the destinations in dest.conf and run:

$ hb dest -c backupdir test

Specific destinations can also be listed to test only those. The test uses a single thread throughout; the point is to measure individual operations, not to see whether a bunch of threads can mask latency. Here’s a run from a small Vultr VM with Storj’s MT S3 Gateway, Amazon S3, Backblaze B2, and Google Cloud Storage via its S3-compatible interface:

[root@hbtest ~]# hb dest -c hb test
HashBackup #2567 Copyright 2009-2021 HashBackup, LLC
Using destinations in dest.conf

---------- Testing sjs3 ----------

  1 KiB:
    Round 0: up: 0.4s, 2.707 KiB/s  down: 0.1s, 7.422 KiB/s  del: 0.2s, 4.178 KiB/s
    Round 1: up: 0.4s, 2.277 KiB/s  down: 0.1s, 8.154 KiB/s  del: 0.6s, 1.752 KiB/s
    Round 2: up: 0.5s, 2.222 KiB/s  down: 0.2s, 4.811 KiB/s  del: 0.3s, 3.638 KiB/s
  > Average: up: 0.4s, 2.383 KiB/s  down: 0.2s, 6.448 KiB/s  del: 0.4s, 2.765 KiB/s

  4 KiB:
    Round 0: up: 0.5s, 8.710 KiB/s  down: 0.2s, 19.200 KiB/s  del: 0.3s, 15.659 KiB/s
    Round 1: up: 0.3s, 11.786 KiB/s  down: 0.1s, 29.798 KiB/s  del: 0.6s, 6.821 KiB/s
    Round 2: up: 0.5s, 8.487 KiB/s  down: 0.2s, 22.754 KiB/s  del: 0.2s, 16.109 KiB/s
  > Average: up: 0.4s, 9.449 KiB/s  down: 0.2s, 23.149 KiB/s  del: 0.4s, 11.008 KiB/s

  16 KiB:
    Round 0: up: 1.5s, 10.887 KiB/s  down: 0.6s, 28.403 KiB/s  del: 0.7s, 22.409 KiB/s
    Round 1: up: 1.4s, 11.409 KiB/s  down: 0.6s, 28.212 KiB/s  del: 0.7s, 21.345 KiB/s
    Round 2: up: 1.3s, 12.130 KiB/s  down: 0.6s, 28.537 KiB/s  del: 0.6s, 27.078 KiB/s
  > Average: up: 1.4s, 11.453 KiB/s  down: 0.6s, 28.383 KiB/s  del: 0.7s, 23.364 KiB/s

  256 KiB:
    Round 0: up: 1.1s, 227.468 KiB/s  down: 0.6s, 454.302 KiB/s  del: 0.8s, 336.443 KiB/s
    Round 1: up: 1.2s, 212.913 KiB/s  down: 0.6s, 438.336 KiB/s  del: 0.8s, 317.257 KiB/s
    Round 2: up: 1.4s, 186.029 KiB/s  down: 0.7s, 392.445 KiB/s  del: 0.6s, 395.325 KiB/s
  > Average: up: 1.2s, 207.347 KiB/s  down: 0.6s, 426.702 KiB/s  del: 0.7s, 346.666 KiB/s

  1 MiB:
    Round 0: up: 1.5s, 675.191 KiB/s  down: 0.8s, 1.204694 MiB/s  del: 1.0s, 1.047874 MiB/s
    Round 1: up: 1.5s, 663.794 KiB/s  down: 0.8s, 1.313229 MiB/s  del: 0.8s, 1.309225 MiB/s
    Round 2: up: 1.5s, 696.350 KiB/s  down: 0.8s, 1.310164 MiB/s  del: 0.7s, 1.444659 MiB/s
  > Average: up: 1.5s, 678.179 KiB/s  down: 0.8s, 1.273977 MiB/s  del: 0.8s, 1.244643 MiB/s

  4 MiB:
    Round 0: up: 1.9s, 2.055211 MiB/s  down: 1.0s, 3.832684 MiB/s  del: 0.7s, 5.576562 MiB/s
    Round 1: up: 1.9s, 2.069060 MiB/s  down: 1.1s, 3.730988 MiB/s  del: 0.6s, 6.707637 MiB/s
    Round 2: up: 1.6s, 2.440236 MiB/s  down: 1.0s, 3.859525 MiB/s  del: 0.8s, 5.195851 MiB/s
  > Average: up: 1.8s, 2.174424 MiB/s  down: 1.1s, 3.806921 MiB/s  del: 0.7s, 5.759628 MiB/s

  16 MiB:
    Round 0: up: 2.4s, 6.802658 MiB/s  down: 1.5s, 10.322733 MiB/s  del: 0.8s, 21.170543 MiB/s
    Round 1: up: 1.9s, 8.425012 MiB/s  down: 1.5s, 10.791809 MiB/s  del: 0.5s, 29.120134 MiB/s
    Round 2: up: 2.3s, 7.043987 MiB/s  down: 1.5s, 10.391271 MiB/s  del: 0.8s, 21.086010 MiB/s
  > Average: up: 2.2s, 7.359063 MiB/s  down: 1.5s, 10.497914 MiB/s  del: 0.7s, 23.255678 MiB/s

  64 MiB:
    Round 0: up: 2.6s, 24.993283 MiB/s  down: 2.4s, 26.805958 MiB/s  del: 0.7s, 96.619087 MiB/s
    Round 1: up: 3.2s, 19.931820 MiB/s  down: 2.1s, 30.333866 MiB/s  del: 0.8s, 78.024717 MiB/s
    Round 2: up: 3.0s, 21.398371 MiB/s  down: 2.1s, 30.186987 MiB/s  del: 0.9s, 67.888598 MiB/s
  > Average: up: 2.9s, 21.911515 MiB/s  down: 2.2s, 29.013975 MiB/s  del: 0.8s, 79.163219 MiB/s

Test complete


---------- Testing s3 ----------

  1 KiB:
    Round 0: up: 0.3s, 2.895 KiB/s  down: 0.1s, 15.359 KiB/s  del: 0.0s, 20.043 KiB/s
    Round 1: up: 0.1s, 13.031 KiB/s  down: 0.0s, 28.622 KiB/s  del: 0.1s, 18.490 KiB/s
    Round 2: up: 0.1s, 15.214 KiB/s  down: 0.0s, 34.051 KiB/s  del: 0.0s, 30.308 KiB/s
  > Average: up: 0.2s, 6.149 KiB/s  down: 0.0s, 23.181 KiB/s  del: 0.0s, 21.903 KiB/s

  4 KiB:
    Round 0: up: 0.1s, 70.972 KiB/s  down: 0.1s, 78.201 KiB/s  del: 0.0s, 102.349 KiB/s
    Round 1: up: 0.1s, 33.943 KiB/s  down: 0.0s, 92.306 KiB/s  del: 0.4s, 9.442 KiB/s
    Round 2: up: 0.0s, 88.363 KiB/s  down: 0.1s, 79.247 KiB/s  del: 0.0s, 104.550 KiB/s
  > Average: up: 0.1s, 54.676 KiB/s  down: 0.0s, 82.782 KiB/s  del: 0.2s, 23.953 KiB/s

  16 KiB:
    Round 0: up: 0.1s, 285.303 KiB/s  down: 0.1s, 262.726 KiB/s  del: 0.0s, 509.961 KiB/s
    Round 1: up: 0.1s, 288.009 KiB/s  down: 0.0s, 464.846 KiB/s  del: 0.0s, 428.449 KiB/s
    Round 2: up: 0.0s, 371.005 KiB/s  down: 0.0s, 569.738 KiB/s  del: 0.0s, 475.454 KiB/s
  > Average: up: 0.1s, 310.156 KiB/s  down: 0.0s, 388.969 KiB/s  del: 0.0s, 468.883 KiB/s

  256 KiB:
    Round 0: up: 0.1s, 1.962494 MiB/s  down: 0.1s, 3.602784 MiB/s  del: 0.0s, 6.195171 MiB/s
    Round 1: up: 0.1s, 1.715585 MiB/s  down: 0.0s, 6.467821 MiB/s  del: 0.0s, 8.378018 MiB/s
    Round 2: up: 0.1s, 2.929155 MiB/s  down: 0.0s, 7.866818 MiB/s  del: 0.0s, 7.375716 MiB/s
  > Average: up: 0.1s, 2.092280 MiB/s  down: 0.0s, 5.363935 MiB/s  del: 0.0s, 7.205371 MiB/s

  1 MiB:
    Round 0: up: 0.1s, 8.883433 MiB/s  down: 0.1s, 15.963766 MiB/s  del: 0.0s, 29.872045 MiB/s
    Round 1: up: 0.1s, 9.825994 MiB/s  down: 0.0s, 23.981429 MiB/s  del: 0.0s, 26.632531 MiB/s
    Round 2: up: 0.1s, 11.896237 MiB/s  down: 0.0s, 26.248687 MiB/s  del: 0.0s, 31.429543 MiB/s
  > Average: up: 0.1s, 10.053614 MiB/s  down: 0.0s, 21.061805 MiB/s  del: 0.0s, 29.171138 MiB/s

  4 MiB:
    Round 0: up: 0.2s, 25.943172 MiB/s  down: 0.1s, 47.268418 MiB/s  del: 0.0s, 129.082317 MiB/s
    Round 1: up: 0.1s, 31.802798 MiB/s  down: 0.1s, 58.338692 MiB/s  del: 0.0s, 127.477726 MiB/s
    Round 2: up: 0.2s, 25.786307 MiB/s  down: 0.1s, 59.137590 MiB/s  del: 0.0s, 139.145713 MiB/s
  > Average: up: 0.1s, 27.581177 MiB/s  down: 0.1s, 54.341156 MiB/s  del: 0.0s, 131.704800 MiB/s

  16 MiB:
    Round 0: up: 0.3s, 50.277775 MiB/s  down: 0.2s, 84.774945 MiB/s  del: 0.0s, 533.634950 MiB/s
    Round 1: up: 0.3s, 47.636630 MiB/s  down: 0.2s, 74.985993 MiB/s  del: 0.0s, 536.407456 MiB/s
    Round 2: up: 0.3s, 50.470809 MiB/s  down: 0.2s, 66.961883 MiB/s  del: 0.0s, 528.541104 MiB/s
  > Average: up: 0.3s, 49.427314 MiB/s  down: 0.2s, 74.877143 MiB/s  del: 0.0s, 532.841212 MiB/s

  64 MiB:
    Round 0: up: 0.6s, 114.027311 MiB/s  down: 0.4s, 174.476303 MiB/s  del: 0.0s, 2.091279687 GiB/s
    Round 1: up: 0.6s, 99.738669 MiB/s  down: 0.5s, 119.034149 MiB/s  del: 0.0s, 1.912887384 GiB/s
    Round 2: up: 0.7s, 88.582912 MiB/s  down: 0.4s, 177.090087 MiB/s  del: 0.0s, 1.907112094 GiB/s
  > Average: up: 0.6s, 99.717834 MiB/s  down: 0.4s, 151.674246 MiB/s  del: 0.0s, 1.966827394 GiB/s

Test complete


---------- Testing b2 ----------

  1 KiB:
    Round 0: up: 0.6s, 1.632 KiB/s  down: 0.3s, 3.369 KiB/s  del: 0.1s, 7.010 KiB/s
    Round 1: up: 0.1s, 10.713 KiB/s  down: 0.1s, 14.378 KiB/s  del: 0.1s, 6.756 KiB/s
    Round 2: up: 0.1s, 9.307 KiB/s  down: 0.1s, 13.157 KiB/s  del: 0.2s, 6.109 KiB/s
  > Average: up: 0.3s, 3.688 KiB/s  down: 0.1s, 6.782 KiB/s  del: 0.2s, 6.603 KiB/s

  4 KiB:
    Round 0: up: 0.1s, 47.491 KiB/s  down: 0.1s, 58.588 KiB/s  del: 0.1s, 27.429 KiB/s
    Round 1: up: 0.1s, 55.791 KiB/s  down: 0.1s, 36.254 KiB/s  del: 0.1s, 29.721 KiB/s
    Round 2: up: 0.1s, 54.227 KiB/s  down: 0.1s, 57.475 KiB/s  del: 0.5s, 8.558 KiB/s
  > Average: up: 0.1s, 52.245 KiB/s  down: 0.1s, 48.348 KiB/s  del: 0.2s, 16.047 KiB/s

  16 KiB:
    Round 0: up: 0.1s, 124.974 KiB/s  down: 0.1s, 228.552 KiB/s  del: 0.1s, 115.020 KiB/s
    Round 1: up: 0.2s, 102.952 KiB/s  down: 0.1s, 202.716 KiB/s  del: 0.2s, 104.260 KiB/s
    Round 2: up: 0.1s, 114.477 KiB/s  down: 0.1s, 192.710 KiB/s  del: 0.2s, 103.293 KiB/s
  > Average: up: 0.1s, 113.420 KiB/s  down: 0.1s, 206.932 KiB/s  del: 0.1s, 107.270 KiB/s

  256 KiB:
    Round 0: up: 0.3s, 732.706 KiB/s  down: 0.2s, 1.071789 MiB/s  del: 0.1s, 1.676716 MiB/s
    Round 1: up: 0.2s, 1.278517 MiB/s  down: 0.1s, 1.832967 MiB/s  del: 0.2s, 1.327880 MiB/s
    Round 2: up: 0.2s, 1.628537 MiB/s  down: 0.1s, 2.523242 MiB/s  del: 0.2s, 1.459682 MiB/s
  > Average: up: 0.2s, 1.073820 MiB/s  down: 0.2s, 1.600086 MiB/s  del: 0.2s, 1.474517 MiB/s

  1 MiB:
    Round 0: up: 0.2s, 4.177527 MiB/s  down: 0.2s, 4.659028 MiB/s  del: 0.1s, 6.964702 MiB/s
    Round 1: up: 0.1s, 6.996722 MiB/s  down: 0.1s, 7.483574 MiB/s  del: 0.2s, 6.141370 MiB/s
    Round 2: up: 0.2s, 6.416095 MiB/s  down: 0.3s, 3.674148 MiB/s  del: 0.1s, 6.958221 MiB/s
  > Average: up: 0.2s, 5.574569 MiB/s  down: 0.2s, 4.835317 MiB/s  del: 0.2s, 6.664798 MiB/s

  4 MiB:
    Round 0: up: 0.3s, 13.005170 MiB/s  down: 0.3s, 14.012869 MiB/s  del: 0.1s, 26.978004 MiB/s
    Round 1: up: 0.2s, 18.244433 MiB/s  down: 0.3s, 14.592259 MiB/s  del: 0.1s, 30.275040 MiB/s
    Round 2: up: 0.2s, 18.937427 MiB/s  down: 0.3s, 14.830360 MiB/s  del: 0.2s, 24.934371 MiB/s
  > Average: up: 0.2s, 16.259350 MiB/s  down: 0.3s, 14.470265 MiB/s  del: 0.1s, 27.222483 MiB/s

  16 MiB:
    Round 0: up: 0.6s, 26.142590 MiB/s  down: 0.9s, 18.781897 MiB/s  del: 0.2s, 106.328421 MiB/s
    Round 1: up: 0.6s, 25.526316 MiB/s  down: 0.8s, 19.362094 MiB/s  del: 0.1s, 112.098273 MiB/s
    Round 2: up: 0.6s, 27.248557 MiB/s  down: 0.9s, 18.342877 MiB/s  del: 0.1s, 109.495415 MiB/s
  > Average: up: 0.6s, 26.286688 MiB/s  down: 0.9s, 18.819734 MiB/s  del: 0.1s, 109.256304 MiB/s

  64 MiB:
    Round 0: up: 2.6s, 24.949625 MiB/s  down: 1.4s, 44.722725 MiB/s  del: 0.1s, 441.513672 MiB/s
    Round 1: up: 2.3s, 28.404804 MiB/s  down: 1.4s, 47.183331 MiB/s  del: 0.1s, 449.441129 MiB/s
    Round 2: up: 2.0s, 32.758203 MiB/s  down: 1.4s, 45.961158 MiB/s  del: 0.2s, 415.910755 MiB/s
  > Average: up: 2.3s, 28.351963 MiB/s  down: 1.4s, 45.933771 MiB/s  del: 0.1s, 435.143147 MiB/s

Test complete


---------- Testing gs ----------

  1 KiB:
    Round 0: up: 0.3s, 3.620 KiB/s  down: 0.1s, 8.228 KiB/s  del: 0.2s, 6.412 KiB/s
    Round 1: up: 0.2s, 6.374 KiB/s  down: 0.1s, 8.622 KiB/s  del: 0.1s, 6.676 KiB/s
    Round 2: up: 0.4s, 2.709 KiB/s  down: 0.1s, 6.962 KiB/s  del: 0.2s, 6.622 KiB/s
  > Average: up: 0.3s, 3.739 KiB/s  down: 0.1s, 7.871 KiB/s  del: 0.2s, 6.568 KiB/s

  4 KiB:
    Round 0: up: 0.2s, 22.057 KiB/s  down: 0.1s, 34.261 KiB/s  del: 0.2s, 25.786 KiB/s
    Round 1: up: 0.2s, 25.325 KiB/s  down: 0.1s, 33.979 KiB/s  del: 0.2s, 25.646 KiB/s
    Round 2: up: 0.2s, 23.270 KiB/s  down: 0.1s, 35.244 KiB/s  del: 0.2s, 26.625 KiB/s
  > Average: up: 0.2s, 23.474 KiB/s  down: 0.1s, 34.486 KiB/s  del: 0.2s, 26.012 KiB/s

  16 KiB:
    Round 0: up: 0.2s, 90.835 KiB/s  down: 0.1s, 137.327 KiB/s  del: 0.2s, 105.143 KiB/s
    Round 1: up: 0.1s, 132.469 KiB/s  down: 0.1s, 146.978 KiB/s  del: 0.1s, 108.598 KiB/s
    Round 2: up: 0.2s, 100.099 KiB/s  down: 0.1s, 136.586 KiB/s  del: 0.2s, 104.986 KiB/s
  > Average: up: 0.2s, 105.086 KiB/s  down: 0.1s, 140.141 KiB/s  del: 0.2s, 106.216 KiB/s

  256 KiB:
    Round 0: up: 0.2s, 1.121066 MiB/s  down: 0.1s, 2.396017 MiB/s  del: 0.1s, 1.698036 MiB/s
    Round 1: up: 0.2s, 1.099685 MiB/s  down: 0.1s, 2.326834 MiB/s  del: 0.1s, 1.695386 MiB/s
    Round 2: up: 0.2s, 1.044684 MiB/s  down: 0.1s, 2.740093 MiB/s  del: 0.2s, 1.505852 MiB/s
  > Average: up: 0.2s, 1.087513 MiB/s  down: 0.1s, 2.475086 MiB/s  del: 0.2s, 1.627933 MiB/s

  1 MiB:
    Round 0: up: 0.2s, 4.115500 MiB/s  down: 0.1s, 9.266190 MiB/s  del: 0.2s, 6.271753 MiB/s
    Round 1: up: 0.2s, 4.372369 MiB/s  down: 0.1s, 8.139601 MiB/s  del: 0.2s, 6.288246 MiB/s
    Round 2: up: 0.3s, 3.042409 MiB/s  down: 0.2s, 4.031639 MiB/s  del: 0.2s, 4.541512 MiB/s
  > Average: up: 0.3s, 3.748221 MiB/s  down: 0.2s, 6.265489 MiB/s  del: 0.2s, 5.569346 MiB/s

  4 MiB:
    Round 0: up: 2.5s, 1.572512 MiB/s  down: 0.2s, 22.024136 MiB/s  del: 0.2s, 25.313898 MiB/s
    Round 1: up: 0.5s, 7.325633 MiB/s  down: 0.2s, 22.963173 MiB/s  del: 0.1s, 27.784530 MiB/s
    Round 2: up: 0.4s, 9.728974 MiB/s  down: 0.2s, 20.401180 MiB/s  del: 0.2s, 23.157118 MiB/s
  > Average: up: 1.2s, 3.427718 MiB/s  down: 0.2s, 21.743937 MiB/s  del: 0.2s, 25.278376 MiB/s

  16 MiB:
    Round 0: up: 0.6s, 26.697733 MiB/s  down: 0.4s, 41.701743 MiB/s  del: 0.2s, 105.310931 MiB/s
    Round 1: up: 0.5s, 31.094341 MiB/s  down: 0.3s, 56.866618 MiB/s  del: 0.2s, 104.422290 MiB/s
    Round 2: up: 0.5s, 35.168471 MiB/s  down: 0.3s, 48.127999 MiB/s  del: 0.2s, 105.386674 MiB/s
  > Average: up: 0.5s, 30.596286 MiB/s  down: 0.3s, 48.121074 MiB/s  del: 0.2s, 105.038134 MiB/s

  64 MiB:
    Round 0: up: 0.6s, 101.201155 MiB/s  down: 0.4s, 146.064032 MiB/s  del: 0.2s, 402.406710 MiB/s
    Round 1: up: 0.7s, 96.182743 MiB/s  down: 0.5s, 119.474566 MiB/s  del: 0.2s, 419.873235 MiB/s
    Round 2: up: 0.8s, 85.246020 MiB/s  down: 0.4s, 154.944588 MiB/s  del: 0.2s, 424.879479 MiB/s
  > Average: up: 0.7s, 93.723820 MiB/s  down: 0.5s, 138.438868 MiB/s  del: 0.2s, 415.493598 MiB/s

Test complete

[root@hbtest ~]# 

For things like deletes that happen quickly or show up as 0.0s, you can use the accompanying bytes/s figure to get an idea of how fast the operation really was.
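For example, in the Amazon S3 results above the 64 MiB deletes average roughly 2 GiB/s, which works back to about 64 MiB ÷ 2048 MiB/s ≈ 0.03s per delete, even though the seconds column rounds to 0.0s.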

1 Like

Thanks again for sharing. I re-executed my previous AWS S3 test and got the same performance as before: ~20s to upload 10*64MB with 4 threads to an S3 US region.

I was thinking about what causes the difference (as you have seen significantly better results). So far I have the following ideas:

  1. 64MB is small enough that inter-region latency may have a higher impact.
  2. It would be better to test with bigger files (also more realistic for backup use cases).
  3. Test results should include all the regional settings (which region and which gateway were used…).
  4. Client-side (or at least server-side) encryption should be turned on with every provider (apples-to-apples).

Did you use the eu1 or us1 Storj gateway? eu1 may have better performance…

HashBackup is not doing anything fancy for this test. It uses Python 2.7.15 and the boto2 library, multipart is disabled, and it uses the standard storage class since none is specified in dest.conf. The VM I’m using is a 512MB, 1-CPU Vultr Cloud Compute instance in Chicago that costs $2.50/mo. Definitely nothing special there!

The quote in the first post says that Storj DCS has “upload/download speeds that are on par (or better) than the big three cloud providers.” For this test, files are being uploaded to and downloaded from several S3-like services. The test data is from /dev/urandom so it can’t be compressed. There is no encryption happening because S3 services don’t normally do encryption.

For uploads, HB is doing:

    key = self.bucket.new_key(keyname)
    key.set_contents_from_filename(pathname, headers, cb=sendcb, num_cb=numcb)  # numcb=1 for this test

For downloads, it does:

    key = self.bucket.new_key(keyname)
    key.get_contents_to_filename(pathname)

For deletes:

    self.bucket.delete_key(keyname)

Here’s the dest.conf file I used for testing:

destname sjs3
#off
type s3
host gateway.us1.storjshare.io
secure
partsize 64m
accesskey xxx
secretkey xxx
bucket hbtest
dir s3dir
workers 1

destname s3
#off
multipart false
type s3
secure
accesskey xxx
secretkey xxx
location us-east-2
bucket hashbackup-us-east-2
dir sjtest
workers 1

destname b2
#off
timeout None
type b2
accountid xxx
appkey xxx
bucket Hashbackupx
dir test
workers 1

DestName gs
#off
Type gs
multipart false
host storage.googleapis.com
Accesskey xxx
Secretkey xxx
Bucket hashbackup
secure
workers 1

For Backblaze, this test is not using the B2 S3 gateway but rather the native B2 API. The others are all using the same boto2 API.

I did have a question about the S3 MT Gateway. As I was looking through the MinIO code, I noticed there is a significant caching layer. Is that being used by Storj? Maybe the other S3 services are doing caching too, I dunno. The download test results are unrealistically fast if the data is coming from a cache on the gateway rather than from storage nodes.

Edit: I noticed I forgot to add multipart false for Storj S3 in dest.conf, so it was actually doing a multipart upload. However, it was doing it with one part and not honoring the fixed partsize (a new feature) for this particular file size.

I re-ran the Storj S3 and Amazon S3 tests. The times are not comparable with the previous test because it was run at night and this one is during the day, but it is a fair comparison between the two services. If my VM is getting throttled (dunno), S3 would be at a disadvantage I think since it is running after the Storj test. I will say, there is a huge amount of variance in the S3 64MB test this time.

[root@hbtest ~]# hb dest -c hb test sjs3 s3
HashBackup #2569 Copyright 2009-2021 HashBackup, LLC
Using destinations in dest.conf

---------- Testing sjs3 ----------

  1 KiB:
    Round 0: up: 0.3s, 2.902 KiB/s  down: 0.1s, 7.687 KiB/s  del: 0.3s, 3.562 KiB/s
    Round 1: up: 0.5s, 1.840 KiB/s  down: 0.8s, 1.293 KiB/s  del: 0.3s, 3.324 KiB/s
    Round 2: up: 0.7s, 1.364 KiB/s  down: 0.2s, 4.008 KiB/s  del: 0.3s, 3.221 KiB/s
  > Average: up: 0.5s, 1.850 KiB/s  down: 0.4s, 2.602 KiB/s  del: 0.3s, 3.363 KiB/s

  4 KiB:
    Round 0: up: 0.6s, 6.461 KiB/s  down: 0.5s, 8.061 KiB/s  del: 0.3s, 13.330 KiB/s
    Round 1: up: 0.3s, 13.047 KiB/s  down: 0.2s, 25.931 KiB/s  del: 0.2s, 21.944 KiB/s
    Round 2: up: 0.7s, 5.695 KiB/s  down: 0.1s, 42.411 KiB/s  del: 0.3s, 15.722 KiB/s
  > Average: up: 0.5s, 7.371 KiB/s  down: 0.2s, 16.112 KiB/s  del: 0.2s, 16.287 KiB/s

  16 KiB:
    Round 0: up: 1.3s, 12.013 KiB/s  down: 0.5s, 33.096 KiB/s  del: 1.4s, 11.146 KiB/s
    Round 1: up: 1.3s, 12.085 KiB/s  down: 0.4s, 43.610 KiB/s  del: 1.1s, 14.201 KiB/s
    Round 2: up: 2.1s, 7.476 KiB/s  down: 0.7s, 22.077 KiB/s  del: 4.0s, 3.972 KiB/s
  > Average: up: 1.6s, 10.008 KiB/s  down: 0.5s, 30.475 KiB/s  del: 2.2s, 7.284 KiB/s

  256 KiB:
    Round 0: up: 1.9s, 134.241 KiB/s  down: 0.6s, 436.224 KiB/s  del: 3.5s, 72.339 KiB/s
    Round 1: up: 1.8s, 145.924 KiB/s  down: 0.4s, 664.690 KiB/s  del: 1.2s, 220.191 KiB/s
    Round 2: up: 1.4s, 189.063 KiB/s  down: 0.4s, 607.361 KiB/s  del: 1.2s, 205.712 KiB/s
  > Average: up: 1.7s, 153.128 KiB/s  down: 0.5s, 551.134 KiB/s  del: 2.0s, 129.163 KiB/s

  1 MiB:
    Round 0: up: 1.4s, 724.633 KiB/s  down: 0.5s, 1.970260 MiB/s  del: 1.3s, 803.454 KiB/s
    Round 1: up: 1.3s, 777.231 KiB/s  down: 0.5s, 1.968806 MiB/s  del: 1.2s, 827.423 KiB/s
    Round 2: up: 1.5s, 691.831 KiB/s  down: 0.6s, 1.640111 MiB/s  del: 1.3s, 817.220 KiB/s
  > Average: up: 1.4s, 729.560 KiB/s  down: 0.5s, 1.845944 MiB/s  del: 1.3s, 815.914 KiB/s

  4 MiB:
    Round 0: up: 1.7s, 2.306163 MiB/s  down: 1.1s, 3.483975 MiB/s  del: 2.3s, 1.735723 MiB/s
    Round 1: up: 2.5s, 1.598370 MiB/s  down: 1.2s, 3.256672 MiB/s  del: 6.0s, 683.117 KiB/s
    Round 2: up: 3.4s, 1.193675 MiB/s  down: 1.3s, 2.981621 MiB/s  del: 3.5s, 1.139901 MiB/s
  > Average: up: 2.5s, 1.581438 MiB/s  down: 1.2s, 3.227616 MiB/s  del: 3.9s, 1.016120 MiB/s

  16 MiB:
    Round 0: up: 3.2s, 5.042793 MiB/s  down: 1.9s, 8.422910 MiB/s  del: 3.8s, 4.196580 MiB/s
    Round 1: up: 3.3s, 4.813956 MiB/s  down: 1.5s, 10.736216 MiB/s  del: 2.8s, 5.687354 MiB/s
    Round 2: up: 3.1s, 5.167349 MiB/s  down: 1.9s, 8.226809 MiB/s  del: 2.6s, 6.230093 MiB/s
  > Average: up: 3.2s, 5.003711 MiB/s  down: 1.8s, 8.997652 MiB/s  del: 3.1s, 5.220758 MiB/s

  64 MiB:
    Round 0: up: 5.1s, 12.466948 MiB/s  down: 3.6s, 17.629435 MiB/s  del: 3.0s, 21.059823 MiB/s
    Round 1: up: 5.2s, 12.403226 MiB/s  down: 2.8s, 22.720971 MiB/s  del: 2.5s, 25.270583 MiB/s
    Round 2: up: 4.8s, 13.377655 MiB/s  down: 3.1s, 20.568541 MiB/s  del: 0.9s, 69.185608 MiB/s
  > Average: up: 5.0s, 12.734106 MiB/s  down: 3.2s, 20.086579 MiB/s  del: 2.2s, 29.553926 MiB/s

Test complete


---------- Testing s3 ----------

  1 KiB:
    Round 0: up: 0.1s, 7.058 KiB/s  down: 0.0s, 47.828 KiB/s  del: 0.0s, 40.587 KiB/s
    Round 1: up: 0.0s, 21.844 KiB/s  down: 0.0s, 47.715 KiB/s  del: 0.0s, 39.282 KiB/s
    Round 2: up: 0.0s, 21.936 KiB/s  down: 0.0s, 40.109 KiB/s  del: 0.0s, 36.357 KiB/s
  > Average: up: 0.1s, 12.873 KiB/s  down: 0.0s, 44.912 KiB/s  del: 0.0s, 38.660 KiB/s

  4 KiB:
    Round 0: up: 0.0s, 111.772 KiB/s  down: 0.0s, 177.779 KiB/s  del: 0.0s, 164.786 KiB/s
    Round 1: up: 0.0s, 111.813 KiB/s  down: 0.0s, 160.405 KiB/s  del: 0.0s, 154.054 KiB/s
    Round 2: up: 0.0s, 107.608 KiB/s  down: 0.0s, 178.308 KiB/s  del: 0.0s, 141.148 KiB/s
  > Average: up: 0.0s, 110.362 KiB/s  down: 0.0s, 171.748 KiB/s  del: 0.0s, 152.715 KiB/s

  16 KiB:
    Round 0: up: 0.0s, 433.629 KiB/s  down: 0.0s, 529.467 KiB/s  del: 0.0s, 669.288 KiB/s
    Round 1: up: 0.0s, 431.011 KiB/s  down: 0.0s, 707.153 KiB/s  del: 0.0s, 597.570 KiB/s
    Round 2: up: 0.0s, 413.713 KiB/s  down: 0.0s, 634.263 KiB/s  del: 0.0s, 611.877 KiB/s
  > Average: up: 0.0s, 425.932 KiB/s  down: 0.0s, 614.824 KiB/s  del: 0.0s, 624.755 KiB/s

  256 KiB:
    Round 0: up: 0.1s, 2.538101 MiB/s  down: 0.0s, 5.997518 MiB/s  del: 0.0s, 9.984822 MiB/s
    Round 1: up: 0.1s, 2.621919 MiB/s  down: 0.1s, 3.487291 MiB/s  del: 0.0s, 10.590072 MiB/s
    Round 2: up: 0.1s, 3.503077 MiB/s  down: 0.0s, 9.991577 MiB/s  del: 0.0s, 11.015148 MiB/s
  > Average: up: 0.1s, 2.827898 MiB/s  down: 0.0s, 5.419315 MiB/s  del: 0.0s, 10.512883 MiB/s

  1 MiB:
    Round 0: up: 0.1s, 10.756391 MiB/s  down: 0.1s, 19.747659 MiB/s  del: 0.0s, 41.184423 MiB/s
    Round 1: up: 0.1s, 10.579121 MiB/s  down: 0.0s, 26.109635 MiB/s  del: 0.0s, 43.142842 MiB/s
    Round 2: up: 0.1s, 12.316771 MiB/s  down: 0.0s, 33.562756 MiB/s  del: 0.0s, 31.202279 MiB/s
  > Average: up: 0.1s, 11.165536 MiB/s  down: 0.0s, 25.266587 MiB/s  del: 0.0s, 37.731681 MiB/s

  4 MiB:
    Round 0: up: 0.1s, 32.538578 MiB/s  down: 0.1s, 57.252307 MiB/s  del: 0.0s, 143.622103 MiB/s
    Round 1: up: 0.1s, 27.201826 MiB/s  down: 0.1s, 68.029714 MiB/s  del: 0.0s, 159.458019 MiB/s
    Round 2: up: 0.1s, 27.104661 MiB/s  down: 0.1s, 67.325112 MiB/s  del: 0.0s, 152.887074 MiB/s
  > Average: up: 0.1s, 28.738657 MiB/s  down: 0.1s, 63.803583 MiB/s  del: 0.0s, 151.708734 MiB/s

  16 MiB:
    Round 0: up: 0.3s, 56.636589 MiB/s  down: 0.2s, 83.164833 MiB/s  del: 0.0s, 508.743501 MiB/s
    Round 1: up: 0.3s, 51.669898 MiB/s  down: 0.2s, 83.108290 MiB/s  del: 0.0s, 573.786009 MiB/s
    Round 2: up: 0.3s, 52.135092 MiB/s  down: 0.2s, 86.323200 MiB/s  del: 0.0s, 648.006643 MiB/s
  > Average: up: 0.3s, 53.389335 MiB/s  down: 0.2s, 84.172301 MiB/s  del: 0.0s, 571.251087 MiB/s

  64 MiB:
    Round 0: up: 0.5s, 120.824512 MiB/s  down: 0.4s, 179.387261 MiB/s  del: 0.0s, 2.363041421 GiB/s
    Round 1: up: 0.6s, 110.260992 MiB/s  down: 0.9s, 67.490472 MiB/s  del: 0.0s, 2.320741521 GiB/s
    Round 2: up: 0.6s, 98.801093 MiB/s  down: 1.9s, 34.033915 MiB/s  del: 0.0s, 2.629934689 GiB/s
  > Average: up: 0.6s, 109.221171 MiB/s  down: 1.1s, 60.272559 MiB/s  del: 0.0s, 2.430492414 GiB/s

Test complete

[root@hbtest ~]# date
Thu Oct 14 17:41:12 UTC 2021
[root@hbtest ~]# 
1 Like

The caching layer isn’t currently used, partly because of the security implications of using it, but we are considering developing secure object caching for “hot” objects. We will most certainly be posting a blueprint here when we get to it. :slight_smile:

2 Likes