Amazon S3 vs. Tardigrade

Upload of a 1 GB file (generated with openssl rand):

root@server030:~# time uplink cp ./ sj://test/1g-test-file2                                     
2020-01-29T02:30:27.167+0100    INFO    Configuration loaded from: /root/.local/share/storj/uplink/config.yaml
1.00 GiB / 1.00 GiB [------------------------------------------------------------------] 100.00% 18.62 MiB p/s
Created sj://test/1g-test-file2

real    0m55.348s
user    1m41.474s
sys     1m42.360s

Download of the same 1 GB file:

root@server030:~# time uplink cp sj://test/1g-test-file2 /tmp/1gb-test-file02.tmp  
2020-01-29T02:33:25.297+0100    INFO    Configuration loaded from: /root/.local/share/storj/uplink/config.yaml
1.00 GiB / 1.00 GiB [------------------------------------------------------------------] 100.00% 23.87 MiB p/s
Downloaded sj://test/1g-test-file2 to /tmp/1gb-test-file02.tmp

real    0m43.283s
user    0m43.905s
sys     0m10.110s

Random spikes up to 1.4 Gbit/s while uploading and around 1.2 Gbit/s while downloading.
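As a sanity check, the average rates shown by the progress bar can be reproduced from the wall-clock times above (a minimal sketch; only the unit conversions are assumed):

```python
GIB = 1024**3  # 1 GiB in bytes

def mib_per_s(bytes_total, seconds):
    """Average throughput in MiB/s over the wall-clock time."""
    return bytes_total / seconds / 1024**2

def to_mbit(mib_s):
    """Convert MiB/s to line rate in Mbit/s."""
    return mib_s * 1024**2 * 8 / 1e6

up = mib_per_s(GIB, 55.348)    # upload:   ~18.5 MiB/s, matching the 18.62 MiB p/s shown
down = mib_per_s(GIB, 43.283)  # download: ~23.7 MiB/s, matching the 23.87 MiB p/s shown

print(f"upload:   {up:.1f} MiB/s ≈ {to_mbit(up):.0f} Mbit/s")
print(f"download: {down:.1f} MiB/s ≈ {to_mbit(down):.0f} Mbit/s")
```

Note the average line rate (~155 / ~198 Mbit/s) is far below the observed 1.2-1.4 Gbit/s spikes, which come from the many parallel piece transfers.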

Deleting the 1 GB file:

root@server030:~# time uplink rm  sj://test/1g-test-file2      
2020-01-29T02:46:46.086+0100    INFO    Configuration loaded from: /root/.local/share/storj/uplink/config.yaml
2020-01-29T02:46:46.659+0100    INFO    running on version v0.30.5
Deleted sj://test/1g-test-file2

real    0m10.468s
user    0m0.110s
sys     0m0.040s

Test setup:
  • 4 x E5-4669 v4
  • 512 GB RAM
  • Intel DC P3700 NVMe
  • 10 Gbit/s Internet
  • OS: Ubuntu 18.04.3 LTS
  • Satellite: Europe-West-1
  • Location: Denmark

It might also be useful to see a 10 GB file speed test for comparison, @flo

Compared to my last test, upload has improved a lot; downloading the file took a little longer:

Uploading 10 GB:
Tardigrade: 296.5s

Downloading 10 GB:
Tardigrade: 535.5s

Deleting one 10 GB file:
Tardigrade: 43.3s

Looks like uploading is 300-400% faster, and deleting is 50% faster. The performance team was just formed this week, so many more improvements to come.

Some of it should be downloading in your words?

Uploading big files to Tardigrade seems to take longer than uploading smaller files with the same total size.

So uploading was 270 Mbps, downloading 150 Mbps, and deleting 3.6 segments/s. I guess there’s still room for improvement, but this is already much better than it was in some other threads. I lack the bandwidth to do these tests myself.
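The quoted rates can be reproduced from the 10 GB timings above. This is a back-of-envelope sketch that assumes the "10 GB" figure is decimal (10^10 bytes) and the default 64 MiB segment size; both are assumptions, not stated in the test:

```python
import math

SIZE_BYTES = 10 * 10**9          # assumed: decimal 10 GB
SEGMENT = 64 * 1024**2           # assumed: default 64 MiB segment size

def mbps(seconds):
    """Average rate in Mbit/s for the whole transfer."""
    return SIZE_BYTES * 8 / seconds / 1e6

upload_mbps = mbps(296.5)        # ~270 Mbps
download_mbps = mbps(535.5)      # ~149 Mbps

segments = math.ceil(SIZE_BYTES / SEGMENT)   # ~150 segments
delete_rate = segments / 43.3                # ~3.5 segments/s

print(f"{upload_mbps:.0f} Mbps up, {download_mbps:.0f} Mbps down, "
      f"{segments} segments deleted at {delete_rate:.1f}/s")
```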

Wondering why the deletion takes so long; all the uplink should need to do is send the satellite the ID of the file to delete.

It actually sends data for every segment or maybe even piece that should be deleted. And I think right now the satellite still sends out the deletes to the nodes before returning a success to the uplink. Although those messages are small, it’s still a lot of them. I have no doubt more optimization is in the works for this.

I see. I thought the instant delete was implemented with v0.31.
Did the original “return success to uplink immediately” fix include the uplink sending just the file ID for delete operations or is the uplink still going to send each segment/piece ID for deletion with that fix?

I didn’t dive into the code. But I imagine it would at least be at the segment level, since that’s the entity satellites mostly deal with. Hopefully not individual pieces, though. And I’ve argued before for giving the uplink the option to send a complete list of segments, perhaps even across multiple files, so that only one round trip to the satellite is needed.


Right, either a segment ID list or a file ID list; any kind of individual segment deletion is unacceptable due to latency. Shouldn’t be too much to change…

Maybe parallel requests would help, but lists are much neater, and it wouldn’t make sense to delete only half the segments in any case; it’s always going to be all the segments…
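The latency argument can be made concrete with a toy model. The RTT and segment count below are hypothetical, chosen only to show how per-segment round trips scale linearly with file size while a single list-based delete does not:

```python
# Illustrative model (hypothetical numbers, not from the thread):
# each satellite request costs at least one round-trip time.
RTT_MS = 50      # assumed uplink-to-satellite round-trip time
SEGMENTS = 160   # e.g. a 10 GiB file at 64 MiB per segment

# One request per segment: latency grows with the number of segments.
per_segment_s = SEGMENTS * RTT_MS / 1000

# One request carrying the full segment list: a single round trip.
batched_s = 1 * RTT_MS / 1000

print(f"per-segment deletes: {per_segment_s:.1f}s, batched list: {batched_s:.2f}s")
```

Under these assumptions the per-segment approach spends 8 seconds on round trips alone, which is in the same ballpark as the delete times measured above.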

Not sure what you mean.

You said uploading twice.