root@server030:~# time uplink cp ./1G.data sj://test/1g-test-file2
2020-01-29T02:30:27.167+0100 INFO Configuration loaded from: /root/.local/share/storj/uplink/config.yaml
1.00 GiB / 1.00 GiB [------------------------------------------------------------------] 100.00% 18.62 MiB p/s
Created sj://test/1g-test-file2
real 0m55.348s
user 1m41.474s
sys 1m42.360s
Download of the same 1 GB file:
root@server030:~# time uplink cp sj://test/1g-test-file2 /tmp/1gb-test-file02.tmp
2020-01-29T02:33:25.297+0100 INFO Configuration loaded from: /root/.local/share/storj/uplink/config.yaml
1.00 GiB / 1.00 GiB [------------------------------------------------------------------] 100.00% 23.87 MiB p/s
Downloaded sj://test/1g-test-file2 to /tmp/1gb-test-file02.tmp
real 0m43.283s
user 0m43.905s
sys 0m10.110s
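For reference, converting the progress-bar figures to line rates (payload only; upload puts more than this on the wire because of the erasure-coding expansion factor):

upload:   1.00 GiB / 55.348 s ≈ 18.5 MiB/s ≈ 155 Mbit/s
download: 1.00 GiB / 43.283 s ≈ 23.7 MiB/s ≈ 198 Mbit/s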
Random spikes up to 1.4 Gbit/s while uploading and around 1.2 Gbit/s while downloading.
Deleting the 1 GB file:
root@server030:~# time uplink rm sj://test/1g-test-file2
2020-01-29T02:46:46.086+0100 INFO Configuration loaded from: /root/.local/share/storj/uplink/config.yaml
2020-01-29T02:46:46.659+0100 INFO running on version v0.30.5
Deleted sj://test/1g-test-file2
real 0m10.468s
user 0m0.110s
sys 0m0.040s
Looks like uploading is 300-400% faster than before, and deleting is 50% faster. The performance team was just formed this week, so there are many more improvements to come.
So uploading moved payload at roughly 155 Mbps, downloading at roughly 200 Mbps, and deleting ran at about 1.5 segments/s (16 segments at the default 64 MiB segment size in ~10.5 s). I guess there’s still room for improvement, but this is already much better than it was in some other threads. I lack the bandwidth to run these tests myself.
Wondering why the deletion takes so long; all the uplink should do is send the satellite the ID of the file to delete.
It actually sends a request for every segment, or maybe even every piece, that should be deleted. And I think right now the satellite still sends the deletes out to the storage nodes before returning success to the uplink. Those messages are small, but there are a lot of them. I have no doubt more optimization is in the works for this.
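A minimal sketch of why that adds up, assuming one full round trip per segment; the function name, the 300 ms round-trip figure, and the 16-segment count (1 GiB at the default 64 MiB segment size) are illustrative assumptions, not the actual uplink code:

package main

import (
	"fmt"
	"time"
)

// deleteObjectSequential models the current flow as I understand it:
// one uplink-to-satellite round trip per segment, and the satellite
// only acks after fanning the piece deletes out to the storage nodes.
func deleteObjectSequential(segmentCount int, rtt time.Duration) time.Duration {
	var total time.Duration
	for i := 0; i < segmentCount; i++ {
		total += rtt // each segment pays the full round trip
	}
	return total
}

func main() {
	// 16 segments at an assumed ~300 ms per round trip (fan-out to the
	// storage nodes included) already costs 4.8 s, before any transfer.
	fmt.Println(deleteObjectSequential(16, 300*time.Millisecond))
}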
I see. I thought the instant delete was implemented in v0.31.
Did the original “return success to the uplink immediately” fix include the uplink sending just the file ID for delete operations, or will the uplink still send each segment/piece ID for deletion with that fix?
I didn’t dive into the code, but I imagine it would at least be at the segment level, since that’s the entity satellites mostly deal with. Hopefully not individual pieces, though. And I’ve argued before for letting the uplink send a complete list of segments, perhaps even across multiple files, so that only one round trip to the satellite is needed.
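Something like the following request shape is what I have in mind. These are purely hypothetical Go types for the sake of discussion, not the real metainfo protocol messages:

package sketch

// SegmentID identifies one segment of one object; hypothetical shape.
type SegmentID struct {
	StreamID []byte // which object/stream the segment belongs to
	Index    int32  // position of the segment within the stream
}

// BatchDeleteRequest carries every segment of every object being
// deleted, so the whole operation costs one uplink-to-satellite round
// trip. The satellite can ack as soon as its metadata is updated and
// push the piece deletes out to the storage nodes asynchronously.
type BatchDeleteRequest struct {
	Segments []SegmentID
}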
Right, either a segment ID list or a file ID list; deleting segments individually is unacceptable because of the latency. Shouldn’t be too much to change…
Maybe parallel requests would help, but lists are much neater, and it wouldn’t make sense to delete only half the segments in any case; it’s always going to be all of them…
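To make that comparison concrete, here is roughly what parallel per-segment requests would buy, using the same assumed numbers as the earlier sketch: wall-clock time collapses to about one round trip, but the satellite still pays per-request overhead, and a partial failure leaves a half-deleted object:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	const segments = 16           // 1 GiB at 64 MiB per segment (assumed)
	rtt := 300 * time.Millisecond // assumed per-request round-trip time

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < segments; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(rtt) // stand-in for one per-segment delete request
		}()
	}
	wg.Wait()
	// Wall time is ~1 RTT instead of 16, but the satellite still handles
	// 16 separate requests; a single batched list avoids both the extra
	// overhead and the possibility of deleting only some of the segments.
	fmt.Println(time.Since(start))
}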