The FileZilla testing made one thing apparent to me: upload to Tardigrade needs improvement.
Having to upload almost 3 times the size of the data is a huge roadblock. Upload bandwidth is already limited on asymmetric DSL lines, and Tardigrade makes this disadvantage even worse.
I have no idea if or how this can be improved, but it definitely should be. A user should not have to upload much more than the actual data size, maybe 1.3x or 1.5x, but not 2.7x.
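As far as I understand, the overhead comes from Reed-Solomon erasure coding: a file is split into pieces, only k of them are needed to reconstruct it, but many more than k get uploaded for redundancy. Here is a rough sketch of the arithmetic; the parameters are my assumption based on the numbers I have seen floating around for Storj, not something I have verified:

```python
# Where the ~2.7x could come from, assuming Reed-Solomon erasure coding
# with parameters like (my assumption, not verified): k = 29 pieces are
# enough to reconstruct a file, but around 80 pieces actually get uploaded.

def expansion_factor(k: int, uploaded_pieces: int) -> float:
    """Each piece holds 1/k of the file, so uploading `uploaded_pieces`
    pieces transfers uploaded_pieces / k times the original data size."""
    return uploaded_pieces / k

print(expansion_factor(29, 80))  # ~2.76 -> matches the ~2.7x I saw
print(expansion_factor(29, 39))  # ~1.34 -> roughly the ~1.3x I would hope for
```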
Same goes for downloading, although I am not quite sure how this works currently. If each file can be reconstructed from only the minimum number of pieces, why download the whole thing? But as I do not know how this is implemented, maybe someone can shed some light on whether there is room for improvement here.
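What I would expect, purely as a sketch of the idea and with all numbers made up for illustration, is that download should cost roughly 1x, because with k-of-n erasure coding any k pieces are enough, and extra parallel requests could simply be cancelled once the first k finish:

```python
# Minimal sketch of why download could cost roughly 1x: with k-of-n
# erasure coding, ANY k pieces reconstruct the file, so a client only
# has to fully fetch k pieces, plus maybe a few extra requests that get
# cancelled once the first k finish ("long-tail" cancellation).
# All numbers here are assumptions for illustration.

K_NEEDED = 29        # pieces required for reconstruction (assumed)
EXTRA_REQUESTS = 10  # extra parallel requests to dodge slow nodes (assumed)

requested = K_NEEDED + EXTRA_REQUESTS
completed = K_NEEDED  # the rest are cancelled early, transferring little data

print(f"requested {requested} pieces, "
      f"downloaded ~{completed / K_NEEDED:.1f}x the file size")
```

If it already works like that, great; if not, that seems like an obvious place to save bandwidth.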
But improving upload would be really important.