I have recently done some testing using Tardigrade as a backend for
Duplicati backups, and I have also compared it with simple uploads of large-ish archives.
During these tests I monitored metrics such as the traffic through the network interface on the executing host.
This data shows that I need about 9.5 h to upload about 10 GB of data to
sj://mybucket, while network upload traffic was consistently about 1.2 MB/s during this time (which amounts to approximately 40 GB, or 4 times the data size stored in the bucket).
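To make the arithmetic explicit, here is the quick back-of-the-envelope check behind that factor (all values are just the rounded figures quoted above):

```python
# Back-of-the-envelope check of the observed upload overhead.
upload_rate_mb_s = 1.2    # observed NIC upload rate, MB/s
duration_h = 9.5          # total upload time, hours
data_stored_gb = 10.0     # logical data stored in the bucket, GB

data_uploaded_gb = upload_rate_mb_s * duration_h * 3600 / 1024
expansion = data_uploaded_gb / data_stored_gb

print(f"uploaded ~{data_uploaded_gb:.1f} GB, factor f ~= {expansion:.1f}")
# -> uploaded ~40.1 GB, factor f ~= 4.0
```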
I understand this must be due to the upload of redundant data to ensure data durability.
However, the whitepaper leads me to expect something more in the range of a factor of 1.5 (with
f = data_uploaded / data_stored), assuming a sufficiently large number of nodes. Must I conclude from this that there are "not enough" nodes in the Tardigrade network to achieve this efficiency, or is there another explanation?
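For context, here is my (possibly naive) mental model of where such a factor would come from. Under Reed-Solomon-style erasure coding, if k pieces suffice to reconstruct a segment and n pieces are actually uploaded, the expansion is simply n/k. The parameter values below are purely hypothetical, chosen only to illustrate the two factors in question, not taken from the whitepaper:

```python
# Sketch of the upload expansion under erasure coding.
# k = pieces needed to reconstruct a segment, n = pieces uploaded.
# The values below are hypothetical, purely for illustration.
def expansion(k: int, n: int) -> float:
    """Upload expansion factor f = data_uploaded / data_stored = n / k."""
    return n / k

print(expansion(k=20, n=30))   # f = 1.5, the factor I expected
print(expansion(k=20, n=80))   # f = 4.0, roughly what I measured
```

So my question boils down to: what effective n/k (or other overhead) am I actually seeing on the wire?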
TIA for any insights, both practical and theoretical.