Two more [Tech Previews]! - RClone and Restic

In addition to the recent Tech Preview of the native QNAP application (Backup on QNAP NAS [Tech Preview] - Testers Needed), we have also released Tech Preview support for the RClone and Restic tools.

We posted a How-To for RClone in the Tardigrade documentation here.

Rclone, often referred to as rsync for cloud storage, enables developers to map files and directories from their filesystem directly to Tardigrade through a command-line tool. It supports useful commands such as copy, sync, tree, cat, and rcat.
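As a quick illustration (the remote and bucket names below are hypothetical, and assume a remote called `tardigrade` was already set up with `rclone config`), the commands mentioned above look like this:

```shell
# Copy local files to a bucket (skips files that already match)
rclone copy ~/Documents tardigrade:my-bucket/documents

# Mirror a directory, deleting remote files that no longer exist locally
rclone sync ~/Photos tardigrade:my-bucket/photos

# Show the remote directory structure
rclone tree tardigrade:my-bucket

# Stream an object to stdout / stream stdin into a new object
rclone cat tardigrade:my-bucket/notes.txt
echo "hello" | rclone rcat tardigrade:my-bucket/hello.txt
```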

As described below, Rclone can also be used with Restic to provide regular backup scheduling to the decentralized cloud.
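As a sketch of that scheduling idea (the repository path, remote name, and password file below are all illustrative), a crontab entry driving restic over its rclone backend could look like:

```shell
# Hypothetical crontab entry: nightly backup at 02:00 via restic's rclone backend
0 2 * * * RESTIC_PASSWORD_FILE=/etc/restic/password restic -r rclone:tardigrade:restic-backups backup /home/user/Documents
```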

Get started testing Restic with Tardigrade here.

Restic is an easy-to-use developer tool for automating backups. It is widely adopted by system administrators around the world for its speed, efficiency, and cross-platform compatibility.

To back up your data, restic creates and manages snapshots. Each snapshot records a set of files and directories, which are stored in a set of data files known as a repository. Backups can be mounted and browsed, and the data in each snapshot is de-duplicated before it is stored in the repository.
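A hedged sketch of that snapshot workflow (remote and repository names are illustrative; restic's rclone backend uses the `rclone:<remote>:<path>` repository syntax):

```shell
# Create the repository (prompts for a repository password)
restic -r rclone:tardigrade:restic-backups init

# Take a snapshot of a directory; data is de-duplicated before storage
restic -r rclone:tardigrade:restic-backups backup ~/Documents

# List the snapshots recorded so far
restic -r rclone:tardigrade:restic-backups snapshots

# Mount the repository to browse snapshots (requires FUSE)
restic -r rclone:tardigrade:restic-backups mount /mnt/restic
```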

Let us know what you think!

3 Likes

This is exciting for sure! I had tested out earlier versions of this. I’ve now downloaded the latest binary and am testing my full upload again. Thanks @keleffew!

1 Like

Let us know how it goes! I’m interested to see if the performance is better now that we’ve switched over to libuplink 1.x

2 Likes

Copying 35 GB from S3 to Tardigrade. Upload is only 600 Kb/s while I have a 100 Mbit connection. Maybe it's because it's over WiFi. CPU load is 0, which surprises me since I was under the impression that the files get encrypted on my device.

That does seem a bit low. Can you try it with Ethernet?

Will try a sync in 15 hours when it’s finished

I read that rclone does not copy files that have not changed, so I canceled and tried over LAN. The speed is now 900–1000 Kb/s, and CPU load is between 10–40% for the rclone task.
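For reference, rclone's default change detection compares file size and modification time, so re-running a sync only transfers what changed; `rclone check` can verify a destination without transferring data (the remote name here is illustrative):

```shell
# Re-running sync transfers only changed files (size + modtime comparison)
rclone sync ~/Pictures tardigrade:rclone

# Compare source and destination without transferring file data
rclone check ~/Pictures tardigrade:rclone
```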

I’m testing out rclone by itself right now, without restic on top. I’m trying to hit it hard with --transfers 50.

rclone --verbose -P -L sync --transfers 50 ~/Pictures tardigrade:rclone

So far so good! I’m getting practically line rate on my 750/750 fiber connection, given other bandwidth intensive things running (including my Storj nodes). I believe rclone shows the effective transfer rate, taking into account the erasure coding that the Storj uplink backend does. My router is definitely indicating the higher raw transfer rate.

Transferred:       19.683G / 31.845 GBytes, 62%, 18.508 MBytes/s, ETA 11m12s
Errors:                92 (retrying may help)
Checks:              1436 / 1436, 100%
Transferred:          825 / 7743, 11%
Elapsed time:      18m9.0s
Transferring:

There are errors, but I’m sure another sync would help, perhaps with fewer simultaneous transfers.

I’ll test out restic with the same dataset after this.

3 Likes

You might want to increase the number of transfers if there is a large difference between the transfer speed and your internet maximum.

I tried with 10 transfers, but the total number shown did not change.

I have some Errors:
Failed to copy: uplink: segment error: ecclient error: successful puts (79) less than success threshold (80)

Failed to copy: uplink: segment error: ecclient error: successful puts (29) less than or equal to repair threshold (35)

What do they mean?

I thought the first one wouldn't be a problem anymore. The uplink tries to upload at least 80 RS pieces to nodes before it cuts off the remaining transfers. You only need 29 pieces to restore the file.

The second error is more serious. The network repairs a segment when the number of pieces drops below 35, which is still safely above the 29-piece minimum needed to recover the data. This is called the repair threshold, and an upload that can't store at least 35 pieces should not be accepted.
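Plugging in the numbers from the explanation above (29-piece minimum, 35-piece repair threshold, 80-piece success threshold, as stated in this thread), the upload overhead relative to the minimum works out to roughly 2.7x, which matches the raw-vs-effective rate difference reported elsewhere in the thread:

```shell
MIN=29      # pieces needed to reconstruct a segment
REPAIR=35   # repair threshold: below this, the network repairs the segment
SUCCESS=80  # success threshold the uplink aims to reach before stopping

# Expansion factor if all 80 pieces are uploaded, relative to the 29 needed
FACTOR=$(awk "BEGIN { printf \"%.2f\", $SUCCESS / $MIN }")
echo "$FACTOR"   # prints 2.76
```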

I was under the impression that uploads between the repair and success threshold would succeed now and no longer return an error, but I could be wrong.

I’ve now completed a few tests with restic with the same ~31GB dataset as earlier. This is a Pictures folder that has a bunch of raw CR2 files, as well as a large number of small files associated with metadata from an old photo manager.

The combined results are in the following table. Note that restic chunk sizes range from 4-6 MB for this dataset.

 #   restic Target                    rclone Target                        Time (H:MM:SS)   Average Rate (MB/s)
 1   rclone                           Tardigrade backend                   2:12:22          4.06
 2   rclone                           S3 backend over Tardigrade Gateway   2:11:53          4.07
 3   local, followed by rclone sync   Tardigrade backend                   2:24:52          3.71

Just to explain Test 3 above: it consisted of using restic with a local endpoint to create the repo, and then using rclone sync to sync that local restic repo to Tardigrade. Also, in all cases the transfer rate shown is the effective rate; the raw transfer rate is ~2.7x the effective rate due to erasure coding.
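A sketch of the Test 3 procedure described above (the paths and remote name are illustrative, not the poster's actual setup):

```shell
# Step 1: back up to a local restic repository
restic -r /backups/restic-local backup ~/Pictures

# Step 2: mirror the finished local repository to Tardigrade
rclone sync /backups/restic-local tardigrade:restic-repo
```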

Looks like the uplink is performing similarly across the different methods, which is good to see. I do wish it was a little faster, but with the local caching that restic does, subsequent incremental backups are much faster.

1 Like

Strange that it was so slow for me (800–1000 Kb/s).
I also tested copying from an external HDD (USB 2) to Tardigrade, and it had the same speed.
I also switched to my more powerful desktop PC; same result.

Could it be my router that can’t handle so many connections?

Hi tylkomat - There are a number of environmental conditions that can impact the performance of the platform. Your upstream bandwidth is probably the most important factor, but it's also possible that the number of concurrent connections allowed by your router could be an issue if there is a constraint on your network; that's not something we've seen with customers so far, though. Distributed storage performs best with larger file sizes and highly available bandwidth (consistent, fast broadband connections).

1 Like

Like @fmoledina I was testing with pictures.
I use Europe-West Satellite, maybe that one has some really slow nodes.

Sometimes files transfer at only a few bytes/s until they fail, or they don't fail and instead block the rest of the files.

[screenshot]
The two images have been stuck like that for multiple minutes.

Hello,

I also have problems uploading files with rclone. With small files of around 100 or 200 KB everything works fine, but with larger files I get the same error as tylkomat:

“Failed to copy: uplink: segment error: ecclient error: successful puts (63) less than success threshold (80)”

…and then the upload traffic drops. I am using Windows 10 Pro with rclone v1.52.0.

What can I do?

1 Like

The developer team is working on it. While we are waiting for a fix, there are a few things you can do to avoid the bug:

  1. --transfers 1 — the default is 4; reducing it to 1 lets the single transfer run up to 4 times faster, which makes it less likely that a transfer becomes too slow.
  2. Avoid using QoS. Instead, limit the maximum speed with --bwlimit. That seems to work better and allows even lower transfer speeds without running into the error.
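Combining both workarounds in one invocation (the bucket name is the one used earlier in the thread; the 4M cap is just an example value):

```shell
# One transfer at a time, capped at 4 MiB/s instead of relying on router QoS
rclone sync --transfers 1 --bwlimit 4M ~/Pictures tardigrade:rclone
```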
5 Likes

3 posts were split to a new topic: I have experienced a large number of orphaned tcp connections, timeouts, and syslog errors with rclone

I confirm that restic via rclone is working fine.

I am using it for a small daily backup (~1 GB) and it works perfectly.

1 Like

Tried RClone v1.58.0 + storj (native integration). Here is my feedback:

  • Upload is not stable: multiple errors (successful puts less than success/repair threshold) and unexpected speed drops (gradually falling from 8 MiB/s to 0 B/s). Additionally, RClone seems to struggle with many small files, as the error rate increases considerably; this can be mitigated by zipping the small files into an archive.
  • Download is good. Fast. No errors.
2 Likes