Question on HashBackup

@hashbackup
Can I ask you a question regarding HashBackup?

Is it possible to back up from a remote storage path to local storage? If yes, what would be the best way to do this on Linux?

Hi - no, sorry, it doesn't do that today; it backs up to cloud storage, not from it. -Jim

I'll take that back, sort of. If you can get rclone mount to mount your remote storage as a local filesystem (this requires FUSE), you could use HashBackup to back up the mounted filesystem.

I just tried it here for the first time and it worked, sort of. It was extremely slow. The rclone mount --help screen is absolutely huge, so maybe there are lots of opportunities to optimize it.

Here is a session backing up my Storj bucket hbtest. The destination is also within this bucket, which is really weird! If you just want a local copy, you don't need a dest.conf file. If you want to back up to some other storage server, use a dest.conf (that's where you tell HashBackup where to send the backup).
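For reference, the sjs3 destination in this session is defined in dest.conf as an S3-compatible destination. From memory it looks roughly like this - the keys and host are placeholders, and you should double-check the exact keyword names against the dest.conf doc for your storage type:

destname sjs3
type s3
host gateway.storjshare.io
accesskey <your access key>
secretkey <your secret key>
bucket hbtest
dir s3dir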

[root@hbtest ~]# uplink ls sj:/hbtest/
OBJ 2021-10-09 00:12:43     70204944 big70d
OBJ 2021-10-04 17:36:31     70204944 big70a
OBJ 2021-09-18 18:11:55       245011 titleapp.pdf
OBJ 2021-10-10 14:38:45     10485760 test10
OBJ 2021-10-04 17:36:57     70204944 big70b
OBJ 2021-10-04 17:37:44     70204944 big70c
PRE site/
PRE s3dir/
[root@hbtest ~]# rclone mount sjs3:hbtest mnt
^C^Z
[1]+  Stopped                 rclone mount sjs3:hbtest mnt
[root@hbtest ~]# kill %1

[1]+  Stopped                 rclone mount sjs3:hbtest mnt
[root@hbtest ~]# rclone mount sjs3:hbtest mnt --daemon
[1]+  Exit 130                rclone mount sjs3:hbtest mnt
[root@hbtest ~]# 
[root@hbtest ~]# ps auxw|grep rclone
root     11681  0.3  4.3 746436 21632 ?        Ssl  18:30   0:00 rclone mount sjs3:hbtest mnt --daemon
root     11697  0.0  0.1 112812   976 pts/0    R+   18:31   0:00 grep --color=auto rclone
[root@hbtest ~]# ls -l mnt
total 284720
-rw-r--r-- 1 root root 70204944 Oct  4 17:36 big70a
-rw-r--r-- 1 root root 70204944 Oct  4 17:36 big70b
-rw-r--r-- 1 root root 70204944 Oct  4 17:37 big70c
-rw-r--r-- 1 root root 70204944 Oct  9 00:12 big70d
drwxr-xr-x 1 root root        0 Oct 14 18:31 s3dir
drwxr-xr-x 1 root root        0 Oct 14 18:31 site
-rw-r--r-- 1 root root 10485760 Oct 10 14:38 test10
-rw-r--r-- 1 root root   245011 Sep 18 18:11 titleapp.pdf
[root@hbtest ~]# hb config -c hb dedup-mem 128K
HashBackup #2569 Copyright 2009-2021 HashBackup, LLC
Backup directory: /root/hb
Current config version: 0

Set dedup-mem to 128K (was 0) for future backups
[root@hbtest ~]# hb backup -c hb mnt
HashBackup #2569 Copyright 2009-2021 HashBackup, LLC
Backup directory: /root/hb
Backup start: 2021-10-14 18:32:10
Using destinations in dest.conf
Warning: destination is disabled: s3
Warning: destination is disabled: b2
Warning: destination is disabled: gs
This is backup version: 0
Shrinking dedup table
Sizing backup for dedup
/
/root
/root/hb
/root/hb/inex.conf
/root/mnt
/root/mnt/big70a
/root/mnt/big70b
^C
[root@hbtest ~]# hb backup -c hb mnt -p1
HashBackup #2569 Copyright 2009-2021 HashBackup, LLC
Backup directory: /root/hb
Backup start: 2021-10-14 18:34:27
Using destinations in dest.conf
Warning: destination is disabled: s3
Warning: destination is disabled: b2
Warning: destination is disabled: gs
Removed arc.0.1 from sjs3
This is backup version: 1
Sizing backup for dedup
Updating dedup information
/
/root
/root/hb
/root/mnt/big70a
/root/mnt/big70b
/root/mnt/big70c
/root/mnt/big70d
/root/mnt/s3dir
/root/mnt/s3dir/DESTID
/root/mnt/s3dir/arc.0.0
/root/mnt/s3dir/arc.0.1
/root/mnt/s3dir/dest.db
/root/mnt/s3dir/hb-113058.tmp
/root/mnt/s3dir/hb-147834.tmp
/root/mnt/s3dir/hb-204460.tmp
/root/mnt/s3dir/hb-214072.tmp
/root/mnt/s3dir/hb-230227.tmp
/root/mnt/s3dir/hb-236551.tmp
/root/mnt/s3dir/hb-27782.tmp
/root/mnt/s3dir/hb-341387.tmp
/root/mnt/s3dir/hb-358462.tmp
/root/mnt/s3dir/hb-377013.tmp
/root/mnt/s3dir/hb-393804.tmp
/root/mnt/s3dir/hb-414755.tmp
/root/mnt/s3dir/hb-472185.tmp
/root/mnt/s3dir/hb-477907.tmp
/root/mnt/s3dir/hb-488528.tmp
/root/mnt/s3dir/hb-516943.tmp
/root/mnt/s3dir/hb-537786.tmp
/root/mnt/s3dir/hb-563589.tmp
/root/mnt/s3dir/hb-572807.tmp
/root/mnt/s3dir/hb-616212.tmp
/root/mnt/s3dir/hb-619702.tmp
/root/mnt/s3dir/hb-659876.tmp
/root/mnt/s3dir/hb-671086.tmp
/root/mnt/s3dir/hb-693566.tmp
/root/mnt/s3dir/hb-752701.tmp
/root/mnt/s3dir/hb-796510.tmp
/root/mnt/s3dir/hb-799255.tmp
/root/mnt/s3dir/hb-807371.tmp
/root/mnt/s3dir/hb-812864.tmp
/root/mnt/s3dir/hb-869443.tmp
/root/mnt/s3dir/hb-871975.tmp
/root/mnt/s3dir/hb-872085.tmp
/root/mnt/s3dir/hb-893977.tmp
/root/mnt/s3dir/hb-920615.tmp
/root/mnt/s3dir/hb-934676.tmp
/root/mnt/s3dir/hb-96048.tmp
/root/mnt/s3dir/hb-988833.tmp
/root/mnt/s3dir/hb.db.0
/root/mnt/site
/root/mnt/site/index.html
/root/mnt/test10
/root/mnt/titleapp.pdf
/root/mnt
Copied arc.1.0 to sjs3 (62 MB 5s 12 MB/s)
Waiting for destinations: sjs3
Copied arc.1.1 to sjs3 (27 MB 5s 4.6 MB/s)
Writing hb.db.0
Copied hb.db.0 to sjs3 (54 KB 2s 24 KB/s)
Copied dest.db to sjs3 (36 KB 2s 15 KB/s)

Time: 717.5s, 11m 57s
CPU:  68.4s, 1m 8s, 9%
Wait: 10.5s
Mem:  79 MB
Checked: 56 paths, 363040734 bytes, 363 MB
Saved: 55 paths, 363040630 bytes, 363 MB
Excluded: 0
Dupbytes: 272971363, 272 MB, 75%
Compression: 75%, 4.0:1
Efficiency: 3.80 MB reduced/cpusec
Space: +90 MB, 152 MB total
No errors
[root@hbtest ~]# 

I experimented with some rclone options and got the backup time down to 140s with:

[root@hbtest ~]# rclone mount sjs3:hbtest mnt --daemon --vfs-read-chunk-size 4m --read-only --poll-interval 0 --attr-timeout 24h --dir-cache-time 24h

A subsequent (incremental) backup:

[root@hbtest ~]# hb backup -c hb mnt -p1
HashBackup #2569 Copyright 2009-2021 HashBackup, LLC
Backup directory: /root/hb
Backup start: 2021-10-14 20:19:00
This is backup version: 1
Dedup enabled, 7% of current size, 7% of max size
/
/root
/root/hb

Time: 0.1s
CPU:  0.1s, 99%
Mem:  59 MB
Checked: 56 paths, 363040734 bytes, 363 MB
Saved: 3 paths, 0 bytes, 0 bytes
Excluded: 0
No errors

Caching the directory and attributes for 24h may not work if you plan to leave rclone running, modify the remote data, and then do incremental backups.
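If that becomes a problem, one workaround (just a sketch, I haven't timed it) is to remount right before each incremental backup so the directory and attribute caches start out fresh:

# drop the old mount, remount with the same options, then run the incremental backup
fusermount -u mnt
rclone mount sjs3:hbtest mnt --daemon --vfs-read-chunk-size 4m --read-only --poll-interval 0 --attr-timeout 24h --dir-cache-time 24h
hb backup -c hb mnt -p1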

Thanks for the info.
In my case the remote storage would be another server, and I thought I might mount the remote folder locally with SSHFS and then see if HashBackup would accept that as a 'local' source.

The destination would be purely local. The idea is basically just to reverse the backup flow from local backup → remote to remote backup → local. Of course, it would be neat if I could simply put something like user@remotehost:/sourcedir as the source in the HashBackup backup command and have it ssh to the remote location itself.
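So the rough plan, if SSHFS works as a source (just a sketch with made-up paths, I haven't tried it yet):

# mount the remote folder read-only over SSH, back it up locally, then unmount
sshfs -o ro user@remotehost:/sourcedir /mnt/remote-src
hb backup -c /backups/hb /mnt/remote-src
fusermount -u /mnt/remote-src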