Synology NAS backup on Tardigrade

How did it work out?

I just set up a simple Hyper Backup task in no time on my Synology. Works like a charm so far. Really easy :slight_smile:

4 Likes

Hi,

I’ve made some tests with a few files to back up (S3 protocol). It was easy to configure and works fine for me.

I will continue the tests for a few days.

Hogion

2 Likes

Hi,

I have succeeded in restoring the backup with the Synology software “Hyper Backup Explorer”.

Easy:

  1. Download the “.hbk” directory from Storj to your local computer (I’ve used WinSCP; see the S3 sketch after this list for an alternative)
  2. Launch Hyper Backup Explorer and open the “.bkpi” file
  3. Explore the backup
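
For step 1, any S3-compatible client should work in place of WinSCP. As a rough sketch (the bucket name, “.hbk” folder name, and credentials below are placeholders, not real values from this thread), boto3 can pull the whole “.hbk” prefix down from the gateway:

```python
# Sketch: download every object under the backup's ".hbk" prefix via the S3 gateway.
# Bucket name, prefix, and credentials are placeholders -- substitute your own.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.eu1.storjshare.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

bucket = "my-backups"    # placeholder bucket name
prefix = "NAS_1.hbk/"    # placeholder .hbk directory name
dest = "restore"         # local target directory

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):          # skip folder markers, if any
            continue
        local_path = os.path.join(dest, key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(bucket, key, local_path)
        print("downloaded", key)
```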

I have also tested the “Backup Explorer” in the “Hyper Backup” package of the NAS. It works fine.

So there are two ways to restore a backup.

Hogion

1 Like

Hello everyone,

I am new here and my goal is to backup my Synology NAS (DS920+ with DSM7 if that matters) on Storj.
For now, my plan is not to provide storage to the network.

I have successfully set up a backup in HyperBackup through the Storj S3 connector, I can do the backups, I can retrieve data, I can check the data integrity, etc.
For now, I am using the free 150 GB offered by Storj as a test. (I am trying to choose between Storj and Coldstack…)

Two things are bothering me though, I hope someone will be able to give me some help.

1/ Upload speeds
I have an FTTH connection (1 Gbps down; 700 Mbps up).

If I upload via Cyberduck (plain files from my laptop), my upload speed is between 50 MB/s and 85 MB/s.
Through Synology CloudSync (plain files from the NAS), it is around 30 MB/s on large files.
Through Duplicati in a Docker container on the NAS (files split by Duplicati), it is around 30-40 MB/s.

And through HyperBackup, using gateway.eu1.storjshare.io as the S3 gateway (files split by HyperBackup), the upload speed is only 15 MB/s.

What can explain such a loss of performance when using HyperBackup, and how can I improve it?

2/ Differences between dashboard metrics and billing
The egress values are almost identical between the dashboard and the billing tab.

However, things are completely different regarding the storage used:
Dashboard: 136.06 GB
Billing: 20.26 GB-month

Is there some kind of bug, or am I not reading the data correctly?

Thanks!

When setting up the backup task, set the part size to 64 MB. That might help, as it aligns the part size with the segment size that Storj uses.
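
Hyper Backup exposes this as a GUI setting, but for anyone scripting uploads against the same gateway, the equivalent idea looks roughly like this with boto3 (a sketch; the bucket, file name, and credentials are placeholders):

```python
# Sketch: align multipart part size with the 64 MB segment size Storj uses.
# Bucket/file names and credentials are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.eu1.storjshare.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# 64 MB parts: each uploaded part roughly fills one Storj segment,
# instead of producing many small segments with extra per-segment overhead.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)

s3.upload_file("backup.part", "my-backups", "backup.part", Config=config)
```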

GB and GB-month are different units. It’s similar to how watts and kilowatt-hours work. 1 GB stored for a month would be 1 GB-month, but 1 GB stored for half a month would be only 0.5 GB-month.

Your 136.06 GB has been stored for less than a month, so the GB-month value is lower.
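
Concretely, with the numbers above and a 30-day month as an approximation (Storj’s exact proration may differ slightly):

```python
# Back-of-the-envelope check of the dashboard vs. billing numbers above.
# Assumes a 30-day month and a constant amount of stored data.
stored_gb = 136.06   # dashboard value
gb_month = 20.26     # billing value

fraction_of_month = gb_month / stored_gb   # ~0.15
days_stored = fraction_of_month * 30       # ~4.5 days

print(f"{stored_gb} GB kept for ~{days_stored:.1f} days "
      f"=> {gb_month} GB-month")
```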

@BrightSilence Thank you for your answer, but I had already set the multipart part size to 64 MB, so that behavior is really strange to me.

I did some tests with Veeam, and the results are amazing.
The tricky part is that you have to make the S3 gateway HTTPS, as it is HTTP by default.
Just generate certificates, place them in APPDATA\storj\minio\certs, and restart gateway.exe.
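
For anyone who would rather script the certificate step, here is a rough sketch using Python’s cryptography package. The public.crt / private.key file names follow the usual MinIO convention, which is assumed to be what the self-hosted gateway’s certs directory expects here; check your gateway version’s docs for the exact directory and file names.

```python
# Sketch: create a self-signed certificate for a locally hosted S3 gateway.
# Assumes MinIO-style naming (public.crt / private.key) as mentioned above.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
now = datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("localhost")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("private.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("public.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```

Drop the two files into the certs directory and restart gateway.exe; Veeam (or any other client) can then talk to the gateway over HTTPS, though it may need to be told to trust the self-signed certificate.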

1 Like

Hi all, I too have now set up my Synology NAS to automatically back up incrementally every day to Storj DCS via the Synology app Hyper Backup using the S3 interface.
It works wonderfully!

I created an access grant that Synology uses to store the data in a bucket.

The bucket has its own bucket password. Now when I go into the bucket via the web interface and use the bucket password, I see all the data that I had put there via the web interface using that bucket password. But I don’t see the data that Synology puts there using the access grant.
Only when I enter the bucket in the web interface using the access grant’s passphrase do I see that data.

Do I misunderstand what a bucket is? So does it matter which password/key I use to enter the bucket? What do you need buckets for then?

This is a backup database consisting of many individual chunk files. Is there a function in the web interface that allows me to download all this data at once to my local PC? It is impractical to download each of the 1000 sub-files individually.

You must use exactly the same encryption phrase that was used during upload to see your objects. With a different encryption phrase your objects cannot be decrypted, and your buckets will look empty.
Moreover, you can use a different encryption phrase for every single object, but you can see and download them back only with “their” encryption phrase.

To download several objects at once you can use the uplink CLI or rclone. If you prefer a GUI, you can use FileZilla, Cyberduck, or S3 Browser.
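
If a few lines of script are acceptable too, listing everything a given credential can see is straightforward against the S3 gateway. A sketch (bucket name and credentials are placeholders): since the S3 credentials are registered from an access grant, only objects uploaded under that grant’s encryption phrase should show up in the listing.

```python
# Sketch: enumerate what one set of gateway credentials can actually see.
# Bucket name and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.eu1.storjshare.io",
    aws_access_key_id="YOUR_ACCESS_KEY",      # derived from one access grant
    aws_secret_access_key="YOUR_SECRET_KEY",
)

bucket = "my-backups"   # placeholder
count = 0
total_bytes = 0

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

# Objects uploaded under a different encryption phrase simply do not appear here.
print(f"{count} visible objects, {total_bytes / 1e9:.2f} GB")
```

From there the same paginator loop can feed download_file calls for the chunk files, or rclone can do the equivalent bulk copy with a single sync command.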

Hmmm… disturbing. I’d prefer to see the encrypted file listed but not be able to decrypt it due to the wrong passphrase. But okay, I understand. Thanks for confirming what I already suspected.
Does this only apply to the web interface? Can I see ALL objects with one of the S3 browsers you mentioned? Or do they also only show the files that are encrypted with a specific key?

For what purpose, then, is there the possibility to create more than one bucket? As I understand it so far, I only need ONE bucket to start with and can then put as many objects as I like into “virtual sub-buckets” using different passphrases (or access grants). I can’t quite get my mind around this.

But thanks, Alexey, for listing some S3 browser apps! A whole new world for me.

It really depends on the use case. If you are a single user, or maybe the admin of a bucket, I agree.
But imagine you have several independent users with different passphrases all in one bucket. Then it would be a mess if everyone could see the other users’ files, even if encrypted. I think hiding them, i.e. not displaying them, is the only viable choice then.

We had a nice little discussion about that recently:

1 Like

The encryption phrase must be the same in all variations of access grants for your project if you want to see the uploaded objects in any program/tool that uses these access grants.
Just think of it as a private key - without the private key you cannot access your data.
A different encryption phrase can decrypt only the objects uploaded with it.