Rename the generated files to public.crt and private.key.
Copy these two files to the gateway’s certificate directory. It should be ~/.local/share/storj/gateway/minio/certs/
You can change the minio configuration directory using the --minio.dir flag with the gateway run command.
Run the gateway again and it will now be accessible via HTTPS instead of HTTP. No additional flags are required.
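In case you want to script the copy step, here is roughly what it looks like as a small Python sketch (this assumes the default certs directory mentioned above; adjust the path if you used --minio.dir):

```python
# Sketch: install the renamed certificate files into the gateway's default
# minio certs directory (change certs_dir if your gateway uses another one).
from pathlib import Path
import shutil

certs_dir = Path.home() / ".local/share/storj/gateway/minio/certs"
certs_dir.mkdir(parents=True, exist_ok=True)  # create the directory if missing

for name in ("public.crt", "private.key"):
    shutil.copy(name, certs_dir / name)

print("Installed certificates in", certs_dir)
```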
At the moment you can do it with Duplicati: https://documentation.tardigrade.io/how-tos/backup-with-duplicati
The S3 protocol features needed by the native Synology backup tool are not publicly available yet. The publicly hosted S3 gateway is under active development, so stay tuned!
Hello to all,
have there been any updates on this topic? If I understand it correctly, the S3 Gateway has received some major updates recently. Can Synology users make an offsite backup to Storj DCS?
Thanks!
I’m not near a system with access to my Synology at the moment. But it’s basically just creating a bucket and an access grant in the satellite web UI, and at the end of the process clicking the option to get gateway credentials. That should give you the URL, access key, and secret. On the Synology you pick S3 storage and select the custom S3 setup, then fill in the info that was supplied in the satellite UI. If it asks about part size, pick 64 MB if possible. That should do it. If you get stuck somewhere, please share a screenshot of what’s not working.
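If you want to double-check the gateway credentials before pointing HyperBackup at them, any S3 client works; here is a small boto3 sketch (the endpoint, access key, and secret below are placeholders, use whatever the satellite UI gave you):

```python
import boto3

# Placeholders: fill in the credentials shown in the satellite UI.
s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.eu1.storjshare.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Listing buckets is a quick way to confirm the key and secret work.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```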
Many thanks! I really appreciate that you took the time to write a short description, and I am glad the compatibility is now there. I understood your steps!
I am new here and my goal is to backup my Synology NAS (DS920+ with DSM7 if that matters) on Storj.
For now, my plan is not to provide storage to the network.
I have successfully set up a backup in HyperBackup through the Storj S3 connector, I can do the backups, I can retrieve data, I can check the data integrity, etc.
For now, I am using the free 150 GB offered by Storj as a test. (I am trying to choose between Storj and Coldstack…)
Two things are bothering me though, I hope someone will be able to give me some help.
1/ Upload speeds
I have an FTTH connection (1 Gbps down; 700 Mbps up).
If I upload via Cyberduck (plain files from my laptop), my upload speed is between 50 MB/s and 85 MB/s.
Through Synology CloudSync (plain files from the NAS), it is around 30 MB/s on large files.
Through Duplicati in a Docker container on the NAS (files split by Duplicati), it is around 30-40 MB/s.
And through HyperBackup, using gateway.eu1.storjshare.io as the S3 gateway (files split by HyperBackup), the upload speed is only 15 MB/s.
What can explain such a loss of performance when using HyperBackup, and how can I improve it?
2/ Differences between dashboard metrics and billing
The egress values shown on the dashboard and on the billing tab are almost identical.
However, things are completely different regarding the storage used:
Dashboard: 136.06 GB
Billing: 20.26 GB-month
Is there some kind of bug, or am I not reading the data correctly?
When setting up the backup task, set the part size to 64 MB. That might help, since it aligns the part size with the segment size that Storj uses.
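For reference, the same idea applies if you ever upload with a script instead of HyperBackup. With boto3 you can pin the multipart part size to 64 MB like this (a sketch; the bucket and file names are placeholders, and it assumes your gateway credentials are set in the environment via AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3", endpoint_url="https://gateway.eu1.storjshare.io")

# 64 MB parts, matching Storj's 64 MB segment size.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)

s3.upload_file("backup.bin", "my-bucket", "backup.bin", Config=config)
```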
GB and GB-month are different units. It’s similar to how watts and kilowatt-hours work: 1 GB stored for a full month is 1 GB-month, but 1 GB stored for half a month is only 0.5 GB-month.
Your 136.06 GB has been stored for less than a month so far, so the GB-month value is lower.
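As a quick back-of-the-envelope check with your own numbers (assuming the stored amount stayed roughly constant):

```python
stored_gb = 136.06   # amount currently stored (dashboard)
gb_month = 20.26     # billed storage so far (billing tab)

# Fraction of the billing month the data has been stored for.
fraction = gb_month / stored_gb
print(round(fraction, 3), "of a month, i.e. about", round(fraction * 30, 1), "days")
# -> roughly 0.149 of a month, about 4.5 days
```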
I did some tests with Veeam, and the results are amazing.
The tricky part is that you have to make the S3 gateway serve HTTPS, as it uses HTTP by default.
Just generate certificates, place them in APPDATA\storj\minio\certs, and restart gateway.exe.
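If you want to verify that the gateway is really serving HTTPS after the restart, a quick check from Python (assuming the default local listen address 127.0.0.1:7777; verify=False is only for self-signed test certificates):

```python
import requests

# Skip certificate verification only when testing with a self-signed cert.
resp = requests.get("https://127.0.0.1:7777", verify=False, timeout=5)
print(resp.status_code, resp.headers.get("Server"))
```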
Hi all, I too have now set up my Synology NAS to automatically back up incrementally every day to Storj DCS via the Synology app HyperBackup, using the S3 interface.
It works wonderfully!
I created an access grant that Synology uses to store the data in a bucket.
The bucket has its own bucket password. Now when I go into the bucket via the web interface and use the bucket password, I see all the data that I had put there via the web interface using that bucket password. But I don’t see the data that Synology puts there using the access grant.
Only when I enter the bucket in the web interface using the access grant’s password do I see that data.
Do I not understand what a bucket is? Does it matter which password/key I use to enter the bucket? What do you need buckets for then?
This is a backup database consisting of many individual chunk files. Is there a function in the web interface that allows me to download all of this data to my local PC at once? It is impossible to download each of the 1000 subfiles individually.
You must use exactly the same encryption phrase that was used during upload to see your objects. With a different encryption phrase your objects cannot be decrypted, and your buckets will look empty.
Moreover, you can use a different encryption phrase for every single object, but you can only see and download each object with its own encryption phrase.