Problems with initial migration from B2 & HyperBackup

I’ve been running HyperBackup from my Synology NAS to Storj since I first launched a node, over 2 years ago.

I have never had any issues with the backup, except for dealing with the “passphrase” on buckets. Back in the day, that passphrase could show up in multiple places; I think they have since simplified it, unless you want to go off into the weeds on your own. Every bucket, folder, and file can have its own passphrase, and if you open a bucket with the wrong passphrase you simply won’t see the items that were encrypted under a different one.
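To illustrate (a made-up bucket and access names, just to show the behavior with the uplink CLI, not part of my setup):

```
# upload an object using an access whose passphrase is "A"
uplink cp ./backup.bin sj://syno-backups/backup.bin

# list the same bucket using a named access created with passphrase "B":
# no error is shown -- the object is simply invisible
uplink ls sj://syno-backups --access passB
```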

Unless you absolutely need a different encryption passphrase for everything, don’t. Keep it simple.

Make a separate bucket for each backup source (IMO) to keep things organized. In your backups project, make a bucket for your Syno NAS. Use the same encryption passphrase on this bucket as on your project. Create your S3 credentials and enter them into HyperBackup. Keep it simple.
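Roughly, with the uplink CLI (the bucket name is just an example, and I’m going from memory, so verify the flags against the docs):

```
# one bucket per backup source, inside your backups project
uplink mb sj://syno-backups

# register credentials for the hosted S3 gateway; paste the printed
# access key, secret key, and endpoint into HyperBackup
uplink share --register --readonly=false sj://syno-backups
```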

Document. Document. Document.

Hello @Curmudgeon1,
Welcome to the forum!

Oh, you used LastPass. It may substitute your account’s password instead of the encryption passphrase. I would suggest disabling the automatic substitution in LastPass for the satellite UI site.

Did you add a second Storj S3 remote to rclone, but with the S3 credentials that HyperBackup uses?
Sorry, but from your answers it’s still not clear.

You may also try the reverse – use the Storj S3 credentials from rclone in the HyperBackup app. This will exclude the confusion about the encryption passphrase, and can confirm that the problem is with HyperBackup, not encryption or permissions.
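For example, something like this (the `storj-hb` remote name is hypothetical – a plain S3 remote pointed at gateway.storjshare.io with exactly the keys from HyperBackup):

```
# can HyperBackup's keys see the bucket and the .hbk prefix at all?
rclone lsd storj-hb:
rclone ls storj-hb:syno-backups
```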

1 Like

UPDATE:

I do understand the focus on passphrases, people, I really do. I’m sure that passphrases are a common problem for new users of the Storj dashboard interface. But this ain’t my first rodeo, and there is/was one, and only one, Storj passphrase involved here, and it is the passphrase that was used throughout, so I am 95% confident that my Storj passphrase was not the problem (and no, it’s not LastPass, either).

So, that led me to think that the issue was with the B2 credentials, the HyperBackup format itself, or something else. Responses here indicate that the HyperBackup format works on Storj for some users, so I ruled that out for now. I decided to start over from square one: I removed all Storj passphrases, access grants, S3 keys, and buckets, and started from the beginning with a new passphrase, a new access grant, and new S3 keys for Storj. I also ignored the migration guide’s direction to create new Backblaze access keys, and instead used the Backblaze S3 keys from the original HyperBackup backup in my rclone config for the cloning.
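For reference, the shape of the rclone config I ended up with (remote names, endpoint region, and keys are placeholders; the critical part is that the B2 remote carries the original application key from the HyperBackup task, not a newly created one):

```
[b2]
type = s3
provider = Other
access_key_id = ORIGINAL_HYPERBACKUP_KEY_ID
secret_access_key = ORIGINAL_HYPERBACKUP_APPLICATION_KEY
endpoint = s3.us-west-004.backblazeb2.com

[storj]
type = s3
provider = Other
access_key_id = NEW_STORJ_S3_ACCESS_KEY
secret_access_key = NEW_STORJ_S3_SECRET_KEY
endpoint = gateway.storjshare.io
```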

So, with all that done, I ran the rclone sync and attempted to relink the .hbk file in HyperBackup. It did see the directory, the relink was successful, integrity verified, and all version history is intact.
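The clone itself was a single sync between the two remotes, something like this (the bucket and .hbk names are examples, not my real ones):

```
rclone sync b2:my-b2-bucket/DiskStation_1.hbk storj:syno-backups/DiskStation_1.hbk --progress
```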

I am 99% positive that the problem was using new Backblaze credentials instead of the same S3 credentials that were used to do the original backup to Backblaze. The migration guide tells you to create NEW access credentials in Backblaze and use those in the rclone config. That would probably be fine for general data files, but it DOES NOT WORK for relinking HyperBackup (.hbk) volumes. I suspect this is related to the Synology FAQ re: the “owner” of the HyperBackup file needing to be the same. So, while a new access credential created in Backblaze gave me full access to and control of the bucket and the files therein, allowing me both to clone and to “see” the .hbk file in Storj, it did not allow me to decrypt the prefixes (“folders”) in the bucket itself, because HyperBackup doesn’t recognize the new credentials as the “owner”.

I’d suggest that the Storj Backblaze migration guide be revised to include this critical information. As written, the guide works for general data file cloning, but it does not appear to work for HyperBackup .hbk relinking.

I am very grateful for the massive response and help from all! Now, I have many more backups to clone…

Cheers!

4 Likes

I’m speechless. On the other hand, I should not be surprised. When Synology software people see an opportunity to faceplant – they embrace it with a vengeance.

Isn’t it ironic that Synology DiskStations, positioned as “set it and forget it”, user-friendly appliances, require so much babysitting and walking on eggshells to work around their nonsense? They are a marketing company (and are very good at it), not a technology company (they completely suck at that).

I too was duped into wasting 2 years of my life with these machines under the false pretense of getting a hands-off, “just works” appliance. After getting tired of working around every screwup of theirs, filing 15 bug reports, having engineering from Taiwan remotely debug on my NAS for a month, and digging through their horrible, horrible source (hey, did you know they insert calls to an opaque proprietary shim layer into the authentication path of the old version of OpenSSH they ship? how fun!), I jumped ship a few years ago. Can’t stand them. I’ve heard QNAP is not much better.

/rant.

2 Likes

You’re not wrong, but I find more to appreciate about them than to complain about. I built and ran a DIY NAS for several years, but that got old. Bought my first Synology and never looked back. QNAP is much, much worse. We’re running two Synos at my house and they’re used every day… a lot. Usually, they’re okay. HyperBackup is a pretty decent backup app, but little crap like this really takes the fun out of it.

The strong side of HyperBackup is its ability to shuffle terabytes of data on heavily underpowered devices. And the UI is also quite good.

My beef with it is its corruption-prone design: if the network fails mid-backup it can damage the datastore and end quite badly. I’ve caused it to fail multiple times when testing backup solutions. Some indirect evidence of its fragility is how they handle cancellation: if you try to cancel an in-progress backup, HyperBackup does something mysterious for the next 10 minutes (at least it used to, up to DSM 7). Why can’t it cancel right away? Network interruptions don’t give 10 minutes’ advance warning.

Generally, all Synology software works if you use it in one straightforward way (which I guess is the way they happened to test). Deviate slightly – it falls apart.

Oh well…

(I eventually migrated to TrueNAS Core, running on an old enterprise server I picked up at local electronics recycler. Zero maintenance, everything works)

1 Like

@arrogantrabbit I do not have a Synology myself, but perhaps it’s possible to run restic or Duplicacy?
They work much better, and you can configure the chunk size (Storj prefers 64 MiB or more).
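For example (from memory – please check the docs for your versions):

```
# restic 0.14+: bigger pack files (value in MiB)
restic -r <repo-url> backup --pack-size 64 /volume1/data

# Duplicacy: set the average chunk size (a power of two) at storage init
duplicacy init -c 64M <snapshot-id> <storage-url>
```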

Thanks, but I’m not looking for another backup application. HyperBackup is not a bad app, IMO. Even if it is somewhat difficult to see what’s happening under the hood, it works, and it’s easy enough for my wife to manage if I’m not around. I’ve used Duplicacy and had problems with it. I’m not at all interested in a CLI solution (the wife, remember?). As for performance and stability, I’ve never had the issues that arrogantrabbit mentioned. AFAIK, HyperBackup resumes without problems after interruptions. I’ve been using HyperBackup now for well over 5 years to back up numerous datasets from two different NAS and have never had a single significant problem with backup or restore. It ain’t broke, so I ain’t fixin’ it. :slight_smile:

2 Likes

Interesting. I have a ReadyNAS 4213, which has decent hardware, but the software hasn’t been updated in about 5 years, as Netgear has EOLed the whole product range.

Is it possible to run docker on a TrueNAS Core machine? Is it fairly “stupid-proof”?
If so, might consider installing that :slight_smile:

Yes, I even wrote a guide to do it without docker: Duplicacy Web on Synology Diskstation without Docker | Trinkets, Odds, and Ends

It must work for someone – otherwise they would not release it. I’m glad your use case happened to be among those they tested :slight_smile:

That’s an absolute best policy, and I fully subscribe to it.

It looks like it’s possible to install TrueNAS, if you can add more ram (over 16GB ideally): Should I put FreeNAS on a ReadyNAS? | TrueNAS Community

No[t yet], Core is FreeBSD. They have jails. I have a script to run a storagenode there: GitHub - arrogantrabbit/freebsd_storj_installer: Installer script for Storj on FreeBSD

TrueNAS Scale is Linux-based, it runs K8s, so you can do all the linuxy stuff there. But it’s a newer product and may not be as stable and/or performant as Core, albeit it’s catching up. In this universe FreeBSD lost to Linux. (I guess you can say parts of it live on and thrive as macOS, but for the general public Linux seems to be the path forward; that’s the reason iXsystems started TrueNAS Scale: new hardware support on FreeBSD is not as great as on Linux.)

The previous paragraph notwithstanding, FreeBSD is as stable and foolproof as an iceberg. I would not consider any other OS for a server, and especially a storage server. Some of its features that are slowly migrating to Linux, such as separate independent boot environments and dtrace, are cherries on top. The documentation is excellent, the OS organization is logical, most software is available either via packages or ports, and you can run Linux software too if you really need to – via the binary compatibility layer or in a VM.

So yes, I believe everyone shall replace their storage server software with TrueNAS Core right away and thus instantly improve their mental health.

Mine’s got 48GB so maybe I’ll have a play with it :slight_smile:
Thank you! :slight_smile:

2 Likes

They use k3s instead (from Rancher).