Problems with initial migration from B2 & hyperbackup

My apologies if this is the wrong place to post this. I have submitted a support request, but thought the community might be able to assist. I also reviewed the forum categories and this one seemed the best fit. Please do redirect me if I’m wrong.

I have been using Backblaze B2 for years to back up my Synology NASes. I recently signed up to Storj and have a paid account. While still on the free account, I initially created S3 credentials and ran a test backup from my NAS with Hyperbackup. It was fast, complete, and without error. I deleted that bucket and then upgraded my account.

I then followed this guide to the letter and ran rclone to sync one of my B2 buckets to Storj. All went well and rclone check... indicated no discrepancies. I can see the bucket in Storj and I can access it. All appears well.
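For anyone following along, the migration described above can be sketched roughly like this (a hedged sketch only: the rclone remote names `b2` and `storj` and the bucket name are placeholders, not necessarily the names the guide uses):

```shell
# Sync a B2 bucket to Storj with rclone, then verify the copy.
# Remote names ("b2", "storj") and the bucket name are placeholders.

# Copy everything from the B2 bucket into the Storj bucket.
rclone sync b2:my-bucket storj:my-bucket --progress

# Compare source and destination; an empty report means no discrepancies.
rclone check b2:my-bucket storj:my-bucket
```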

However, when I attempt to “Relink to an existing task” with Hyperbackup, it connects to the bucket, but does not provide me with a “Directory” as it normally would. See the screencap:


I do need to be able to restore and reconnect to my existing Hyperbackup files in order to preserve versioning, history, etc. Otherwise, I lose all of that and have to start over with a fresh backup (NOT the preferred approach).

Does anyone have any idea what I’m missing here? Has anyone successfully relinked to an existing Hyperbackup task after an upload/clone to Storj?

Many thanks for any assistance.

Are you using the same credentials (access key, secret key, PASSPHRASE) that you used with B2?

Did you create the credentials while using the same PASSPHRASE that you used for the B2 migration?

Edited to clarify terminology.

I followed this guide to the letter, so I created a new key on B2 that has full access to be used with rclone. It is not the same S3 key that was used to do the original backup. However, I can access, browse, and see the bucket that has been rcloned to Storj.

Should I have used the same keys that were used to do the backup to B2? The original backup is not encrypted client-side, so there are no other credentials besides the S3 access credentials.

I think what they meant was: when you created Storj S3 access credentials for rclone, and then for HyperBackup, did you use the same encryption passphrase?

A bucket on Storj can contain objects encrypted with multiple different keys, and only those encrypted with the one your credentials were generated with will show. If you use the wrong keys, the bucket may appear empty.

(I also would not rule out HyperBackup treating b2 and s3 endpoints differently so simply copying the data may not be enough. I don’t know)


Again, referring to the Storj guide I linked to, I created Storj “access grant” credentials for use with rclone as instructed. These apparently use the same credentials as your account. Then I created a new application access key on Backblaze for rclone. Those are the credentials used to effect the clone from B2 to Storj. I also have Storj S3 credentials, which are used to access Storj from Hyperbackup. I can see the cloned .hbk Hyperbackup file in Storj and I can drill down into it in the same way I can with Backblaze.

There is no client-side encryption on this particular file. When connecting to Storj, I used my S3 credentials. They are different from the Storj Access Grant credential used with rclone. I can’t see how they could be used interchangeably.

I’ve also repeated this same rclone process using the same Backblaze B2 S3 API credentials that were used to create the original backup (rather than the ones I created using the guide). The result is the same. I can see the bucket in Storj and the .hbk file in the bucket, and Hyperbackup recognizes the bucket, but does not see any “directory”, despite the fact that the file is in Storj.

I’m beginning to think that the Storj migration process simply doesn’t work with the Hyperbackup format, but I’m still waiting to hear from support.

Not really. There is also a passphrase involved.

Did you use the same passphrase as when creating the grant? Literally copy paste? If you created it with different passphrase you will see the empty bucket.

They both allow access to the same data as long as you used the same passphrase.

Why would that matter? It seems you didn’t have any issues with accessing B2.

Confirm that you used the same passphrase for creating grants for rclone and s3 credentials for hyperbackup.


So, if I understand what you’re suggesting, I can manually change the passphrase when creating an access grant? And I should literally copy the existing passphrase (secret key?) from the Storj S3 credentials to the Storj access grant?

I’ll take a look at that, but it certainly wasn’t apparent at first glance.

The UI may have changed since I last looked at it, but fundamentally Storj requires an encryption passphrase to access data. There is no way to upload data unencrypted.

When creating an access grant, the passphrase is incorporated into the grant. When creating S3 credentials, the passphrase is stored on the S3 gateway, encrypted with the S3 secret.

A bucket can contain multiple objects encrypted with multiple different passphrases. Only those encrypted with the matching passphrase will be accessible with the specific credentials, i.e. the credentials that incorporate the correct passphrase.
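To illustrate (a hedged sketch using Storj’s `uplink` CLI; the saved access names and the bucket are placeholders): listing the same bucket through accesses derived from different passphrases can return different results:

```shell
# Two saved uplink accesses, created with different encryption passphrases.
# Objects uploaded under passphrase A are only visible through access-a.
uplink ls --access access-a sj://my-bucket   # lists the objects
uplink ls --access access-b sj://my-bucket   # same bucket, appears empty
```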


You’ve just described exactly what I’ve already done: created an access grant and S3 credentials that already incorporate the account passphrase.

There are only three credentials associated with my new storj account:

  1. The account credentials and passphrase I use to sign in
  2. The S3 credentials I created to test access to the Storj API.
  3. The access grant I created for rclone.

Both #2 and #3 incorporate the passphrase from #1.

This is what confuses me. What is the “account passphrase” you are referring to? An account does not have a passphrase; it’s a property of credentials. You can have multiple objects in one account encrypted with different passphrases.

Ah. That’s the account password.

So you used your account password as a passphrase (not the best idea, but it will work).

To be clear there are three different things:

  • Account password (protects access to your account)
  • Passphrase (encrypts data you upload)
  • S3 secret (authenticates with S3 gateway and encrypts passphrase stored on the gateway)

These are three different unrelated things.

One last thing to try (unless this is what you mean in item 2 above)

Try to access the storage with Cyberduck using the same S3 credentials you gave to Hyperbackup.

If it sees the data, but Hyperbackup can’t adopt it – the problem is with Hyperbackup: evidently they use a different data layout with B2 vs S3, and simply copying the data is not enough.

If it does not see the data – you made a mistake somewhere copying the passphrase when creating the s3 credentials vs rclone grant.


There are two passwords associated with a Storj account. The first is the regular account password you use to log in with your email address. The second is a passphrase for the encryption of the files themselves. This is what gets baked into the access grant and S3 credentials.



After creating the Access Grant with a passphrase, that passphrase is cached in the browser session until you log out of your account or manually unload it. The next credentials created in the same session will use the same encryption passphrase automatically. If you create an S3 Key after logging back into the account, it will ask for the encryption passphrase again, as in the last screenshot. This has to be exactly the same as the first time; otherwise you can’t access the same files/objects uploaded with the Access Grant.


This passphrase. Not my account password.

So you used your account password as a passphrase (not the best idea, but it will work).

No, I did not. I simply created the auto-generated credentials using the Access Keys function in the Storj dashboard (as instructed in the Storj guide for migrating from B2). One was an S3 access key (full access) and the other was an Access Grant. Both generate automated credentials. I see no method by which one can associate these credentials with anything other than your Storj account. I see no method by which an Access Grant credential can use another passphrase. They are auto-generated. There is no other encryption involved here. The .hbk file is not encrypted on the client side, and there is nowhere in Hyperbackup to enter an additional passphrase when connecting to an S3 server with the credentials created by Storj.

If it does not see the data – you made a mistake somewhere copying the passphrase when creating the s3 credentials vs rclone grant.

There was no copying of any passphrase. They were both auto-generated. You keep saying this, but you’re not being clear as to exactly WHICH passphrase is supposed to have been copied where when I created what seems to be an auto-generated S3 access key. The Access Grant key used for rclone doesn’t even offer an option for a passphrase. It is 100% auto-generated.

I appreciate you sticking with me, but if you’re telling me that a specific passphrase needs to be copied from one credential/place to another, please talk to me like I’m 5 years old. “Take the passphrase from ???, copy it from the ??? field and paste it into the ??? field…” because we seem to be going in circles here.

So it’s using the same passphrase then. I had a similar issue when I set up Restic. I changed the endpoint URL to https://gateway.storjshare.io/&lt;bucket-name&gt;/&lt;backup-directory&gt; and it worked. You might need to do the same thing here.

For you, I would try putting https://gateway.storjshare.io/media-nas-hyperback as the server address, and see if that helps.

You could test this another way by using the S3 credentials you’re using in the Hyperbackup client in another rclone remote. Just set up a new S3 remote in rclone using the credentials from Hyperbackup and do an lsd to see if you can see the directories there. If you can’t, it’s a problem with the credentials/encryption passphrase. If you can see them, the problem is likely with the client configuration.
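A hedged sketch of that test (the remote name `storjs3` and the key values are placeholders; the endpoint and bucket name are the ones mentioned earlier in this thread):

```shell
# Create an rclone S3 remote using the exact credentials Hyperbackup uses.
rclone config create storjs3 s3 \
    provider Other \
    access_key_id YOUR_S3_ACCESS_KEY \
    secret_access_key YOUR_S3_SECRET \
    endpoint https://gateway.storjshare.io

# List top-level directories in the bucket.
# Empty output suggests a credentials/passphrase mismatch;
# seeing the .hbk directory points at the Hyperbackup client instead.
rclone lsd storjs3:media-nas-hyperback
```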

If you don’t know how to do that, I can walk you through the rclone config


Thanks. Trying that this evening.

Didn’t have to create a new Backblaze S3 remote in rclone, as I already had the existing one in there. Yes, running rclone lsd backblaze:media-nas-hyperback I can see the .hbk file listed in the Backblaze bucket. Using -R, I can also recurse into it and see the sub-dirs/files.

Same result with the Storj S3…

So, it seems that this is a Hyperbackup issue?


Yes, this is exactly what I’ve been doing.

Sorry about that. The UI has changed since I last logged in, apparently. Now, to generate credentials, they use the passphrase you enter when “opening” the project and no longer ask for it at the credential-generation step, I guess to avoid mistakes like these.

@AussieNick’s suggestion is then the only other option to explore, after confirming that you can see the data with those generated S3 credentials in other apps, like CyberDuck, before taking this evidence to Synology.


Changing the endpoint URL has no effect. Tried it various ways with no joy.

Thanks for the follow-up, mate, and I truly do appreciate your effort. I’m starting to think this may be an issue with Hyperbackup and/or B2 + Hyperbackup.

Maybe you’re using the wrong Synology account to log in to Hyper Backup? From the Synology website:

Notes:

  1. Only the owner of the .hbk file is allowed to relink the task. If you see the message “No directories available”, please use the account of the .hbk file owner to sign in to your Synology NAS. To learn more about how to view the owner of the file, refer to this article.
  2. For more information on creating backup tasks, please refer to this article.