Access management for teams using Duplicati and Tardigrade

I’m trying to get my head around how to think about keys and permissions for my backup infrastructure with Tardigrade and Duplicati.

I have a small team, each with a workstation. I want to have backups go to our tardigrade project where

  • Each client can back up to a destination that clearly identifies its footprint in the tardigrade project (size + objects, etc.)
  • Each client’s setup is similar enough that I can kind of script it (predictable config)
  • A new client machine could restore from an existing old client backup
  • A root/admin user/client could restore from any backup.

I’m really digging into how to organize API keys and capabilities.

For workstations A, B, C would I do something like this:

  • Make buckets: backup_a, backup_b, backup_c
  • Make apiKeys ak_a, ak_b, ak_c
  • Set a permission on each key w/ restrictions to individual buckets
  • Make apiKey ak_admin with all permissions
  • Use a common encryption phrase on all clients so future restores don’t have to know which key/phrase combo made the backup
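For what it’s worth, the bucket part of that list can be scripted. This is only a hedged sketch: the bucket names are my examples, `uplink mb` is the bucket-creation command in the uplink CLI of this era (check `uplink --help` on your version), and the API keys themselves are created in the satellite web UI rather than on the command line:

```shell
# Hypothetical sketch: create one bucket per workstation.
# Bucket names (backup_a, etc.) are examples from the list above.
uplink mb sj://backup_a
uplink mb sj://backup_b
uplink mb sj://backup_c

# API keys (ak_a, ak_b, ak_c, ak_admin) are created in the
# satellite web UI, not via the CLI, so they are not shown here.
```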

Alternatively, I think I could have one API key and use something like:
$ uplink share [some bucket]/[some folder]

What is the best practice here? This all feels sort of new to me and I don’t want to use the wrong mental model for key management and permissions.

Is there a good white paper that would help me prep a rollout for a small team?

You can use the different-buckets approach within one project, but it will be hard to distinguish bandwidth usage and object-count usage in the billing.
Of course, you can use the overview dashboard to see that usage.
If you are OK with that, then you can create three buckets and share each of them with a specific team member. You do not need different API keys, because every share generates a derived API key and a derived encryption key based on the path and share options, wrapped into a serialized access grant; these grants are independent of each other. Each team member should import their access grant into the S3 gateway or into the application (depending on the setup).
You will be able to permanently revoke an access grant at any time without compromising your encryption phrase or root API key. However, you can use different API keys too if that would be simpler.
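To illustrate the share-and-revoke workflow described above, here is a hedged sketch. The bucket/prefix names are examples, and the exact flags depend on your uplink version (check `uplink share --help`):

```shell
# Generate a restricted, write-capable access grant for one team's bucket.
# The command prints a serialized access grant to import on the client.
uplink share --readonly=false sj://bucket_a/backup

# Later, if that machine is retired or compromised, revoke that grant
# without touching the root API key or encryption phrase:
uplink revoke <serialized-access-grant>
```

The key point is that each derived grant can be revoked independently, so losing one client machine never forces a rotation of the root credentials.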

If you want separate billing, then you need separate projects. You can request an increase in the number of projects and set up one project per team.

Thank you for your question. I am currently working on a how-to video. Why would you use access grants if the setup with an API key is much easier? You answered that question; I will include a few words about it in my demo.

I believe Alexey answered the key points already. Let me summarize briefly. You have 2 options:

  1. Use one project but different buckets.
  2. Use multiple projects.

The advantage of 2 is that you would get an invoice that includes the costs for each project. You can just take the invoice and work from there. With 1 you would have to log into your tardigrade account to get an overview of the costs per bucket. The best place for that would be the advanced report on the billing page, but currently that report is not looking good. We need to fix the report at some point in the future.

The advantage of 1 is an easy setup. You could run all instances with the same API key. For option 2 you would need at least one API key per project, and that could get annoying to keep track of. The same applies if you use access grants: you need an additional command to generate each grant, so it would still be annoying for option 2.

Now the question is which tradeoff is better for you: a better invoice or an easier setup?

Thank you so much.

Listing options as I understand them:

  1. Multiple projects/common bucket/common backup path -> super explicit access and accounting/complicated “roll up” view for admin
  2. One project/multiple buckets/common backup path -> still explicit access, bucket level usage accounting, invoice details for bandwidth especially may be tricky
  3. One project/one bucket/multiple backup paths -> no one recommended this :slight_smile: no good accounting options for individual users

I think I like Option 2. We’re one team and I’m not actually invoicing anyone (common payer in this case), but I’d like insight into how big the storage footprint is for each user/workstation. I do want to restrict User A from accessing Bucket B, so … I think the steps would be as follows:

  1. Make bucket_a and bucket_b (BTW, underscores & dashes in bucket names would be nice)
  2. Make 1 api_key for admin that has access to all
  3. Make access grants for each bucket with uplink share sj://bucket_a/backup and uplink share sj://bucket_b/backup -> the advantage here is explicit revocable access.
  4. Use access grants appropriately with each Duplicati configuration
  5. Admin with project api_key can restore from any bucket
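The steps above could be sketched end to end roughly like this. Again hedged: the names are the examples from this thread, and the exact `uplink` subcommands and flags depend on the CLI version you have installed:

```shell
# Option 2 sketch: one project, one bucket per client, shared prefix "backup".

# Step 1: create the per-client buckets.
uplink mb sj://bucket_a
uplink mb sj://bucket_b

# Step 2 happens in the satellite web UI: create the admin API key
# with access to the whole project.

# Step 3: one write-capable, bucket-scoped access grant per client.
uplink share --readonly=false sj://bucket_a/backup   # import into Client A's Duplicati
uplink share --readonly=false sj://bucket_b/backup   # import into Client B's Duplicati

# Steps 4-5: each Duplicati instance uses its own grant; the admin keeps
# the root API key and can restore from any bucket.
```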

As a result, we have:

  • Client A has access only to Bucket A
  • Client B has access only to Bucket B
  • Admin has access to all
  • Admin can see simple storage info by bucket size in tardigrade project dashboard.

Sounds about right :slight_smile:

I hope the video I am working on will explain a bit more which Duplicati options might be useful for you.
