Mechanism for backing up a bucket / downloading zipped bucket locally?

We’ve just experienced a snafu where data on a mounted drive was deleted by a client who didn’t realise that deleting it there deleted it everywhere…

Which opens up the question of backing up our backups, i.e. a way of retaining a copy of all data periodically.

A couple of thoughts occurred:

  1. Backing up using a separate Storj bucket - is there a way we can clone an existing bucket periodically, so that we have an alternate version to go back to? (Obviously, I appreciate this multiplies the costs involved - probably significantly, due to the one-time bandwidth for all files plus the new storage requirement.)

  2. Grabbing a zipped version of a bucket for storage elsewhere WITHOUT needing an intermediary server that has sufficient storage space available for the zip to be written to first

  3. Anything else people can think of that can avoid this situation in the future…!

Thanks in advance!

Hi @believe,

I’m going to be rather unhelpful and not answer any of your questions but ask some more…

How are you backing up to the bucket at the moment? Could you set this up to run backup jobs on different schedules to two different buckets?

Could you separate/stop user access to the current backup bucket?
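If the existing backup is driven by rclone, the two-bucket idea could be sketched as a pair of scheduled jobs. A hypothetical crontab fragment - the remote name `storj`, the paths, and the bucket names are all placeholders, not from this thread:

```shell
# Hypothetical crontab entries - remote/bucket/path names are invented.

# Daily sync of local data to the primary (client-visible) bucket
0 2 * * * rclone sync /srv/data storj:primary-backups

# Weekly copy to a second bucket the client has no access to
0 4 * * 0 rclone copy /srv/data storj:cold-backups
```

One design note: `rclone sync` mirrors deletions to the destination, so for the safety copy `rclone copy` (which never deletes destination files) is probably the safer choice.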

Ah, yes - interesting idea

So basically have separate ‘off’ and ‘on’ backup buckets - and only allow user access to the ‘on’ bucket at any time. Will have to have a think about this

We’re using rclone for mounting the drive at the moment

I suppose another option is to provide read-only bucket access for the client, while having a separate write bucket for actually capturing the files. Does anyone know if this is possible with two different directories on the same server?

Doesn’t answer your questions: a better way of backing things up might be to have a local backup directory that Duplicati backs up to Storj using versioning. That way you’ll never lose your data if someone deletes everything.
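For reference, a versioned Duplicati job can be driven from its command line. A hedged sketch - the backend URL scheme, bucket name, paths, and retention value are placeholders; check the storage-backend syntax for your Duplicati version:

```shell
# Hedged sketch - "storj://..." and all names are placeholders; verify the
# backend URL scheme and options in your Duplicati version's docs.
duplicati-cli backup \
  "storj://my-backup-bucket/server1" \
  /srv/data \
  --keep-versions=10   # retain the last 10 backup versions
```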

Thanks for the suggestion - one of the issues here is that our Storj bucket is already many times bigger than the server storage space available (hence we’re not in a position to just zip the remote drive locally from time to time).

We used to be on Amazon S3 but moved away as a cost-saving exercise to Storj - which really worked, with the exception of this particular issue :confused:

So any solution needs to either be Storj → Storj or Storj → local external storage/another 3rd party storage solution without having to go via our server

Something like this should be possible. Create an access token for the user with only read permissions and then a separate access token for the actual rclone software, both looking at the same bucket.


You can use different access grants for the same bucket: one with read-only permissions to set up the client’s connection, and a second with write and delete permissions to make the backups.
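As a rough illustration of the two-grant setup - the bucket name and remote names are placeholders, and flag names may vary by `uplink` version, so check the docs:

```shell
# Generate a restricted, read-only access grant for the client
# (bucket name is a placeholder; verify flags for your uplink version)
uplink share --readonly sj://client-data

# rclone remote for the client, using the read-only grant:
#   rclone config create storj-ro storj access_grant <READ_ONLY_GRANT>
# rclone remote for the backup job, using the full-permission grant:
#   rclone config create storj-rw storj access_grant <FULL_GRANT>
```

The client mounts `storj-ro:` while the backup job writes through `storj-rw:`, so an accidental delete on the client side simply fails.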

Cloning a bucket without downloading and re-uploading is not possible at the moment - that would require a server-side copy feature. It’s in the backlog, but not implemented yet.
Versioning is not implemented either.
So either copy the bucket yourself or use different access grants to provide access.
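“Copy the bucket yourself” could look like the following rclone invocation - a sketch with placeholder remote and bucket names. Note that without server-side copy the data streams through whichever machine runs rclone, so it costs download plus upload bandwidth:

```shell
# Copy everything from the live bucket into a backup bucket.
# "storj" and the bucket names are placeholders; data flows through
# the machine running rclone, not server-side.
rclone copy storj:live-bucket storj:backup-bucket --progress
```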

If you want to have versions of your backup, then I would recommend using Duplicati.


Thanks @Alexey - nice to know these are in the roadmap! Will look in to your suggestions

I don’t know enough about the technicalities to know if this would even be possible, but a ‘download bucket’ link on the dashboard that allowed a zip of the entire bucket to be downloaded directly would be awesome!

Could you please create a feature request here: DCS feature requests - voting - Storj Community Forum (official)