Frigate NVR Sync - Optimize Storj TTL

Hi! I recently read that it is possible to set a TTL so files are kept only for a set period of time and then go straight into the SNO trash without waiting for GC. Is it already possible to set this?

I use rclone sync in cron.
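
For context, the cron entry is roughly like this (the schedule, remote name, and bucket are only placeholders, not my exact setup):

15 * * * * rclone sync /media/frigate storj:frigate-backup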

I am a SNO as well as a customer. Keeping the network clean is my duty too!

Yes, it’s possible, but you need to either use the uplink CLI or specify S3 headers.

So I’m not sure that it’s possible to specify this with rclone. Perhaps only by setting it in the access grant.
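
With the native uplink CLI you can set the expiration per upload via --expires; a minimal sketch (the bucket and paths are placeholders):

uplink cp --expires +168h /media/frigate/clips/event.mp4 sj://frigate-backup/clips/event.mp4

Here +168h means the object expires roughly 7 days after upload; an absolute RFC 3339 timestamp should also be accepted.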


My mileage with rclone has been limited; the native uplink command is much better.

I believe you can configure the access grant you use to specify the TTL, so that all files uploaded with that access grant will expire in, say, 7 days, 1 month, etc.

From a Frigate point of view, the best option is to hook into the MQTT broker using something like Mosquitto, then hook that up to Node-RED.

It’s then really easy to watch for certain cameras or object detections and fire a native uplink session to upload the clip to Storj. Node-RED also allows far more advanced control and manipulation if you want to send emails, push notifications, switch lights on, etc.
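
For example, a Node-RED exec node (or a small script it calls) could run something like this when an event message arrives; the clip path, bucket, and 30-day TTL below are purely illustrative:

uplink cp --expires +720h /media/frigate/clips/front_door-1700000000.0-abc123.mp4 sj://frigate-events/front_door/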

I’m sure you are aware, but all the object IDs used in Frigate are stored in a local database, so while syncing the media folder is a good backup, I’ve never had much luck trying to restore from it alone.

CP


You may try to use a TTL-limited access grant (or S3 credentials based on it) for sync sessions.
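
If I remember right, a recent uplink release lets you add a maximum object TTL restriction to an access grant, so every upload made with it expires automatically. The flag name below is from memory, so please verify with uplink access restrict --help; the access name is a placeholder:

uplink access restrict --access frigate --max-object-ttl 720h

You could then register the restricted grant with uplink share --register to get the S3 credentials based on it, as mentioned above.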


I sync both the media and the DB.
The recordings are always on an HDD; the backup is for added security.
After numerous tests, it is enough to do a reverse "rclone copy" to have everything working perfectly.
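
In case it helps, the restore direction is just the copy reversed; the remote name, bucket, and Frigate paths here are placeholders for my setup:

rclone copy storj:frigate-backup/media /media/frigate
rclone copy storj:frigate-backup/db /config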


rclone copy storj-tree.png storj:my-bucket --header-upload "x-amz-meta-object-expires:+5m"

It should be fine!


Wow! Thank you! I didn’t know that you could submit additional headers like this!

But I believe that it only works if you use the S3 integration.

If it is configured as S3 compatible, yes, no problem!
I will also try the native rclone integration out of curiosity.
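
A sketch of how the two remotes could be created, assuming rclone’s Storj S3 provider and its native storj backend; the remote names and credentials are placeholders:

rclone config create storj-s3 s3 provider=Storj access_key_id=XXX secret_access_key=XXX endpoint=gateway.storjshare.io
rclone config create storj-native storj access_grant=XXX

The x-amz-meta-object-expires trick above is an S3 upload header, so it is the S3 remote it applies to.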

I’ll take the opportunity to ask a question: what is the status of S3 object versioning? I would like to keep only 3 versions, with each noncurrent version retained for 30 days (the NoncurrentVersion equivalent of AWS S3).

The native integration is usually better if you have more than 1 Gbit of upstream.

What about right now?

Yes, but what I am looking for does not seem to be implemented yet.

And what are you searching for?
Immutable storage, like this:

or


Lifecycle rules for versioned objects:

I want to:

  • Keep the current version indefinitely ✅
  • If it is modified, I want to keep the previous versions ✅
  • Keep only the last 3 noncurrent versions
  • Delete noncurrent versions after 30 days
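
For reference, this is roughly what that rule looks like with the standard aws s3api call against a service that supports lifecycle configuration (the bucket name is a placeholder, and since Storj does not implement this yet, the Storj endpoint is only illustrative):

aws s3api put-bucket-lifecycle-configuration --endpoint-url https://gateway.storjshare.io --bucket my-bucket --lifecycle-configuration '{"Rules":[{"ID":"limit-noncurrent","Status":"Enabled","Filter":{},"NoncurrentVersionExpiration":{"NewerNoncurrentVersions":3,"NoncurrentDays":30}}]}'

Note that with AWS semantics the 3 newest noncurrent versions are retained and only the older ones expire after 30 days, which is close to, but not exactly, the wish list above.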

Nope, I’ve been following the roadmap for a year and was hoping for it in the recent sprint.

You can always delete versions. It seems I don’t see the problem…

I would have liked an automatic way!
The most powerful weapon against ransomware is ZFS snapshots + S3 versioning.
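
A minimal sketch of the local half of that, assuming a hypothetical tank/frigate dataset snapshotted daily from cron:

zfs snapshot tank/frigate@$(date +%F)

The snapshots are read-only on the host, and S3 versioning plays a similar role for the off-site copy.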