It would remove the ability to restore data in case of a disaster, like a bug in a server-side copy. If there were no way to fix such a bug, we would likely be doomed.
In the case of a TTL specified by the customer in the piece header there is no risk, because it's cryptographically signed and thus there must be no way to restore it.
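A toy sketch of why a signed TTL can't be extended after the fact: if the TTL travels in a header signed by the uploader, a node can verify the header but cannot alter it without invalidating the signature. (This uses an HMAC as a stand-in for the real public-key signature, and the field names are made up for illustration; this is not Storj's actual piece-header format.)

```python
import hmac, hashlib, json

key = b"uploader-secret"  # stand-in for the uploader's real signing key

# Hypothetical piece header: the TTL is part of the signed payload.
header = {"piece_id": "abc", "ttl": 1700000000}
payload = json.dumps(header, sort_keys=True).encode()
sig = hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(header: dict, sig: str) -> bool:
    payload = json.dumps(header, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

assert verify(header, sig)                # intact header verifies
tampered = dict(header, ttl=9999999999)   # node tries to extend the TTL
assert not verify(tampered, sig)          # any change breaks the signature
```

Because the TTL is inside the signed payload, changing it means producing a new signature, which only the key holder can do.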
Of course not. They can only re-upload it with a different TTL. All object storages work like this: there is no in-place modification; your client always downloads the object, lets you modify it, then uploads it back.
I see. That's too bad. I knew you cannot modify the file itself, like appending content. But having to download and re-upload it just to change the TTL is bad. So it is technically a new file then, with a different hash and everything?
No, for one of the reasons I linked in my post above: direct deletes conflict with a performant implementation of server-side copying, which is a feature desired by customers.
AWS S3 allows changing the TTL of existing objects, though only because TTLs aren't tied to the objects themselves; they are defined by rules, called a lifecycle configuration, that are stored separately.
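For context, this is roughly what such a lifecycle rule looks like; a sketch using boto3, with the bucket name, prefix, rule ID, and day count all placeholders. Re-applying the configuration with a different `Days` value changes the effective TTL of already-uploaded objects without rewriting any of them:

```python
# Bucket-level lifecycle rule: expire everything under "logs/" 30 days
# after creation. The TTL lives in this rule, not in the objects.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-logs",            # placeholder rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # applies to keys under logs/
            "Expiration": {"Days": 30},     # delete 30 days after creation
        }
    ]
}

def apply(bucket: str) -> None:
    """Apply the rule with boto3 (needs AWS credentials; not run here)."""
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config
    )
```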
Wow. So I wasn't entirely wrong.
So yes, if it were possible to change the TTL of a file (or add a TTL to a file that does not have one yet), the deletion of that file could perhaps be done by the node without waiting for the bloom filter.
And there is a workaround to avoid full re-uploads when you want a new version of an object: if the original object was uploaded with multipart upload, you can tell S3 to reuse some of the existing parts and only upload the parts you want changed. This would probably map nicely to our segments/pieces: they are immutable, but could be part of different files.
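The boto3 call sequence for that trick looks roughly like this (a sketch, not run here: bucket, key, and the uniform part size are assumptions, and real code must handle a smaller final part). Unchanged parts are copied server-side with `UploadPartCopy`; only changed parts are uploaded:

```python
def part_plan(total_parts: int, changed: set) -> list:
    """Per part number, decide whether to copy the existing bytes
    server-side or upload a replacement."""
    return [("upload" if n in changed else "copy", n)
            for n in range(1, total_parts + 1)]

def replace_parts(s3, bucket, key, changed_bodies, total_parts,
                  part_size=5 * 1024 * 1024):  # assumed uniform part size
    """Rewrite bucket/key, reusing its unchanged parts via server-side copy.
    changed_bodies maps part number -> new bytes."""
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    for action, n in part_plan(total_parts, set(changed_bodies)):
        if action == "copy":
            first = (n - 1) * part_size
            r = s3.upload_part_copy(
                Bucket=bucket, Key=key, UploadId=upload["UploadId"],
                PartNumber=n, CopySource={"Bucket": bucket, "Key": key},
                # byte range of part n in the source object
                CopySourceRange=f"bytes={first}-{first + part_size - 1}")
            etag = r["CopyPartResult"]["ETag"]
        else:
            r = s3.upload_part(
                Bucket=bucket, Key=key, UploadId=upload["UploadId"],
                PartNumber=n, Body=changed_bodies[n])
            etag = r["ETag"]
        parts.append({"PartNumber": n, "ETag": etag})
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload["UploadId"],
        MultipartUpload={"Parts": parts})
```

For example, replacing part 2 of a 3-part object uploads one part's worth of data instead of the whole object.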
This is the table you want to refer to: S3 Compatibility - Storj Docs. The relevant missing calls are PutObjectAcl, PutObjectRetention, UploadPartCopy, PutBucketLifecycleConfiguration.
I got a response from the team regarding the technical side of your idea (this does not supersede my earlier point about the needed ability to restore from the trash):
Thanks.
Yes, it was clear to me that it would not completely eliminate the need for GC, because pieces could be left behind when nodes cannot fulfill the request, for example if they are offline. But maybe it would get more nodes to delete data themselves instead of relying on the bloom filter alone. Basically we are doing this already where files have a fixed TTL.
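For readers following along, the bloom-filter round works roughly like this; a toy sketch, not Storj's actual code: the satellite builds a filter over all pieces it still considers live and sends it to the node, which trashes everything the filter doesn't match. False positives only mean some garbage survives until a later round; live pieces are never flagged, which is why the fallback is safe.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: k hash positions per item over a fixed bit array."""

    def __init__(self, size_bits: int = 1024, hashes: int = 3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Satellite side: build the filter over pieces that are still live.
live = [b"piece-1", b"piece-2"]
bf = BloomFilter()
for piece in live:
    bf.add(piece)

# Node side: move anything the filter doesn't match to the trash.
local = [b"piece-1", b"piece-2", b"piece-3"]
trash = [p for p in local if not bf.might_contain(p)]
```

Here `piece-3` was deleted on the satellite, so the node's copy ends up in `trash`; the two live pieces always pass the filter, since a bloom filter has no false negatives.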
But if a feature to change an existing TTL already existed or were being worked on, which I believe would be a benefit for the customer, then deletion performed by reducing the TTL of a file might be an option to think about.