Calculating the size of the deleted files is unnecessary (and painful)

As I understand it, my idea would still reduce the number of additional IO operations, specifically on ext4 file systems, as quoted before:

My understanding is that the current implementation performs a stat call for every piece when moving it to the trash, and again when deleting it. If the metadata is not cached (and I believe it is not in @arrogantrabbit’s setup), each of those calls can hit the disk.
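
A tiny, self-contained illustration of the pattern I am describing (purely hypothetical code, not the actual storagenode implementation): the same piece’s metadata is read once on the way into the trash and once more when the trash is emptied.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical demo: one piece file, stat-ed twice over its lifetime in trash.
	dir, _ := os.MkdirTemp("", "stat-demo")
	piece := filepath.Join(dir, "piece.sj1")
	trashed := filepath.Join(dir, "piece.sj1.trash")
	os.WriteFile(piece, []byte("payload"), 0o644)

	info, _ := os.Stat(piece) // metadata read #1: while moving the piece to the trash
	os.Rename(piece, trashed)
	fmt.Println("size recorded on move:", info.Size())

	info, _ = os.Stat(trashed) // metadata read #2: the same size fetched again on deletion
	os.Remove(trashed)
	fmt.Println("size read again on delete:", info.Size())
}
```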

My idea is to use the file size that is already retrieved during the retain process to calculate the prefix folder’s size and write that total to a file. I assume the size is retrieved during retain anyway, in order to update the used-space and trash accounting.

For example, if the retain process moves 30k pieces from a single prefix folder in blobs to a prefix folder in trash, the current implementation performs 30k additional reads during the move and another 30k additional reads when the prefix folder is later deleted. My proposal is to keep the inevitable 30k reads during the move, but sum the sizes at that point to compute the trash prefix folder’s total size. Once the move is complete, that total is written to a file. When the folder is later deleted, a single read of that file retrieves the total size, which can then be used to update the free space, instead of performing 30k additional reads again.
In my view, this approach would save 30k reads per prefix folder.
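
A minimal sketch of what I have in mind, in Go (everything here, including the `.total-size` file name and the function names, is hypothetical and not the actual storagenode code): the sizes returned by the per-piece stat during the move are summed, the total is written once into the trash prefix folder, and emptying that folder later only needs to read this one file.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strconv"
)

// sizeFileName is a hypothetical marker file holding the precomputed folder size.
const sizeFileName = ".total-size"

// moveToTrash moves pieces from blobPrefix into trashPrefix, summing their sizes
// from the stat the move already needs, and persists the total once at the end.
func moveToTrash(blobPrefix, trashPrefix string, pieces []string) (int64, error) {
	var total int64
	for _, name := range pieces {
		src := filepath.Join(blobPrefix, name)
		info, err := os.Stat(src) // the one unavoidable metadata read per piece
		if err != nil {
			return 0, err
		}
		total += info.Size()
		if err := os.Rename(src, filepath.Join(trashPrefix, name)); err != nil {
			return 0, err
		}
	}
	// One small write now instead of thousands of stat calls at deletion time.
	sizePath := filepath.Join(trashPrefix, sizeFileName)
	return total, os.WriteFile(sizePath, []byte(strconv.FormatInt(total, 10)), 0o644)
}

// deleteTrashPrefix reads the single size file and removes the folder,
// returning the freed bytes without any per-piece stat calls.
func deleteTrashPrefix(trashPrefix string) (int64, error) {
	data, err := os.ReadFile(filepath.Join(trashPrefix, sizeFileName))
	if err != nil {
		return 0, err
	}
	total, err := strconv.ParseInt(string(data), 10, 64)
	if err != nil {
		return 0, err
	}
	return total, os.RemoveAll(trashPrefix)
}

func main() {
	// Tiny self-contained demo with two fake pieces in a temp directory.
	root, _ := os.MkdirTemp("", "trash-demo")
	blob, trash := filepath.Join(root, "blobs", "aa"), filepath.Join(root, "trash", "aa")
	os.MkdirAll(blob, 0o755)
	os.MkdirAll(trash, 0o755)
	os.WriteFile(filepath.Join(blob, "piece1.sj1"), []byte("hello"), 0o644)
	os.WriteFile(filepath.Join(blob, "piece2.sj1"), []byte("world!"), 0o644)

	moved, err := moveToTrash(blob, trash, []string{"piece1.sj1", "piece2.sj1"})
	if err != nil {
		log.Fatal(err)
	}
	freed, err := deleteTrashPrefix(trash)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("moved bytes:", moved, "freed bytes:", freed)
}
```

The only extra cost is one small file per trash prefix folder; the per-piece stats during the move happen anyway, so emptying the folder becomes a single read plus the removal itself.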