Ok, first some terminology. A file is split up into 64MB segments, each of which is then erasure encoded into 80 pieces. Those pieces end up roughly 2.3MB in size. I skipped a few steps in between as they are not relevant to this discussion.
So there’s never anything like a 64MB piece.
With that out of the way, uploading a 4MB file would not result in any further splitting into segments. You just end up with a single 4MB segment, which is erasure encoded and split into 80 pieces of roughly 0.14MB each.
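The piece sizes above follow from simple division. A quick sketch, assuming 29 data shares are needed to reconstruct a segment (that number isn't stated in this post, but it's the value consistent with the ~2.3MB and ~0.14MB figures; the 80 total pieces is from above):

```python
# Rough piece-size arithmetic for the erasure coding described above.
K_DATA_SHARES = 29   # assumed minimum pieces needed to reconstruct a segment
N_TOTAL_PIECES = 80  # total pieces produced per segment (from the post)

def piece_size_mb(segment_mb: float) -> float:
    """Each piece holds 1/k of the segment's data (small overhead ignored)."""
    return segment_mb / K_DATA_SHARES

print(round(piece_size_mb(64), 2))  # full 64 MB segment -> ~2.21 MB per piece
print(round(piece_size_mb(4), 2))   # 4 MB file, one small segment -> ~0.14 MB per piece
```

Any 29 of the 80 pieces are enough to rebuild the segment, which is where the durability comes from.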
As you know, you’re paying for storage of the original file size (after encryption) and for egress bandwidth. What’s a little less known is that there is also a per object fee. This is mostly intended to discourage people from uploading tons of minuscule files. It’s so low that for almost anyone it can simply be ignored.
Per Object Fee - Charged at $0.0000022 per file stored. This charge ensures that users are incentivized to store larger files, which are optimized for storage on the network. Decentralized cloud storage is best suited to the storage of large static objects.
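To see just how small that fee is, here's a back-of-the-envelope calculation using the $0.0000022/object figure quoted above (whether it's billed monthly or once is a billing detail I'm not covering here; this only shows the order of magnitude):

```python
# Per-object fee arithmetic, using the $0.0000022 figure from the quote above.
PER_OBJECT_FEE_USD = 0.0000022

def object_fee(num_objects: int) -> float:
    return num_objects * PER_OBJECT_FEE_USD

print(f"${object_fee(1_000):.4f}")      # a thousand files: $0.0022
print(f"${object_fee(1_000_000):.2f}")  # a million tiny files: $2.20
```

So even a million objects only adds a couple of dollars; it only starts to matter if you upload truly absurd numbers of tiny files.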
Tardigrade stores object metadata on the satellites. There is no file system block or anything. There technically aren’t even folders. Tardigrade handles paths as object key prefixes. So you don’t ask it what files are in a folder; you technically ask it which objects have a given prefix. It works largely the same as folders, with some exceptions. You can’t have any empty folders, since by definition there can’t be a prefix without an object.
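The prefix idea is easy to illustrate with a toy example: "listing a folder" is just filtering a flat namespace of object keys by prefix (the keys below are made up):

```python
# Toy illustration of "folders as prefixes" over a flat object namespace.
objects = [
    "photos/2021/beach.jpg",
    "photos/2021/sunset.jpg",
    "photos/readme.txt",
    "docs/invoice.pdf",
]

def list_prefix(keys, prefix):
    """Return the immediate 'children' of a prefix, collapsing deeper levels."""
    children = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        # everything before the next '/' is one listing entry
        children.add(rest.split("/", 1)[0] + ("/" if "/" in rest else ""))
    return sorted(children)

print(list_prefix(objects, "photos/"))       # ['2021/', 'readme.txt']
print(list_prefix(objects, "photos/2021/"))  # ['beach.jpg', 'sunset.jpg']
```

Note that there's no way to represent an empty "folder" here: with no object carrying the prefix, the prefix simply doesn't exist, which is exactly the behavior described above.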
So you’re technically paying a per object fee for the metadata overhead. But it’s negligible. The more important part is that you would have pretty bad performance with many small files compared to fewer large files. And this cost structure incentivizes you to avoid that situation.
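To put a number on that overhead, here's a rough comparison of how many pieces the network has to handle for the same total data stored as one big file versus many tiny ones. It reuses the 64MB segment size and 80 pieces per segment from above; the exact request counts on the real network will differ, this is just the shape of the problem:

```python
# Why many small files hurt: piece counts for 1 GB stored two ways,
# using the 64 MB segment size and 80 pieces/segment from this post.
import math

SEGMENT_MB = 64
PIECES_PER_SEGMENT = 80

def pieces_for(total_mb: float, file_mb: float) -> int:
    files = round(total_mb / file_mb)
    segments_per_file = max(1, math.ceil(file_mb / SEGMENT_MB))
    return files * segments_per_file * PIECES_PER_SEGMENT

print(pieces_for(1024, 1024))  # one 1 GB file: 16 segments -> 1280 pieces
print(pieces_for(1024, 0.1))   # 10,240 files of 0.1 MB -> 819,200 pieces
```

Same gigabyte, ~640x the pieces, and every one of those pieces means a separate connection to a storage node, which is where the performance hit comes from.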