I have 25 TB that I want to share. According to your documentation, you want to reserve 10% of that in the event of over-allocation, but is 2.5 TB really necessary on a big node like this?
It is your choice. I would set it up this way in the beginning and, when it gets full, watch the real used space and then add some more. As far as I have observed, the allocation uses decimal measurement (TB, not TiB).
For a 4 TB drive I allocate 3.9 TB, and it works well in real life; about 70 GB stays free.
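The decimal-measurement point above matters because drive vendors (and the node's allocated-space setting) count in decimal TB, while many OS tools report binary TiB. A quick sketch of the conversion:

```python
# Decimal TB (10^12 bytes) vs binary TiB (2^40 bytes).
TB = 10**12
TiB = 2**40

def tb_to_tib(tb: float) -> float:
    """Convert decimal terabytes to binary tebibytes."""
    return tb * TB / TiB

# A "4 TB" drive holds only about 3.64 TiB as reported by most OS tools:
print(f"{tb_to_tib(4):.2f} TiB")  # -> 3.64 TiB
```

So an allocation of 3.9 TB on a 4 TB drive leaves less headroom than the raw numbers suggest once you compare like units.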
It is up to you, but consider this:
- Filesystems work better (lower fragmentation) when there is enough free space, at least 10%.
- Some problem in the configuration, or a bug in the node software, could make it use more space than configured, and you do not want to end up with zero free space on the drive.
Here’s how I do it: I have a ZFS pool in which I create a zvol, and the node runs inside a VM that uses the zvol as a virtual disk. There is no need to create a large virtual disk initially, so I created a small one and have expanded it as it filled up, always leaving at least 10% free inside the virtual disk. When my pool started filling up (the recommended free space for ZFS is 20%, AFAIK), I bought more drives and expanded the pool.
Why did I do it this way? It is easier to expand a virtual disk than to shrink it, and I wanted to use the pool for other things, not just Storj. When I see that Storj is running out of (configured) space, I check whether I have enough free space in my pool and expand the virtual disk by 3 TB or so.
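The zvol workflow above could look roughly like this. The pool and dataset names (`tank`, `tank/storj-vm`) and the sizes are placeholders; after growing the zvol you still need to extend the partition and filesystem inside the VM.

```shell
# Start with a modest zvol (-s makes it sparse, so it only
# consumes pool space as it is actually written):
zfs create -s -V 4T tank/storj-vm

# Check pool free space before expanding (keep ~20% free):
zpool list tank

# Later, when the node nears its allocation, grow the zvol.
# Note: volsize takes an absolute size, not an increment.
zfs set volsize=7T tank/storj-vm
```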
It is irrelevant; check back next year when you run into an issue.