What should I put as my available disk space?

So when running the docker run command to start the node, I set -e STORAGE="14TB".
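For context, that variable sits in the usual docker run command, roughly like this (wallet, address, and paths below are placeholders from the standard setup, not my real values):

```
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp \
  -e WALLET="0x..." \
  -e EMAIL="user@example.com" \
  -e ADDRESS="external.host:28967" \
  -e STORAGE="14TB" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/storage,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest
```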

df -H reports 16T
df -h reports 15T
lsblk reports 14.6T
sudo fdisk -l | grep Disk reports 14.55TiB

Which is the proper value to use when wanting to offer an entire disk to Storj? I believe it would be the last one but I’m not 100% sure.

Storj displays and counts in TB instead of TiB, so the value to use would be 16TB.
16TB drives are 14.55TiB.
It is, however, recommended to leave some of that space unallocated. The official recommendation is 10% (which would make it 14.4TB), but you can probably go lower.
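If you want to double-check the numbers yourself, the conversion and the 10% margin work out like this (quick shell arithmetic, nothing Storj-specific):

```
# 16 TB (decimal, powers of 1000) expressed in TiB (binary, powers of 1024):
awk 'BEGIN { printf "16 TB = %.2f TiB\n", 16e12 / 2^40 }'    # -> 14.55 TiB
# 90% of the drive, i.e. the allocation with a 10% margin:
awk 'BEGIN { printf "90%% of 16 TB = %.1f TB\n", 16 * 0.9 }' # -> 14.4 TB
```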


I currently have my 16TB drive set to 14TB for Storj. So I can safely set it to 14.4TB to maximize available space?


Just do not specify 16TB 🙂
If you run out of free space, your node may stop functioning and you may not even be able to start it again, especially if we introduce a bug in the free space calculation, and especially if you have disabled the filewalker.
However, the safe option would be to take the result of df -h (it gives you the value in TiB) and specify that number as TB in the storagenode config.
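Using the numbers from your first post, that would look like this (replace /mnt/storj with your actual mount point):

```
# df -h reports sizes in powers of 1024 (TiB-style units):
df -h --output=size,target /mnt/storj
#  Size  Mounted on
#   15T  /mnt/storj
# Writing that same number with a TB suffix (STORAGE="15TB") builds in a margin,
# because 15 TB (decimal) is smaller than 15 TiB (binary).
```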

So given that I have a 16TB drive, what would you set it to?

As written above: 14.4TB.

This will leave approximately 10% free space on your disk.
But I wouldn’t specify more than 14.4TB at the start. Your node will need time to fill even that amount.


Yeah, it’ll take a long time to fill this drive. It’s been running for almost 6 full days now and it’s at 145GB used.

So I’d be safe to set it to 14.4TB being that it is 10% less than 16TB? Sorry, I’m just trying to do this right and understand it all.

Correct, set it to 14.4TB for now, which matches the official 10% free space recommendation.
Once it is all occupied, you can check again with df -H or with df -B 1000000000 (which displays sizes in GB, i.e. the TB value × 1000) to see how much free space is left and whether it correlates with what the node is reporting, and then modify the value again.
I personally use df -B 1000000000 and leave around 100GB unallocated on smaller nodes, but for a 16TB one I probably wouldn’t risk it unless you have a way to expand the partition in case the node doesn’t want to start.
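For that later check, something like this works (a sketch; replace /mnt/storj with your actual mount point):

```
# Free space in GB (powers of 1000, the same way the node counts):
df -B 1000000000 --output=avail,target /mnt/storj
# Compare the Avail column with the free space shown on the node dashboard;
# if they diverge noticeably, adjust the allocated value accordingly.
```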
