My lsblk command on my Pi4 shows my disk, marketed as 8TB, as 7.3 TB. I understand that the OS can calculate disk sizes differently, but it still confuses me when deciding what value to put in the STORAGE parameter:
Do I ignore the OS-calculated size and enter 8TB (minus headroom)?
Or do I follow the OS-calculated size and enter 7.3TB (minus headroom)?
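The 8TB-vs-7.3TB gap is just decimal terabytes (what the drive vendor uses) versus binary tebibytes (what lsblk reports). A quick sanity check, assuming the marketed "8 TB" means 8×10^12 bytes:

```shell
# Drive vendors count in decimal terabytes (10^12 bytes); lsblk counts
# in binary tebibytes (2^40 bytes), so an "8 TB" drive shows as ~7.28 TiB.
awk 'BEGIN { printf "%.2f TiB\n", 8e12 / (1024^4) }'
```

So no capacity is actually missing; the same number of bytes is being divided by a bigger unit.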
Additionally, talking about headroom:
Should it be a relative value, like 10% of the disk size?
Or an absolute value, like 100 MB of headroom regardless of disk size?
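To make the two headroom options above concrete, here is what each would leave you to allocate on an 8 TB drive (the 7.3 TB usable figure and the 100 GB absolute headroom are just illustrative numbers, not recommendations):

```shell
# Usable filesystem space on an "8 TB" drive is roughly 7.3 TB;
# compare 10% relative headroom against a fixed 100 GB headroom.
awk 'BEGIN {
  usable = 7.3e12                      # approx. bytes after formatting
  printf "10%% headroom: allocate %.2f TB\n", usable * 0.9 / 1e12
  printf "100 GB headroom: allocate %.2f TB\n", (usable - 100e9) / 1e12
}'
```

The gap between the two approaches grows with disk size, which is exactly why the question matters for large drives.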
Thanks @Iigloo, but this doesn’t really answer my question. Note that I’m upgrading to an 8TB drive, so I already have quite some data, and I’m keen to get everything out of my disk storage.
According to the calculation in the existing posts, I can only use 6.8TB of the 8TB, which feels like a LOT of headroom to account for. Does the 10% rule still apply to bigger HDDs?
The question is also asked in your referenced post, but not answered.
Simplified question: I have an 8TB disk. What value would you enter, or should I enter, in the STORAGE parameter?
Disk /dev/sda: 6001175126016B
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End             Size            File system  Name        Flags
        17408B  20479B          3072B           Free Space
 1      20480B  6001175109119B  6001175088640B  ext4         STORJ_PART
Filesystem  4K-blocks   Used        Available  Use%  Mounted on
/dev/sda1   1464151684  1462292531  1855057    100%  /mnt/node1

Filesystem  1B-blocks      Used           Available   Use%  Mounted on
/dev/sda1   5997165297664  5989550206976  7598313472  100%  /mnt/node1
I have STORAGE="5970GB" in the parameter, but the node is not getting new uploads except for brief moments when something in the trash is freed. The node always stops receiving data about 500 MB short of the maximum that you passed as the parameter.
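Plugging the df numbers above into a quick check suggests why uploads stop (assuming STORAGE=5970GB is read as 5970×10^9 bytes and everything on /mnt/node1 counts against the node's allocation):

```shell
# Compare the node's configured allocation against what df -B1 reports
# as used: if used already exceeds the allocation, uploads will stop.
awk 'BEGIN {
  alloc = 5970e9            # STORAGE=5970GB, decimal gigabytes
  used  = 5989550206976     # "Used" bytes from the df output above
  printf "used exceeds allocation by %.1f GB\n", (used - alloc) / 1e9
}'
```

Under those assumptions the filesystem already holds more than the allocation, so the node has no reason to accept new data until trash is freed.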
If you are running the node as a normal user, just make sure that there isn’t space reserved for root in the filesystem; if running as root, the reserve doesn’t matter.
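For context on that reserved space: ext4 reserves 5% of blocks for root by default, which is substantial on a large filesystem. A rough sketch, assuming a ~6 TB filesystem like the one shown above (adjust the device name to your setup before running the tune2fs commands):

```shell
# Show roughly how much a default 5% ext4 root reserve costs on ~6 TB.
awk 'BEGIN { printf "5%% reserve on 6 TB = %.0f GB\n", 6e12 * 0.05 / 1e9 }'
# To inspect and clear the reserve on the actual device (run as root):
#   tune2fs -l /dev/sda1 | grep -i reserved   # show reserved block count
#   tune2fs -m 0 /dev/sda1                    # set reserved blocks to 0%
```

That reclaimed ~300 GB is then visible to a non-root node process, which can make the difference between the node filling its allocation and stalling short of it.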