How to decide on disk size based on the Linux lsblk size? (see details)

My lsblk command on my Pi4 shows my 8TB-marketed disk as 7.3 TB. I understand that the OS can calculate disk sizes differently, but it still confuses me a little when deciding what value to put in the STORAGE parameter.

  • Do I ignore the OS-calculated size and enter 8TB (minus headroom)?
  • Or do I follow the OS-calculated size and enter 7.3TB (minus headroom)?

Additionally, talking about headroom:

  • Should that be a relative value, like 10% of the disk size?
  • Or an absolute value, like 100 MB of headroom regardless of disk size?

I hear different stories about this one.

Thank you!

Just set 7TB. It will take at least two years for you to reach 7TB of stored data. I am in my 17th month and I only have 3.58TB stored.

Thanks @Iigloo, but this doesn’t really answer my question. Not to mention that I’m upgrading to an 8TB disk, so I already have quite some data. And I’m keen to get everything out of my disk storage.

8 TB = 7.275957614 TiB
These are different units.
You can use either unit as the storage parameter.
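
If you want to double-check the conversion, a bc one-liner does it (8 × 10^12 bytes divided by 2^40 bytes per TiB):

# 8 TB as marketed (decimal) expressed in TiB (binary), the unit lsblk shows as ~7.3T
echo "scale=4; 8 * 10^12 / 2^40" | bc
7.2759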


@jammerdan thanks. However, both df -HT and lsblk report sizes with only a T suffix, hence the confusion.

  • lsblk reports 7.3T as the size, which corresponds with your rounded calculation.
  • df -HT reports 8.0T as the size and 7.6T as the available space.

So there is still some discrepancy between the reported sizes.
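
One way to compare like with like is to ask both tools for raw byte counts, since lsblk rounds to binary (TiB-based) units by default while df -H rounds to decimal (TB-based) units; the device name and mount point below are only placeholders:

# sizes in plain bytes, without any unit rounding (adjust device and mount point)
lsblk -b -o NAME,SIZE /dev/sda
df -B1 /mnt/storagenode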

You have probably not taken into account the reserved space:


This could help as well:


According to the calculation in the existing posts, I can only use 6.8TB of the 8TB, which feels like a lot of headroom to account for. Does the 10% rule still apply to bigger HDDs?

The question is also asked in your referenced post, but not answered.
Simplified question: I have an 8TB disk. What would you / should I enter in the storage parameter?

I don’t think you need to reserve 10% of the disk; that is too much waste.
Let me show you a concrete example:

I have a 6TB disk that has been full for months.

User Capacity:    6,001,175,126,016 bytes [6.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical

Disk /dev/sda: 6001175126016B
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End             Size            File system  Name        Flags
        17408B  20479B          3072B           Free Space
 1      20480B  6001175109119B  6001175088640B  ext4         STORJ_PART

Filesystem      4K-blocks       Used Available Use% Mounted on
/dev/sda1      1464151684 1462292531   1855057 100% /mnt/node1

Filesystem         1B-blocks          Used  Available Use% Mounted on
/dev/sda1      5997165297664 5989550206976 7598313472 100% /mnt/node1

I have STORAGE="5970GB" as the parameter; the node does not get new uploads except for the brief moments when something in the trash is freed. The node always stops receiving data about 500 MB short of the maximum value that you passed as the parameter.
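
Just as a sketch of how a number like that can be derived rather than guessed, assuming GNU df (the 30 GB buffer is an arbitrary example, not an official recommendation):

# filesystem size in bytes, minus a modest buffer for trash, databases and overhead
FS_BYTES=$(df -B1 --output=size /mnt/node1 | tail -1)
BUFFER_BYTES=$((30 * 1000 * 1000 * 1000))   # ~30 GB of headroom, pick your own
echo "STORAGE=$(( (FS_BYTES - BUFFER_BYTES) / 1000000000 ))GB"

On the filesystem above this prints STORAGE=5967GB, close to the 5970GB the node is actually configured with.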

Screenshot

If you are running the node as a normal user, just make sure that there isn’t any space reserved in the filesystem. If running as root, it doesn’t matter.

tune2fs -l /dev/sda1  | grep Reserved
Reserved block count:     0
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
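
If that count had not been zero and the filesystem is used only for the node, the default root reservation can be removed (the device name is just the one from this example):

# drop the root-reserved blocks (5% by default on ext4) to 0%
tune2fs -m 0 /dev/sda1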

I hope this gives you some idea.
