EXT4 And Storagenode Free Space & Optimization

To be more scientific, I created this script:

# For every ext4 partition labelled STORJ-DATA, read the inode/block counters
# from the superblock and derive a few usage ratios (needs root for tune2fs -l).
lsblk -o PATH,LABEL,FSTYPE | grep -iE "STORJ-DATA\s+ext4$" | while read -r sLine; do
	sDev="$( echo "$sLine" | sed 's/\s\+.*//g' )"	# first column = device path
	# Keep only: Inode count, Block count, Free blocks, Free inodes (in that order).
	sDump=$( tune2fs -l "$sDev" 2> /dev/null | grep -iE "^((Inode|block) count|Free (Inodes|blocks)):" | sed 's/[^[:space:]0-9]//g' )
	echo "$sDev"
	# $sDump is intentionally unquoted so all four counters land on one line:
	# $1 = inode count, $2 = block count, $3 = free blocks, $4 = free inodes
	echo $sDump | awk 'NR==1 { printf ( "Blocks per inode: %.2f\nUsed block per used inode: %.2f\nRatio: %.2f\nUsed block fraction %.2f\nUsed Inode fraction %.2f\n\n", ( $2 / $1 ), ( ($2 - $3) / ($1 - $4) ), ( $2 / $1 ) / ( ($2 - $3) / ($1 - $4) ),  1 - ($3 / $2), 1 - ($4 / $1) ) }'
done

Resulting in:

/dev/sdb3
Blocks per inode: 16,00
Used block per used inode: 62,90
Ratio: 0,25
Used block fraction 0,03
Used Inode fraction 0,01

/dev/sdc2
Blocks per inode: 16,00
Used block per used inode: 85,26
Ratio: 0,19
Used block fraction 0,74
Used Inode fraction 0,14

/dev/sde2
Blocks per inode: 16,00
Used block per used inode: 79,76
Ratio: 0,20
Used block fraction 0,78
Used Inode fraction 0,16

/dev/sdf1
Blocks per inode: 16,00
Used block per used inode: 78,15
Ratio: 0,20
Used block fraction 0,05
Used Inode fraction 0,01

/dev/sdg1
Blocks per inode: 16,00
Used block per used inode: 64,86
Ratio: 0,25
Used block fraction 0,92
Used Inode fraction 0,23

/dev/sdh1
Blocks per inode: 16,00
Used block per used inode: 73,52
Ratio: 0,22
Used block fraction 0,02
Used Inode fraction 0,00

/dev/sdi1
Blocks per inode: 16,00
Used block per used inode: 78,16
Ratio: 0,20
Used block fraction 0,14
Used Inode fraction 0,03

/dev/sdj1
Blocks per inode: 16,00
Used block per used inode: 71,78
Ratio: 0,22
Used block fraction 0,20
Used Inode fraction 0,04

/dev/sdk1
Blocks per inode: 16,00
Used block per used inode: 42,58
Ratio: 0,38
Used block fraction 0,64
Used Inode fraction 0,24

/dev/sdl1
Blocks per inode: 16,00
Used block per used inode: 37,72
Ratio: 0,42
Used block fraction 0,42
Used Inode fraction 0,18

/dev/sdm3
Blocks per inode: 16,00
Used block per used inode: 85,85
Ratio: 0,19
Used block fraction 0,12
Used Inode fraction 0,02

/dev/sdo1
Blocks per inode: 4,00
Used block per used inode: 42,88
Ratio: 0,09
Used block fraction 0,84
Used Inode fraction 0,08

/dev/sdp1
Blocks per inode: 16,00
Used block per used inode: 76,94
Ratio: 0,21
Used block fraction 0,22
Used Inode fraction 0,05

/dev/sdq1
Blocks per inode: 16,00
Used block per used inode: 71,41
Ratio: 0,22
Used block fraction 0,60
Used Inode fraction 0,14

/dev/sdr1
Blocks per inode: 4,00
Used block per used inode: 46,99
Ratio: 0,09
Used block fraction 0,78
Used Inode fraction 0,07
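
By the way, the four counters each block above is derived from come straight from the tune2fs output, in exactly the order the awk expression expects; to double-check a single disk (using /dev/sdb3 from above as the example):

# Prints, in order: Inode count, Block count, Free blocks, Free inodes
tune2fs -l /dev/sdb3 | grep -E "^(Inode count|Block count|Free blocks|Free inodes):"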

Since they’re all using 4 KiB blocks, the average file size works out to roughly 150-345 KiB (used blocks per used inode × 4 KiB).
It’s interesting to see the differences between the nodes…
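
If you want that estimate per disk directly, here is a minimal sketch along the same lines; it reads the actual block size from tune2fs instead of assuming 4 KiB, and the result is only approximate because directories and other metadata inodes are counted as "files" too:

# Approximate average file size = used blocks * block size / used inodes
tune2fs -l /dev/sdb3 2> /dev/null | awk -F: '
	/^Inode count/ { nInodes = $2 }
	/^Block count/ { nBlocks = $2 }
	/^Free blocks/ { nFreeBlocks = $2 }
	/^Free inodes/ { nFreeInodes = $2 }
	/^Block size/  { nBlockSize = $2 }
	END { printf "Approx. average file size: %.0f KiB\n", ( (nBlocks - nFreeBlocks) * nBlockSize ) / (nInodes - nFreeInodes) / 1024 }'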

So I guess my free space is fine here. :slight_smile: I was wondering whether I could give the nodes some more space and let the free space drop below 13%. Both disk arrays are made of 2 x 12 HDDs of 4 TB in hardware RAID on separate devices. I think it was RAID 60 with parity count 2; I’d rather not reboot to double-check. :smiley: :smiley: :smiley: Anyway… maybe nowadays, with TB-sized filesystems being the norm, the free space could safely be scaled back to, say, 2-3%?
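
Before changing anything, this is a minimal sketch to see the current free-space margin per filesystem; it assumes every STORJ-DATA partition is mounted and that the mount points contain no spaces:

lsblk -o PATH,LABEL,FSTYPE,MOUNTPOINT | grep -iE "STORJ-DATA\s+ext4\s+/" | while read -r sLine; do
	sMnt="$( echo "$sLine" | awk '{ print $NF }' )"	# last column = mount point
	df -h --output=source,size,used,avail,pcent "$sMnt" | tail -n 1
done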

You may try, but if you have a discrepancy (Disk usage discrepancy?), the node may use more space than allocated. It should stop receiving new data when there is only 500 MB left, but it’s still too risky.
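
If you do trim the margin, here is a minimal sketch of a safety check that could run from cron and warn well before the node gets near that limit (the threshold and mount point are hypothetical, adjust them to your setup):

MIN_FREE_GIB=50	# arbitrary threshold, pick your own
sMnt="/mnt/storj01"	# hypothetical mount point
nFreeGiB=$( df -BG --output=avail "$sMnt" | tail -n 1 | tr -d ' G' )
if [ "$nFreeGiB" -lt "$MIN_FREE_GIB" ]; then
	echo "WARNING: $sMnt has only ${nFreeGiB} GiB free" >&2
fi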