Tuning the filewalker

The dumpe2fs command prints the superblock and block group information for the filesystem on a device.

  • Can be used with ext2/ext3/ext4 filesystems.
  • The printed information may be old or inconsistent when it is used on a mounted filesystem.
  • Don’t forget to unmount your partition before using this command.
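
For example, a minimal invocation looks like this (the device name /dev/sdX1 is a placeholder; -h prints only the superblock summary):

# Print only the superblock information for the given device (needs root)
sudo dumpe2fs -h /dev/sdX1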
  1. So do I have to unmount the partition? Isn’t this going to make my NAS “blue screen”, since the storagenode and the OS are running on that partition?

  2. And how to get the device name?

Edit: found it with mount

/dev/md0 on / type ext4 (rw,noatime,data=ordered)
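
(Side note: if findmnt is available, it prints just the backing device for a given mount point, without the rest of the mount table:)

# Show only the source device behind the root mount
findmnt -n -o SOURCE /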

It seems I don’t have dumpe2fs. I tried manual, help, version; it says “command not found”.
Any alternatives? Or how to install it?
Synology uses a custom distro based on:

Linux version 4.4.302+ 
os_name="DSM"

Running it on a mounted filesystem is fine in this case; the entries that matter here do not change.

Sorry, I’m not familiar with Synology, so can’t help here.

Filesystem volume name:   
Last mounted on:          /
Filesystem UUID:          
Filesystem magic number:  
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              155648
Block count:              622544
Reserved block count:     25600
Free blocks:              227942
Free inodes:              113100
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      303
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       
Last mount time:          
Last write time:          
Mount count:              4
Maximum mount count:      -1
Last checked:             
Check interval:           0 (<none>)
Lifetime writes:          21 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       49517
Default directory hash:   half_md4
Directory Hash Seed:      
Journal backup:           inode blocks
Journal features:         journal_checksum journal_incompat_revoke journal_64bit
Total journal size:       64M
Total journal blocks:     16384
Max transaction length:   16384
Fast commit length:       0
Journal sequence:         0x0001d3d9
Journal start:            2332
Journal checksum type:    crc32

This drive is an Exos 16TB, 512e formatted.
Now it’s almost full, with 14.5TB of data.
Can you take a look and see if anything can be improved?

Are you sure this is the right filesystem? It seems to have only 155k inodes in total (and only ~42k in use), whereas we’d expect 14 TB worth of blobs to need tens of millions of files. Besides, it reports only 21 GB of lifetime writes. Given the “Last mounted on: /” line, I suspect this is a system partition, not a data partition.
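
A quick sanity check straight from the dump above, since used inodes = inode count − free inodes:

# Values taken from the dumpe2fs output above
echo $(( 155648 - 113100 ))   # => 42548 used inodes, far too few for 14 TB of blobs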

Yeah, the “Last mounted on: /” line points to this being the dump of the wrong drive.

I believe the mount command gave me the DSM partition, md0. I will try md1.
In case anyone wonders, I managed to install dumpe2fs from the SynoCommunity packages; there is a DiskCLI disk-tools package.
If I put in a wrong/non-existent device name, is there any problem? Can it crash something?

Nope. This tool only reads information.

I can’t find it. I ran ls on the /dev directory and tried everything that resembled a partition: mdx, sgx, sata1px… I get the same output from md0, sata1p1, and sata2p1. Others gave me errors. I don’t see any sda, sdb, etc. Maybe Synology hides the data partitions or something.
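
If the device nodes are hidden or renamed, the kernel’s own partition listing shows every block device it knows about:

# Major/minor numbers, size in 1 KiB blocks, and device name
cat /proc/partitions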

You may see them with df --si -T

Filesystem        Type      Size  Used Avail Use% Mounted on
/dev/md0          ext4      2.5G  1.5G  858M  64% /
devtmpfs          devtmpfs  9.4G     0  9.4G   0% /dev
tmpfs             tmpfs     9.4G  431k  9.4G   1% /dev/shm
tmpfs             tmpfs     9.4G   17M  9.4G   1% /run
tmpfs             tmpfs     9.4G     0  9.4G   0% /sys/fs/cgroup
tmpfs             tmpfs     9.4G  713k  9.4G   1% /tmp
/dev/vg1/volume_1 ext4       16T   15T  1.7T  90% /volume1
/dev/vg2/volume_2 ext4       22T  1.2T   21T   6% /volume2

vol1 512e:

Filesystem volume name:   1.44.1-42661
Last mounted on:          /volume1
Filesystem UUID:          
Filesystem magic number:  
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              488144896
Block count:              3905159168
Reserved block count:     25600
Free blocks:              304296383
Free inodes:              434313675
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      185
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         4096
Inode blocks per group:   256
Flex block group size:    16
Filesystem created:       
Last mount time:          
Last write time:          
Mount count:              30
Maximum mount count:      -1
Last checked:             
Check interval:           0 (<none>)
Lifetime writes:          33 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     36
Desired extra isize:      36
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      
Journal backup:           inode blocks
Journal features:         journal_checksum journal_incompat_revoke journal_64bit
Total journal size:       1024M
Total journal blocks:     262144
Max transaction length:   262144
Fast commit length:       0
Journal sequence:         0x04a5b026
Journal start:            36611
Journal checksum type:    crc32

vol2 4Kn:

Filesystem volume name:   1.44.1-64570
Last mounted on:          /volume2
Filesystem UUID:          
Filesystem magic number:  
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              335527936
Block count:              5368446976
Reserved block count:     25600
Free blocks:              5073318392
Free inodes:              331274292
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
Flex block group size:    16
Filesystem created:       
Last mount time:          
Last write time:          
Mount count:              5
Maximum mount count:      -1
Last checked:             
Check interval:           0 (<none>)
Lifetime writes:          1863 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     36
Desired extra isize:      36
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      
Journal backup:           inode blocks
Journal features:         journal_checksum journal_incompat_revoke journal_64bit
Total journal size:       1024M
Total journal blocks:     262144
Max transaction length:   262144
Fast commit length:       0
Journal sequence:         0x0051ee3e
Journal start:            206237
Journal checksum type:    crc32

So, yeah, your metadata takes around 17.5 GB:

((335527936 - 331274292) + (488144896 - 434313675)) * 300 bytes

(300 bytes covers an inode and a directory entry.) If the box has 18 GB of RAM, then I’d assume Synology itself also needs some of that RAM, so the metadata is crossing the limit of what can stay cached.
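
The same estimate as a shell one-liner, using the used-inode counts from the two dumps above:

# (used inodes of vol2 + used inodes of vol1) * ~300 bytes each
echo $(( ((335527936 - 331274292) + (488144896 - 434313675)) * 300 ))
# => 17425459500 bytes, i.e. roughly 17.4 GB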

I don’t see any other problematic settings there except for the lack of the dir_index feature, but after googling a bit this seems to be a Synology-specific thing. Weird, but it should not affect the used-space file walker.
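
For what it’s worth, on stock ext4 the dir_index feature can normally be enabled on an unmounted filesystem as sketched below; whether DSM tolerates this is untested here, so treat it as a sketch, not a recommendation:

# Enable the dir_index feature flag, then rebuild the directory indexes
tune2fs -O dir_index /dev/vg1/volume_1
e2fsck -fD /dev/vg1/volume_1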

Still, it’s weird that it was that slow even when this box held only 9.5 TB.


I believe I was using those two parameters at 4 MiB, the ones that cache the incoming pieces. I saw an increase in buffers, which took up about half of the RAM. Now I’ve put them back to default: one is 128 KiB and the other still 4 MiB; buffers now occupy about 10% of RAM. Those tests were done a year ago.
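
Assuming the two settings meant here are filestore.write-buffer-size (default 128 KiB) and pieces.write-prealloc-size (default 4 MiB), the values currently in effect can be checked in the node’s config; the path below is illustrative:

# Assumed parameter names; adjust the config path to your setup
grep -E 'write-buffer-size|write-prealloc-size' /volume1/storj/config.yaml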

Looks like there is a bug regarding the lazy filewalker.
It’s not working even on a very fast node.

I guess not?
It doesn’t fail on my three nodes (1 Windows service, 2 Docker Desktop for Windows).
But I don’t use BTRFS/ZFS/NFS/SMB and no RAID: just plain NTFS and one node per HDD.

Same here. Then mine is simply too fast for the lazy filewalker, obviously.


Why doesn’t the FileWalker just read the file system and its inodes?
Every file (piece) has an entry there, right? If the entry is missing, then the file is lost. If the entry is there but the file is missing/corrupted/replaced by an empty file or anything else, then it will fail audits anyway.
So if we already have audits in place, why do we need another service to check the same thing?

It’s not for the audit purpose, I suppose; it’s to report the correct used space.
Please note, the satellites operate with segments, not nodes, so they need a confirmation that there is free space on the network to upload a piece of each segment (but that’s a separate process in node selection).
Perhaps, if it were more monolithic and had information about the whole picture, this could be accounted for centrally, but at the expense of response time.
And this is where we want to have improvements.
Right now it’s implemented this way to offload the work of calculating the available space to the nodes. As a result, node selection is fast and simple, and the customer is not forced to wait for reports from all 110 requested nodes during the upload of each segment.
Of course there is a trade-off if a node is overloaded or doesn’t actually have free space; right now that’s accepted, but I would expect improvements in this regard.
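
Conceptually, the used-space file walker just walks the blobs directory and sums the file sizes, which is why it is so metadata-heavy. A rough shell equivalent, with an illustrative path:

# Stat every piece file under blobs and total their sizes
find /volume1/storj/storage/blobs -type f -printf '%s\n' \
  | awk '{ total += $1 } END { printf "%.1f GB\n", total / 1e9 }'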