Best filesystem for Storj, part 2
Hello everyone.
The new year has barely begun, and I have already nearly worn out my hard drive with tests and prepared another note comparing file system performance.
Today I looked at the impact the LVM layer has on the file systems running on top of it. For testing I used fio with the same parameters as in the first part; two identical tests were run for each file system, once on LVM and once without it. The partition is still 2400 GB, no data had been written to it beforehand, and the tests were run on an empty file system. In the case of BTRFS the result was somewhat unexpected, and I have no explanation for it. At first I even thought I had accidentally run the test on an SSD, but after rerunning it the picture was exactly the same. Take a look at the graphs.
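The exact fio parameters are given in part 1 and not repeated here, so the job file below is only an illustrative sketch: the ioengine, block size, file size, runtime, and target directory are my assumptions, not the settings actually used.

```ini
; Hypothetical fio job file; all values below are placeholder assumptions.
[global]
ioengine=libaio
direct=1
bs=1M
size=10G
runtime=60
time_based=1
directory=/mnt/test

[seq-read]
rw=read

[seq-write]
rw=write
```

Running the same job file twice, once against the plain partition and once against the LVM logical volume, is what makes the with/without-LVM comparison apples-to-apples.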
Read comparison with and without LVM, in MiB/s (higher is better)
Write comparison with and without LVM, in MiB/s (higher is better)
I also extended the main test with two more configurations: ntfs and btrfs+meta. In btrfs it is not possible to move the metadata entirely to a separate disk, but I managed to create a semi-hybrid volume out of an SSD and an HDD, in which the metadata is in raid1 mode (that is, on both disks) while the data is in single mode, only on the hard disk. I reproduced the file listing in Windows with the command dir /s /b. Initially I planned to calculate folder sizes with Mark Russinovich's du utility, but the calculation took suspiciously long (the code seems suboptimal), so instead I simply opened the folder properties and used a timer to measure how long it took to determine the occupied space.
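For reference, a hybrid volume like this can be created by listing both devices at mkfs time. The device paths below are placeholders, and this is a sketch of the general approach rather than the exact commands used for the test: with the single data profile, btrfs allocates each data chunk on the device with the most free space, so with a much larger HDD the data effectively stays off the SSD.

```shell
# Placeholder device paths: /dev/sdSSD is the SSD, /dev/sdHDD the hard disk.
# Metadata is mirrored (raid1) across both devices; data chunks use the
# single profile and land on the device with the most free space,
# which in practice is the much larger HDD.
mkfs.btrfs -m raid1 -d single /dev/sdSSD /dev/sdHDD
mount /dev/sdHDD /mnt/hybrid

# Check where metadata and data chunks were actually allocated:
btrfs filesystem usage /mnt/hybrid
```

The `btrfs filesystem usage` output is worth checking after some data has been written, since the single-profile allocation only avoids the SSD as long as the HDD has more free space.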
fio read and write in MiB/s (higher is better)
File listing time and occupied-space calculation in seconds (lower is better)
In the next note, I plan to use a script to reproduce the workload of a storj node (in terms of writing and deleting files) so that the disk becomes thoroughly fragmented, and then measure how long it takes to copy the data (à la rsync and robocopy) and how long it takes to delete all the files.
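As a preview of what such a script could look like, here is a minimal sketch: it interleaves batches of small writes with random deletions so that free space ends up riddled with holes, then times a full tree copy. The round counts and file sizes are placeholder assumptions on my part, not the parameters of a real storj node workload.

```python
import os
import random
import shutil
import tempfile
import time

def fragment_workload(root, rounds=5, files_per_round=200, size=4096):
    """Interleave writes and deletions so free space gets fragmented.

    The counts and file size here are placeholder assumptions, not what
    a real storj node would produce.
    """
    rng = random.Random(42)  # fixed seed for a repeatable layout
    alive = []
    counter = 0
    for _ in range(rounds):
        # Write a batch of small files.
        for _ in range(files_per_round):
            path = os.path.join(root, f"blob_{counter:06d}.bin")
            counter += 1
            with open(path, "wb") as f:
                f.write(os.urandom(size))
            alive.append(path)
        # Delete roughly half of the existing files at random,
        # scattering holes across the allocated space.
        rng.shuffle(alive)
        for path in alive[: len(alive) // 2]:
            os.remove(path)
        alive = alive[len(alive) // 2 :]
    return alive

def timed_copy(src, dst):
    """Measure how long a full tree copy takes (a rough rsync stand-in)."""
    start = time.monotonic()
    shutil.copytree(src, dst)
    return time.monotonic() - start

if __name__ == "__main__":
    work = tempfile.mkdtemp(prefix="frag_src_")
    survivors = fragment_workload(work)
    copy_dir = work + "_copy"
    elapsed = timed_copy(work, copy_dir)
    print(f"{len(survivors)} files survived, copy took {elapsed:.2f}s")
    shutil.rmtree(work)
    shutil.rmtree(copy_dir)
```

On a real HDD the interesting measurement is the copy and delete time after many such rounds, when the file system's allocator has had to scatter new files into the freed gaps.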
Subscribe to my Telegram channel @temporary_name_here and
Stay tuned…