What is the best file system for storj node?

Couldn't find an answer, so what is the best file system for a Storj node? Especially for large disks, 6-10 TB?
NTFS? Ext2-4? exFAT? FAT32 ;D ?
And what cluster size? This is under Windows 10.

Best regards

The best filesystem for the OS is native.
Since you are on Windows the answer is obvious - NTFS with default parameters.
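For reference, a sketch of checking and setting that up on Windows — the drive letter `D:` and the volume label here are just examples, adjust to your own disk:

```shell
# Show the current NTFS cluster size ("Bytes Per Cluster" in the output)
fsutil fsinfo ntfsinfo D:

# Format a data volume as NTFS with the default cluster size
# (4 KB for volumes up to 16 TB, so a 6-10 TB disk is fine with defaults).
# PowerShell:
Format-Volume -DriveLetter D -FileSystem NTFS -NewFileSystemLabel "storj"
```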


I’m running StorJ in Docker on my Synology DS412+, which has BTRFS.
Running for the last 3 months.
No issues.


I am also running BTRFS - no problems or errors related to my array.

Please don't use BTRFS on regular systems, with the exception of Synology. They fixed most of the bugs and it works fine on Synology.
Unfortunately they didn't share that code with the community. The mainline BTRFS code is still not ready for production.


I have to disagree with that blanket statement. You base it on the fact that I reported failed audits, and after I mentioned that I use BTRFS you quickly blamed BTRFS for that. But the truth is that there was a bug in Storj where already-deleted pieces got audited and therefore failed: Failed Audits (v0.21.1)

The bug seems to be fixed now, and I haven't had failed audits since then.

BTRFS on a single disk is perfectly fine. The BTRFS status page only considers RAID5 unstable and the issue you quoted affects only a 2 drive RAID1 when 1 drive fails and the user mounted it a second time in degraded state. I have a 3 drive RAID1 as my media storage and after 1 drive failed and multiple degraded mounts I was able to replace the disk without problems.
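For what it's worth, the replacement procedure described above can be sketched like this — the device paths, mount point, and the missing device's ID are examples only:

```shell
# After a drive failure (say /dev/sdb died), mount the array degraded
mount -o degraded /dev/sdc /mnt/media

# Find the numeric ID of the missing device
btrfs filesystem show /mnt/media

# Replace missing device 1 with the new disk, then watch progress
btrfs replace start 1 /dev/sdd /mnt/media
btrfs replace status /mnt/media
```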

This is just a link with an explanation of why I do not recommend using BTRFS on regular systems

What about ZFS? I would like to find the best file system for handling IOPS. I am hosting 5 nodes on a Proxmox server. IO wait is too high with ext4.

Hello @oty,
Welcome to the forum!

See: Topics tagged zfs

and

you should run one node per disk. If you use RAID with parity, the whole pool will work as slowly as the slowest disk in the pool. This is true for any implementation. The speed will be :1st_place_medal: ext4 on LVM, :2nd_place_medal: btrfs, :3rd_place_medal: zfs. For a single-disk configuration the ranking is the same. However, in the case of zfs you may improve things by throwing resources at it (an SSD special device, more RAM, more tuning).
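The one-node-per-disk setup above can be sketched like this — device names, mount points, and the abbreviated `docker run` line are examples, not a complete command:

```shell
# Format each data drive as ext4 and mount it separately.
mkfs.ext4 -m 0 /dev/sdb1          # -m 0: no reserved root blocks needed on a data-only disk
mkdir -p /mnt/storj1
mount -o noatime /dev/sdb1 /mnt/storj1

# Then run one storagenode container per mount point, e.g. (flags abbreviated):
docker run -d --name storagenode1 \
  --mount type=bind,source=/mnt/storj1,destination=/app/config \
  ... storjlabs/storagenode:latest
```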
See why (it's about BTRFS, but as a filesystem with heavy metadata overhead the same applies to zfs):

See also

Hmm. Interesting. Will try moving one node to a ZFS single disk + special device. I have plenty of free RAM and a good enough CPU.
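A minimal sketch of such a pool (OpenZFS >= 0.8), assuming `/dev/sdb` is the data disk and `/dev/nvme0n1p1` an SSD partition — the pool name and tuning values are just a starting point, not gospel. Note that losing a non-redundant special device loses the whole pool, so mirror it in production:

```shell
# Single-disk pool with an SSD "special" vdev for metadata
zpool create -o ashift=12 storj /dev/sdb special /dev/nvme0n1p1

# Common node-friendly tuning
zfs set atime=off storj
zfs set recordsize=1M storj             # blobs are large; metadata goes to the SSD anyway
zfs set special_small_blocks=4K storj   # push tiny blocks to the SSD as well
```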

And yes, every node is located on a separate HDD. Maybe I should buy a SCSI HBA controller.

You can usually only compare on a full node. A small node will work faster at the beginning on BTRFS, but after some time it becomes very slow. zfs doesn't have this "feature" though - it works steadily regardless of size, like ext4, but usually 2-3 times slower than ext4 without tuning.

Hmm. Very interesting.

With full respect, isn't this part of your advice slowly becoming slightly misleading? Despite providing links to various posts on this forum, you are now giving info only about vanilla configs for btrfs and zfs. How does tuning change the picture... and... how about XFS with metadata separated to another location?

Please post your link to this comparison - it will give more opinions to the author.
BTRFS and zfs were mentioned here because the question was about zfs and about high iowait with ext4.

So, I do not know how to offer a fair comparison to answer this question; you may try to do better :slight_smile:
My opinion is based on my own tests:

I hear you @Alexey! However, this is the second thread on this topic recently, and it looks like you are again asking me to do some testing on your behalf. Please kindly be informed that I am not planning to do any comparisons on this topic in the foreseeable future. Such tests would require a significant commitment of time, and in general I think such a task lies within Storj Inc.'s area of competence. So what is the best filesystem to run a node - whether you are a beginner, an intermediate or an advanced user - is it ext4 in all of those cases?

I do not want to test anything, because my nodes are far from the proposed configurations; they are strictly as described: no RAID, no exotic FS, 1 Windows node, 2 docker nodes. Just to make sure that our default recommendations still work.

You always suggest doing tests on some exotic FS or configurations, so I always ask you to do them yourself; please forgive me for that.


I hear you again: if you do not want to do any testing, please do not ask anybody to do testing on behalf of Storj Inc. In particular, I hope you are not going to ask me again to do testing on behalf of your employer.

The case is about your advice on questions about the performance of different filesystems. As I wrote two posts above, it seems that your advice on this topic is slowly becoming slightly misleading, or to put it another way, maybe your advice is not quite precise enough; I'm not sure if you noticed.

So if you do not have any precise test results behind your advice on filesystem performance and you are relying on other people's words, please stay calm and let the topic flow.

BTW, those are not "some weird FS or configurations"; the situation is just starting to look like you guys are beating the same drum again and again without any reason.

I can only emphasize the convenience of these kinds of setups. I still regret my first node setup using mergerfs, and some further nodes choking on BTRFS.

Moreover, additional layers like md-/zfs-/btrfs-raid make things more complex and actually duplicate what the network as a whole is already doing: adding redundancy in order to be resilient to some data loss.


Yeah, mergerfs might be too heavy and too fragile at the same time for the task being discussed here. However, to challenge @arrogantrabbit's ZFS reference config, I was wondering about XFS in this thread and about Lustre in the other. What do you think, @arrogantrabbit?