I kindly asked you @Alexey recently not to ask me to do any testing, as it seems to me like a domain of Storj. Maybe @snorkel's friend could also provide you with some additional advice on this topic. :-) I really have to say, sorry, man. I'm currently running some nodes with XFS, but my setup is nonstandard. I believe it is fully in line with the Storj ToS, but you really don't want to hear all of its details; in a nutshell, I'm running XFS on top of Lustre, with metadata on ext4. Nevertheless, I am surprised to hear about all those problems related to XFS.
For a lightweight setup I would probably try XFS, storing the data blocks on XFS with the metadata separated out to NVMe (if possible). On paper it looks to me like a great solution; should it not work, I would probably blame Storj. As for an agile and super reliable setup, I would probably go for a combination of Lustre and ZFS, possibly highly tuned by @arrogantrabbit: Lustre working as a read cache and ZFS caching on the write side (something like a combination of Amazon File Cache and Oracle ZFS Storage). :-)

As for the problems with rsync, I do not think they are only related to disk seek time constraints, but I can't provide you with a precise explanation. I can refer you to my other post on this topic; a small disclaimer is that the focus of that post is mostly on data transfers over WAN. Nevertheless, I still believe there is some useful info there for local transfers as well. I will also try to update this post with a few other tools, possibly later today. Cheers.
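A note on the "XFS with metadata on NVMe" idea above: plain XFS cannot fully split metadata onto a separate device, but it can place its journal (log) on one, which moves the journal writes off the spinning disk. A minimal sketch, assuming hypothetical device names (`/dev/sdb1` as the HDD data partition, `/dev/nvme0n1p1` as a small spare NVMe partition):

```shell
# Hypothetical devices; adjust to your own layout.
# /dev/sdb1       - large HDD partition for the node data
# /dev/nvme0n1p1  - small NVMe partition used as the external XFS log
mkfs.xfs -l logdev=/dev/nvme0n1p1,size=2000m /dev/sdb1

# The external log device must be given again at every mount:
mount -o logdev=/dev/nvme0n1p1,noatime /dev/sdb1 /mnt/storagenode
```

This only offloads the journal, not all metadata reads, so it is a partial version of the idea; a full metadata/data split needs something like Lustre or ZFS special vdevs.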
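On the rsync side, one pattern that helps with millions of small files is iterating while the node is still running and doing only the final pass offline; the slowness is largely per-file metadata overhead, not just seeks. A sketch with placeholder paths (not a definitive migration procedure):

```shell
# Placeholder paths; trailing slashes matter for rsync.
# First (and repeated) passes while the node keeps running:
rsync -a --info=progress2 /mnt/old/storagenode/ /mnt/new/storagenode/

# Repeat the above until each pass transfers little, then stop the
# node and do one final pass that also removes deleted files:
rsync -a --delete /mnt/old/storagenode/ /mnt/new/storagenode/
```

`--info=progress2` (rsync 3.1+) gives whole-transfer progress instead of per-file output, which is much more readable over a huge file count.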