Static Linux mount mergerfs issue

Hi,

I’m setting up a test rig using an old computer with 5x1TB drives on Ubuntu 19.10. I created a volume with mergerfs to combine these into one share, but it isn’t recognized as a “block type” device and doesn’t pass the check that uses “lsblk”.

What are my options? I’m fairly new to Linux.

Thanks,
alfananuq

Hello @alfananuq,
Welcome to the forum!

  1. Do not use mergerfs
  2. Do not use any RAID0 variations, including simple LVM
  3. We recommend running one node per HDD.
  4. If you really want to merge all disks, use RAID with parity (RAID5, RAID6; rough sketch below): https://raid.wiki.kernel.org/index.php/RAID_setup
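
If you do go the parity route from point 4, a minimal mdadm sketch could look like this (assuming the five drives show up as /dev/sdb through /dev/sdf and you mount at /mnt/storj; with 5x1TB, RAID5 leaves roughly 4TB usable):

    # create a RAID5 array across the five drives (one drive's worth of capacity goes to parity)
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    # put a filesystem on the array and mount it
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/storj
    sudo mount /dev/md0 /mnt/storj

The array then shows up as a single block device (/dev/md0), which also addresses the original lsblk question.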
2 Likes

Run 1 node per hard disk. Start the next one when the previous one gets ~75% full.

1 Like

As I just said in another thread, it is not clear what the “best” solution is, but what does work is either using LVM and adding disks as space fills up, or running a separate storagenode for each disk.
Separate nodes for each disk are more resilient, but adding disks to LVM is quicker (and easier); a rough sketch follows below.
If you just use your first disk through LVM, you can decide what to do once the first 1TB fills up, which TBH is months down the line…
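
For illustration, a minimal LVM sketch under assumed names (first drive /dev/sdb, a volume group called storj, ext4):

    # initial single-disk setup
    sudo pvcreate /dev/sdb
    sudo vgcreate storj /dev/sdb
    sudo lvcreate -l 100%FREE -n node storj
    sudo mkfs.ext4 /dev/storj/node
    # later, when space runs low, add the next drive (here /dev/sdc) and grow the volume
    sudo pvcreate /dev/sdc
    sudo vgextend storj /dev/sdc
    sudo lvextend -l +100%FREE /dev/storj/node
    sudo resize2fs /dev/storj/node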

ps: in disagreement with the above, either take the hit on RAID0 reliability or run a separate node per HDD. RAID5 is a choice, but the workload on a storagenode operator is low and doesn’t really benefit from giving up a disk to parity, unless you want to weigh escrow payments against vetting times ad infinitum!

1 Like

And this is not better than mergerfs. This is RAID0: with one disk failure the whole node is lost.

2 Likes

Thanks for all the responses!

As this is a first test for me, I ignored the risk of disk failure, but it’s still good to point out, thanks! I’ll be setting up different hardware when I’ve gotten the hang of it.

I was, however, unaware that I could run separate nodes for each hard drive. I’ll dive more into this and see how I can implement it.

Thanks again!

With Docker it’s a simple setup: each node gets its own external port. They can share the same external address and the same internal port.
For example, -p 28967:28967 -p 127.0.0.1:14002:14002 -e ADDRESS=external.address:28967 for the first node, then -p 28968:28967 -p 127.0.0.1:14003:14002 -e ADDRESS=external.address:28968 for the second, and so on.
Those are excerpts from the full docker run command.
Each node must have its own signed identity; do not clone identities, or the nodes will be disqualified.
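
To make the excerpts concrete, here is a rough sketch of a full command for the first node (wallet, email, paths, and storage size are placeholders, and the image tag may differ; check the official setup guide for the exact current flags):

    docker run -d --restart unless-stopped --name storagenode1 \
      -p 28967:28967 \
      -p 127.0.0.1:14002:14002 \
      -e WALLET="0x..." \
      -e EMAIL="you@example.com" \
      -e ADDRESS="external.address:28967" \
      -e STORAGE="900GB" \
      --mount type=bind,source=/mnt/disk1/identity,destination=/app/identity \
      --mount type=bind,source=/mnt/disk1/storagenode,destination=/app/config \
      storjlabs/storagenode:latest

The second node would use --name storagenode2, -p 28968:28967, -p 127.0.0.1:14003:14002, ADDRESS=external.address:28968, and its own identity and storage paths on the second disk.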

Also, keep in mind that all those nodes will share the same traffic, so together they will not receive more than a single node would.
Each new node must be vetted. It can receive only 5% of potential traffic until it is vetted. To be vetted on one satellite it must pass 100 audits for that satellite. For a single node this takes at least a month.
In a multi-node setup, that small amount of traffic is distributed between all nodes behind the same public IP (the same /24 subnet of public IPs, to be precise), so the vetting process takes roughly as many times longer as there are unvetted nodes.
So it’s better to start the next node only when the previous one is almost full. That way the vetting process will not take forever.
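
Rough numbers to illustrate, assuming audits scale with the data a node receives: one unvetted node behind your IP gets the full 5% share and needs roughly a month to pass 100 audits per satellite; five unvetted nodes started at once each see about a fifth of that traffic, so each one takes on the order of five months to vet.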

2 Likes

Can you please stop recommending LVM to obvious beginners? It’s fine if you use it and know about the risks.

It is clear what the best solution is. It is the recommendation from Storj! It is not really difficult to set up multiple storagenodes on the same server.

1 Like

I’m happy to report, my first node is up and running!

Thanks again for everyone’s help!

2 Likes