Running STORJ nodes in an unRAID array

Hallo zusammen,

unRAID generally works very well as the operating system for Docker, and therefore also for STORJ Docker containers.

There are two different ways to set this up.

Option 1:

You leave the disks, say 10 of them at 18 TB each, as individual Unassigned Devices and create 10 STORJ Docker containers for them. It is best to assign each container to its disk via the --mount option, using the HDD serial numbers.
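For illustration, one node on one Unassigned Devices disk might be started roughly like this (the /mnt/disks path, serial number, wallet, address, and container name are placeholders of mine; the flags follow the standard storagenode Docker invocation):

```shell
# Hypothetical example for node 1 on a single Unassigned Devices disk.
# /mnt/disks/WDC_WD181KFGX_SERIAL1 stands in for the mount point that
# Unassigned Devices derives from the drive's serial number.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28967:28967/tcp -p 28967:28967/udp -p 14002:14002 \
  -e WALLET="0x0000000000000000000000000000000000000000" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.address:28967" \
  -e STORAGE="17TB" \
  --mount type=bind,source=/mnt/disks/WDC_WD181KFGX_SERIAL1/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/disks/WDC_WD181KFGX_SERIAL1/data,destination=/app/config \
  --name storagenode1 storjlabs/storagenode:latest
```

The remaining nine nodes would repeat this with their own serial-number paths, host ports, and container names.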

Option 2:

You combine all 10 disks into one array, add an M.2 NVMe as cache, and create 10 STORJ Docker containers on the array. The mount would effectively point to /mnt/user/storj, and the 10 STORJ nodes would live in that folder.
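With option 2 the containers would simply bind subfolders of the array share instead of whole disks; a hypothetical layout (folder names are illustrative):

```shell
# /mnt/user/storj/node1 .. /mnt/user/storj/node10 each hold one node's
# identity and data. Container 1 would then mount, for example:
#   --mount type=bind,source=/mnt/user/storj/node1/identity,destination=/app/identity
#   --mount type=bind,source=/mnt/user/storj/node1/data,destination=/app/config
mkdir -p /mnt/user/storj/node{1..10}/{identity,data}
```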

Which option would you choose? I have always preferred option 1 and run it that way myself, since it keeps you more flexible for other installations. Option 2 could be better and faster, though, because of the cache SSD.

Thanks and regards,

Walter

I would go for option 1. That way, you don't lose a disk to parity. Of course this means that all data is lost when a drive fails, but only for that one disk. If, for whatever reason, your whole array from option 2 fails, you lose the data of all nodes, because it is spread across all drives.

Are you also running multiple nodes on unRAID? Do you have any long-term experience with it?

For some weird reason the RAM usage of some Docker containers is high, sometimes around 2 GB, but then drops back to ~500 MB. I'm pretty sure it's performance related, which is why option 2 might be better thanks to the software-RAID setup plus the cache SSD.

The rule is the same for separate disks and for RAID: one node per disk/RAID volume.

I am running multiple Nodes on an unraid server and I use unassigned devices. So I use single disks for the nodes.

I moved the databases to the NVMe cache pool which increased the performance a lot!

OK this is very interesting.

Do you also have all the drives as Unassigned Devices and one M.2 NVMe as Disk 1? Plus an additional NVMe as cache?

Which databases do you mean exactly?

Thanks and kind regards,

Edit:

Here is my config:

Removed the serial numbers. Would you add an additional M.2 NVMe to the cache here? And how would you improve performance? Does moving the identity to Disk 1 increase performance? Currently the identities are stored on each HDD.

I am using the server for other things as well.
For all my nodes I use dedicated drives that I mount as unassigned devices.

I created a share on the array with the cache setting "Prefer cache". Into this share I moved all the SQLite database files of my nodes, like here:

This reduces the I/O pressure on the Unassigned Devices disks a lot.
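As a rough sketch of how such a relocation can be wired up (the share and disk paths are placeholders of mine; the `storage2.database-dir` option exists in the storagenode config, but check the current Storj documentation for the exact path convention):

```shell
# Hypothetical example: move node 1's SQLite files to a cache-preferred share.
docker stop -t 300 storagenode1
docker rm storagenode1
mkdir -p /mnt/user/storj-dbs/node1
mv /mnt/disks/DISK1/data/storage/*.db /mnt/user/storj-dbs/node1/
# Re-create the container with an extra bind mount for the databases ...
#   --mount type=bind,source=/mnt/user/storj-dbs/node1,destination=/app/dbs
# ... and point the node at it in config.yaml:
#   storage2.database-dir: /app/dbs
```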

As cache pool I am using the normal cache pool of 2 WD NVMe disks that I use for the array.

Here is my setup:

And you pooled the cache for safety reasons? What happens if the DB files are damaged or lost? Will the node then be disqualified, or can it run without the DB files or with new ones?

Yes, as the cache is very important for my other tasks as well, I don't trust a single SSD. I sleep better when I know that all the important stuff is pooled :slight_smile:

The node can survive losing the databases, but you will also lose all your statistics.

But how would the recovery work if all the databases are lost? Is there a guide for that already?

Edit: Second question: Can unRAID/Linux handle PCIe 4.0 x4 NVMe drives? Does adding such devices improve performance? And which filesystem should be used on NVMe drives, XFS or something else?

If all databases are lost, you can use this guide to re-create all of them: https://support.storj.io/hc/en-us/articles/4403032417044-How-to-fix-database-file-is-not-a-database-error
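Very roughly, that guide boils down to dumping what SQLite can still read and rebuilding the file from the dump. A compressed sketch for one database (paths and filenames are examples; the linked article is the authoritative reference):

```shell
docker stop -t 300 storagenode1
cd /mnt/disks/DISK1/data/storage
sqlite3 bandwidth.db "PRAGMA integrity_check;"   # find out which files are broken
sqlite3 bandwidth.db ".dump" > dump_all.sql      # dump everything still readable
# strip the unfinished transaction wrapper before rebuilding
grep -v -e TRANSACTION -e ROLLBACK -e COMMIT dump_all.sql > dump_clean.sql
mv bandwidth.db bandwidth.db.bak                 # keep the damaged file as backup
sqlite3 bandwidth.db < dump_clean.sql            # rebuild the database
docker start storagenode1
```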