RAID: possible HDD uptime reduction?

Hi, so another couple of quick questions. I have a RAID controller with 4GB of DDR4 cache and battery backup. Is it safe to disable sync so ZFS will use async writes and bypass the ZIL? (RAID 5 on the controller, exposed to ZFS as a single disk.)

Or should I make a RAID 1 of SSDs on the controller for the ZIL? Fewer problems on power loss?

How will the ZFS volume react if I choose to use the RAID controller and later expand the RAID?

Configure your RAID controller to IT mode, so that it exposes individual disks. Don't use any kind of hardware RAID; let ZFS manage the disks directly. This is extremely important.

I'm not sure why you want to use that controller in the first place: that motherboard has two miniSAS ports, so you can connect at least 8 disks there directly, plus a bunch of SATA ports for SSDs if you decide to use a special vdev. I highly recommend using a mirror of enterprise SSDs with PLP for that; again, eBay is your friend. Configure the pool to send small files up to 16K to the special device, assuming the default record size.
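The small-file cutoff described above maps to one dataset property; a minimal sketch, assuming a hypothetical pool named `tank` and the default 128K recordsize:

```shell
# Send blocks of 16K and smaller to the special vdev.
# special_small_blocks must stay below the recordsize (default 128K),
# otherwise everything would land on the special device.
zfs set special_small_blocks=16K tank

# Verify the setting
zfs get special_small_blocks tank
```

The property is inherited by child datasets, so setting it at the pool root covers everything unless overridden.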

If you want single-disk redundancy, use raidz1 vdevs. How many vdevs to use and how many disks per vdev depends on your other uses and performance requirements.

There is no need for redundancy on a SLOG, and no need for a SLOG for Storj in the first place. If you want a SLOG for other uses, use a 16GB Optane; they are cheap on eBay, under $10. Get a UPS regardless.
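If you do add a SLOG later, it is a single command; a sketch, assuming a hypothetical pool `tank` and an Optane device at `/dev/nvme0n1`:

```shell
# Attach a single, non-redundant log vdev. Losing the SLOG only costs
# the last few seconds of in-flight sync writes, not the pool.
zpool add tank log /dev/nvme0n1

# A log vdev can also be removed again if it turns out to be unneeded
zpool remove tank /dev/nvme0n1
```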

Yes, you can disable sync for the storagenode data, including the databases. Find my post about optimizing ZFS for Storj for further details, but generally leave everything else at defaults.
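Disabling sync is one property on the dataset that holds the node's data; a sketch, with a hypothetical `tank/storagenode` dataset:

```shell
# Treat all writes on this dataset as async, bypassing the ZIL.
# On power loss you lose the last few seconds of writes, nothing more;
# the pool itself stays consistent.
zfs set sync=disabled tank/storagenode

# Everything else stays at the default
zfs get sync tank/storagenode
```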

Because I have an expensive raid controller that I have just laying around :smiley:

It's gonna be, to start:
2 x 1TB SSD, mirror, special vdev
4 x 8TB SATA, raidz1
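That layout can be created in one command; a sketch with hypothetical device names (and assuming PLP SSDs for the special mirror, per the advice above):

```shell
# 4 x 8TB spinners in raidz1, plus a mirrored special vdev
# (sde/sdf) holding metadata and small blocks for the whole pool
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  special mirror /dev/sde /dev/sdf
```

In practice, use `/dev/disk/by-id/` paths rather than `sdX` names so the pool survives device reordering across reboots.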

I suspected this would be the case :). But don't try to find a problem for a solution :). Sell the controller or recycle it; it's useless. Modern filesystems want direct access to disks. The era of hardware RAID ended a long time ago; software RAID has won.

Perfect. Make sure your SSDs for the special device support PLP. Remember, that vdev will hold metadata for the entire pool; if it's damaged, the entire pool is dead, even if you have a UPS.
Don't use consumer SSDs like the Samsung Evo there. You want write-optimized ones, for example the Intel DC S3700 series or similar.

I think hardware RAID isn't dead; I'm still wondering about mine (Adaptec 3154-8I16E).
I think this RAID system would take a lot of compute off the server, but it's more for 100+ disks?
The SSDs are gonna be 2x NVMe Samsung 870 or normal SSDs. Not gonna spend $500+ when I have these laying around :slight_smile: When they break I'll try to replace them.

I got a pair for way under $100 on eBay. Example: 400GB Intel DC S3710 Series SSDSC2BA400G4 SATA 6Gb/s 2.5" SSD | eBay

These are not suitable for a 100% random IO workload:

  • If you lose power during a write, you will lose data on the pool.
  • They contain a small SLC cache in front of TLC bulk storage; it's a tiered caching solution for desktop use.
  • You will wear them out ridiculously quickly, and if they both die, your pool is dead.
  • They lie about their sector size, so you would need to override ashift.
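The ashift override mentioned above has to happen at vdev creation time, since it cannot be changed afterwards; a sketch with hypothetical devices:

```shell
# ashift=12 forces 4K-aligned writes, assuming the drives have real 4K
# physical sectors behind a reported 512B logical sector size
zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Confirm what the pool actually ended up with
zpool get ashift tank
```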

Since you don't have SSDs suitable for a special device and don't plan on buying correct ones, don't use a special device. You can add it to the pool later, once you get the correct SSDs. Or never, if performance is sufficient.

Good, thank you for all the advice!


Do not use that RAID card.
ZFS does not like hardware RAID, and most passthrough modes are poor. Use the mini-SAS ports, as has already been mentioned.



Here are the details: https://www.supermicro.com/products/nfo/files/LSI/LSI_PB_Nytro_MR.pdf

I agree with the other response, as in "probably not", but if you have a spare SSD, putting the database files on it will take a little wear off the RAID.
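For moving the databases, the storagenode has a separate database directory setting (`storage2.database-dir`); a sketch with hypothetical paths, so verify against the current Storj docs before relying on it:

```shell
# Point the node's SQLite databases at a hypothetical SSD mount
echo 'storage2.database-dir: "/mnt/ssd/storj-dbs"' >> /path/to/config.yaml

# Stop the node, move the existing databases over, then restart
mv /path/to/storage/*.db /mnt/ssd/storj-dbs/
```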