64 KiB stripe size may be a bit small… but most likely not worth the effort to change it…
nor can i tell you that you will for sure benefit from that… you will have to run some benchmarks on it, and it also depends on where your database is located, how much RAM your system has and whether you are using any large caching solutions.
sorry that it isn’t a simple answer… 64 KiB stripe size was the recommended default for a long time if people didn’t know what to set it to… to really optimize it you have to account for the number of disks and the sector size, so that a full stripe write lands on every drive in the array.
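a rough sketch of that full-stripe math, just to illustrate… the function name and example numbers are mine, not from any raid tool:

```python
# hypothetical sketch: how much data one full stripe holds on a raid6 array.
# raid6 spends 2 disks' worth of each stripe on parity (P + Q),
# so only (total disks - 2) chunks per stripe carry data.

def full_stripe_bytes(total_disks: int, chunk_kib: int, parity_disks: int = 2) -> int:
    """Bytes of data written in one full stripe: chunk size times the data disks."""
    data_disks = total_disks - parity_disks
    return data_disks * chunk_kib * 1024

# e.g. 6 disks with a 64 KiB chunk -> 4 data disks * 64 KiB = 256 KiB per stripe
print(full_stripe_bytes(6, 64) // 1024, "KiB")
```

the idea being that writes sized (or aligned) to that full-stripe figure hit every drive once per stripe, instead of forcing partial-stripe read-modify-write cycles.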
20% iowait shouldn’t make your storagenodes fail audits… tho if you are running 3 nodes on the same raid6 array i’m pretty sure i can tell you where your problem is…
a raid6 array has roughly the random write IOPS of a single hdd, because the drives run in harmony / sync… on top of that, if your stripe size isn’t matched to your array, you will also get writes that wrap around the array and start over again… without the last part of the stripe being written the full way across the array… which means you will end up with drives running out of sync, which will cause further latency.
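a back-of-envelope sketch of why raid6 random writes end up near a single drive… the 6-op penalty is the textbook small-write cost for raid6 (read data + P + Q, write data + P + Q), and the drive numbers are illustrative assumptions, not measurements:

```python
# rough estimate of raid6 random write IOPS under the small-write penalty.
# each sub-stripe write costs ~3 reads + 3 writes (data, P parity, Q parity),
# i.e. 6 disk operations spread across the array.

def raid6_random_write_iops(disks: int, iops_per_disk: float) -> float:
    """Aggregate small random write IOPS of a raid6 array (rough model)."""
    penalty = 6  # 3 reads + 3 writes per logical write
    return disks * iops_per_disk / penalty

# 6 x 7200rpm drives at ~75 IOPS each lands right back at one drive's worth
print(round(raid6_random_write_iops(6, 75)))  # -> 75
```

full-stripe writes avoid the read half of that penalty, which is why getting the stripe size right matters so much for write-heavy loads.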
on top of that, all the drives should be identical to further reduce issues with the harmony / sync of the drives, else you might see excessive wear, performance decreases or both.
it’s very difficult to fail audits, ofc it might be a cpu issue… but that would mean you are doing software raid6, because with a conventional raid card you shouldn’t see massive cpu utilization just because the array is doing work, the raid controller should handle that… and because its hardware is designed for the task, it can do the job much more efficiently than a cpu.
raid6 parity calculations are fairly demanding.
you shouldn’t run multiple nodes on the same raid array, and you really should use a raid card if you want to run raid6.