Precisely why I didn’t want to bother with it. Sure it would help, but long term, the more data hosted, the faster those drives get chewed through. I figured if the pools run fine without the extra expense, why bother? Plus, this way there’s more room for spinners that isn’t taken up by cache drives. I figure the redundancy isn’t worth it compared to the occasional lost drive. I still scrub them, though, so if I start seeing errors I’ll attempt a drive replacement before it actually fails.
Never tried it that way. I was worried that, on a large pool already struggling with the IO limitation, it would still take a long time, and I didn’t want the nodes down that long. It all worked out using rsync though, it just took for ******* ever.
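If anyone is about to do a similar migration, here’s a rough sketch of how the per-share rsync passes could be scripted so a failed share can just be rerun. The paths, share names, and flags are only assumptions for illustration, not what I actually ran:

```python
#!/usr/bin/env python3
"""Sketch: run rsync per top-level share between two pool mounts.

Paths, share names, and flags are hypothetical examples.
"""
import subprocess
import sys

SRC_ROOT = "/mnt/oldpool"   # hypothetical source pool mount
DST_ROOT = "/mnt/newpool"   # hypothetical destination pool mount
SHARES = ["media", "backups", "isos"]  # hypothetical top-level shares

for share in SHARES:
    cmd = [
        "rsync",
        "-aHAX",              # archive mode, plus hardlinks, ACLs, xattrs
        "--partial",          # keep partial files so a rerun can resume
        "--info=progress2",   # overall progress instead of per-file spam
        f"{SRC_ROOT}/{share}/",
        f"{DST_ROOT}/{share}/",
    ]
    print("Running:", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # rsync exits nonzero on errors; stop here and rerun just this share
        sys.exit(f"rsync failed for {share} (exit {result.returncode})")
```

Running it share by share like this at least means a hiccup only costs you one share’s progress instead of the whole copy.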