ZFS is different, and ZFS on Linux doesn't have all the same features; even where one can remove a drive, it's a patch at best.
ZFS doesn't have a structure like a normal RAID when writing to the drives. When running a raidz1, raidz2 or raidz3 pool, one can add additional vdevs (raidz1/2/3 or mirror). It usually makes the most sense to add vdevs similar to the existing ones, though that isn't required; however, not even mirrors can be removed from the pool again.
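Roughly what that looks like in commands (made-up pool and disk names, just to illustrate, not my actual setup):

```
# a pool built from one raidz1 vdev
zpool create tank raidz1 sda sdb sdc

# growing it by adding another raidz1 vdev or a mirror is a one-liner...
zpool add tank raidz1 sdd sde sdf
zpool add tank mirror sdg sdh

# ...but once the pool contains raidz vdevs there is no way to take
# a vdev out again; "zpool remove" will refuse to do it
```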
It's something to do with how ZFS handles parity information; if memory serves, it's because ZFS has a variable recordsize (blocksize), so you cannot easily predict the boundaries of the different blocks, which makes separating the data out again the way it was written very difficult.
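For what it's worth, recordsize is just an upper limit set per dataset, and the actual blocks on disk vary in size up to that limit; something like this (dataset name is only an example):

```
zfs get recordsize tank/storagenode       # default is 128K
zfs set recordsize=128K tank/storagenode  # an upper limit, small files still get smaller blocks
```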
So I had 5x 6TB drives and 4x 3TB drives all used in the same pool, by making one vdev of 3x 6TB HDDs in raidz1 and then 2 additional vdevs of 3x 3TB (some of them actually 6TB drives) in raidz1,
which gave me 12TB from the first vdev and another 2x 6TB from the last two,
on a 24TB pool on which one can store something like 21-22TB, so more like 20TB once the 10% Storj buffer is added.
And because 2 of the drives were 6TB but used as 3TB, that was another 6TB lost; the storagenode was 14TB in size. Because I wanted to expand from a 3-HDD raidz1 to a 4-HDD raidz1 to lose less capacity to redundancy, I was locked into buying the same model of 6TB HDD I already had 5 of, so they would work in harmony with all the others. In theory this is only really relevant per vdev... and I could have gone with a 5-HDD raidz1, but that wasn't an option for many reasons.
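For reference, the old layout was roughly created like this (device names are just placeholders; the two 6TB drives sitting in the 3TB vdevs only contribute 3TB each):

```
# vdev 1: 3x 6TB in raidz1        -> ~12TB usable
# vdev 2 and 3: 3x 3TB in raidz1  -> ~6TB usable each (two of the "3TB" slots were really 6TB drives)
zpool create tank \
  raidz1 6tb-1 6tb-2 6tb-3 \
  raidz1 3tb-1 3tb-2 6tb-4 \
  raidz1 3tb-3 3tb-4 6tb-5
```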
Adding 3x 6TB HDDs gave me an additional 18TB raw, of which I could use something like 16TB, and my node was 14TB; and I would need to destroy my old pool, which I would also need to empty first.
And I didn't want 1 big drive, for IOPS and redundancy reasons.
My storagenode was also getting to a size where it was becoming nearly impossible for me to move it out of its current pool without making the upgrade even larger.
This is partly why I'm considering maybe only using mirror pools in the future, which don't suffer from the same issues and can be taken apart and added to at will, and also re-balanced across the drives.
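With an all-mirror pool you get a lot more freedom; something like this works (again made-up names, and only as I understand the current tools, since vdev removal needs a reasonably new ZoL):

```
# turn a single disk into a mirror, or grow an existing mirror, by attaching
zpool attach mpool sda sdb

# shrink a mirror again by detaching one side
zpool detach mpool sdb

# and on a pool made only of mirrors, a whole top-level vdev can be
# evacuated and removed, which shrinks the pool
zpool remove mpool mirror-1
```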
Yes, you can keep adding to a ZFS pool, but you cannot remove capacity from the pool in the case of raidz, so I was getting very close to being stuck with the current pool until I bought something like 3x 12TB.
But in that case I would still have 6TB wasted in the pool, because I didn't have enough working 3TB drives to replace my 2x 6TB HDDs.
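And this is the wall you hit on the raidz side (hypothetical pool; raidz1-1 is the kind of name zpool status gives the second raidz vdev):

```
# device removal only works when every top-level vdev is a plain disk or mirror,
# so on a pool containing raidz vdevs this simply gets refused
zpool remove tank raidz1-1
```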
In ZFS on BSD you can remove drives from a vdev, but that will inherently affect the structure of the data on the pool, and if one can avoid using it, one should... though I don't know how relevant this is in practical terms; in theory it seems like a very bad idea, due to how ZFS works.
Even though I migrated the node to a temporary span of 3x 6TB drives, I still wasn't able to move it directly into a 2x 4-HDD raidz1 pool, but had to move it to a 1x 4-HDD raidz1 pool and then add the second vdev of 4 HDDs in raidz1 to the pool.
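The migration itself was basically the usual send/receive dance; schematically (pool, disk and snapshot names made up):

```
# the new pool starts life with a single 4-disk raidz1 vdev
zpool create newtank raidz1 sdi sdj sdk sdl

# replicate the node dataset from the temporary span
zfs snapshot temp/storagenode@migrate
zfs send temp/storagenode@migrate | zfs recv newtank/storagenode

# only after the data is on the new pool, add the second raidz1 vdev
zpool add newtank raidz1 sdm sdn sdo sdp
```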
Which means my whole storagenode is on 1 vdev, and even now, barely a week later, only 80GB is on the added vdev, which hurts my read IOPS in most cases until it's more balanced out...
Again, I'm not even sure I can balance the data between the vdevs, because it's too complex due to the whole variable recordsize thing. ZFS isn't magic; it has its own problems...
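The only rebalance I know of is the brute-force one: rewrite the data so new allocations get spread over both vdevs, e.g. by replicating the dataset into a fresh one on the same pool and swapping names. That needs enough free space and some downtime, so take this as a rough sketch rather than a recommendation:

```
zfs snapshot newtank/storagenode@rebalance
zfs send newtank/storagenode@rebalance | zfs recv newtank/storagenode-new

# stop the node, then swap the datasets and clean up
zfs rename newtank/storagenode newtank/storagenode-old
zfs rename newtank/storagenode-new newtank/storagenode
zfs destroy -r newtank/storagenode-old
```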
In most cases it won't lose your data without screaming at you first... but it's not designed to be run at a medium scale... you either run small setups or go large... anything in between is just annoying to work with when it comes to maintenance and long-term usage.
TL;DR
ZFS / ZoL cannot, in its current version to my knowledge, remove HDDs from a raidz setup, due to the issues caused by having variable blocksizes / recordsizes. Of course you can expand it into using 100s of HDDs.
Had I waited much longer, it would have been difficult to dismantle my old pool and migrate to a new one, for the reasons I mentioned above...