SSD vs HDD: which is better?

Not right now. The cheapest SSDs have really awful endurance compared to the cheapest hard drives, while still being more expensive. Newer SSDs are cheaper and less reliable: MLC SSDs were nice, but I wouldn’t really want to use a QLC SSD unless it is enterprise-grade, and then it would be more expensive than a desktop-grade MLC SSD was a few years ago.
SSDs are faster, though. But if I wanted to save as much money as possible, right now I would buy cheap HDDs: they are cheaper per TB and last longer under heavy writes.
I do not know how it will be 10 or more years from now.

all true, there is a write-cycle life on SSDs, but really they shouldn’t fail catastrophically. if, let’s say, i had a raidz of SSDs and the pool started to go bad… then it would just go into wait mode and stop all operations until one has cleared it to proceed.

also if we imagine an SSD losing blocks, in 99% of cases it can isolate and ignore them, only reducing the capacity, which makes for a much gentler wear curve.
without catastrophic failures the whole raid-system ideology might be much less of a requirement in the future… maybe raid 5 is enough for SSDs… bitrot might be much more of the issue… but again, if the drive isolates bad blocks and uses CRC, then it’s most likely not much worse than HDDs are anyway…

sure SSDs are more expensive now, but that wasn’t my point… it was that printing stuff vs building advanced mechanics: printing will most likely win on cost of production, even if production cannot keep up at the moment, and thus SSDs are “rare” and costly.

SSD tech is also still in its rise-to-power phase; i have no doubt it will take over basically everything, the only question is when, if it hasn’t already…
and can my server support 200TB 3.5" SSDs x 12 bays?
which is, when i think about it, actually quite small. 200TB i mean… if one can have 60TB on a 2.5" then certainly one can have 240TB or even more on a 3.5" lol
that’s just ridiculous. sure, today it would cost… let’s see… carry the 5 and a liberal addition of zeros

36000$ per drive… ouch, plus whatever premiums. also a very rough estimate from just multiplying the per-TB price… i think a regular HDD is like 21$ per TB, at least here, and then 5x that for SSD, so it adds up lol
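For what it’s worth, the back-of-the-envelope math above is easy to check; a minimal sketch taking the quoted ~21$/TB HDD price and the 5x SSD multiplier at face value (both are the rough local figures from the post, not firm prices), which comes out a bit lower than the ballpark above:

```python
# Rough cost estimate for the hypothetical 12-bay, 200TB-per-drive SSD server.
# The $21/TB HDD price and 5x SSD premium are the poster's rough local figures.
HDD_PRICE_PER_TB = 21        # USD per TB, rough local HDD price
SSD_MULTIPLIER = 5           # SSDs assumed ~5x the HDD price per TB
DRIVE_TB = 200               # hypothetical 3.5" SSD capacity
BAYS = 12                    # drive bays in the server

ssd_price_per_tb = HDD_PRICE_PER_TB * SSD_MULTIPLIER   # $105/TB
per_drive = DRIVE_TB * ssd_price_per_tb                # cost of one drive
full_server = per_drive * BAYS                         # cost of all 12 bays

print(f"${per_drive:,} per drive, ${full_server:,} for all {BAYS} bays")
# → $21,000 per drive, $252,000 for all 12 bays
```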

but yeah, other technology may come and change the game… Intel’s Optane, i hear, is pretty enduring compared to MLC and QLC and whatnot

Some SSDs fail completely when they reach their endurance limit. They get bad blocks, etc.
Here’s the fun part: all SSDs in your pool see the same amount of writes, so it is very likely that all of them will reach the limit shortly one after another.
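The uniform-wear point can be illustrated with a toy estimate; a sketch assuming a hypothetical TBW rating and a shared write load (all numbers are invented for illustration, not real drive specs):

```python
# Toy illustration: drives in a mirrored/striped pool see roughly the same
# writes, so their rated endurance (TBW) is exhausted at roughly the same time.
# Both numbers below are invented for illustration.
TBW_RATING = 600          # terabytes written before the rated limit (hypothetical)
WRITES_PER_DAY_TB = 0.5   # write load hitting every member drive, TB/day

days_to_limit = TBW_RATING / WRITES_PER_DAY_TB
print(f"every drive hits its TBW rating after ~{days_to_limit:.0f} days "
      f"(~{days_to_limit / 365:.1f} years), all at roughly the same time")
# → every drive hits its TBW rating after ~1200 days (~3.3 years), all at roughly the same time
```

Mixing drives of different ages (or rotating in a fresh drive early) staggers that failure window, which is the usual workaround.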

And that’s the difference. If I want to store a lot of photos etc., building an array (even RAID10) of HDDs would be much better. Oh, and for redundancy you need yet another drive to run in RAID1.

But I think we are off-topic with this. The suggestion is about software support for clusters, not SSD vs HDD.

true… kinda got lost in the speculation… xD
to get back to the point… i would say clustering of storagenodes might require some sort of software,
but really 95% of it comes down to regular VM clustering running on separate storage solutions with some kind of raid.

IMO, the only issue so far, as i see it, is, like you mentioned, that a kernel panic in a VM would essentially shut it down; it would be practical if a storagenode could figure out how to utilize multiple identities in multiple locations, depending on whether it detects that other local storagenodes have crashed…

tho CentOS without too many bells and whistles is rock solid… and i suppose in case of a kernel panic one could set the IPMI to simply reboot the system using the watchdog features… i believe…
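For the panic-reboot part, Linux already has stock knobs, no IPMI scripting needed; a minimal sketch (the file path is just an example location for a sysctl drop-in):

```ini
# /etc/sysctl.d/99-panic-reboot.conf -- example path for a sysctl drop-in
# Reboot automatically 10 seconds after a kernel panic,
# and escalate a kernel oops to a full panic so it also triggers the reboot.
kernel.panic = 10
kernel.panic_on_oops = 1
```

Apply with `sysctl --system` or a reboot. For the hardware route, systemd can also pet the board’s watchdog via `RuntimeWatchdogSec=` in `/etc/systemd/system.conf`, which reboots the machine even if the kernel itself hangs rather than panics.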

so yeah… i think all the stuff for it exists without software changes… or minimal changes, in any case, to take care of highly special crash scenarios maybe…