No it’s not. You’re jumping to the worst case, so I’m weighing that worst case by the chance of it actually happening, based on the data we have. That’s completely fair.
I’m granting you a lot of stuff in my calculations. Annual failure rates are actually lower than the 2% I mentioned; the latest numbers show 1.89%, which works out to roughly 0.15% per month, yet I gave you 0.2% monthly. I clearly stated that my calculation was for the cost in the worst month. Obviously there would be a recurring part to that cost, but you would have to remove the held amount from that calculation, and it would be much more honest to go with average monthly income without surge payouts, which my guess is would be closer to $20. So sure, $20 * 0.2% = 4 cents for every month after that. I’m sorry I didn’t mention those cents specifically. Let’s also ignore the fact that you’ll have a new node up and running by then, so those 4 cents are only really valid for one month.
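To make the arithmetic above concrete, here’s a minimal sketch of the expected-cost calculation. The numbers are the ones from this thread (1.89% annual failure rate, $20 assumed average monthly income without surge payouts); the $20 figure is my guess, not a measured value.

```python
# Expected-cost sketch using the numbers argued above (not authoritative).
annual_failure_rate = 0.0189            # latest reported annual HDD failure rate
monthly_failure_rate = annual_failure_rate / 12

# 0.2% per month is already rounding up in your favor.
assert monthly_failure_rate < 0.002

avg_monthly_income = 20.00              # assumed: average income without surge payouts
generous_monthly_rate = 0.002           # the 0.2% I granted you

expected_monthly_cost = avg_monthly_income * generous_monthly_rate
print(f"expected cost per month: ${expected_monthly_cost:.2f}")  # → $0.04
```

In other words: a 4-cent expected hit per month, not the full worst-case loss every month.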
Please note that I’m also still going along with having just one node, which you wouldn’t have in this scenario, since you would be running the multiple HDDs you would have used in RAID as separate nodes. While that doesn’t change the cost calculation, because you’re still dealing with the same averages, it does spread the risk: if a node fails, it’s one out of three in the case of RAID5.
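The risk-spreading point can be sketched the same way. Under the assumption that income splits evenly across three independent nodes (hypothetical numbers, same 0.2% monthly rate per drive), the expected loss is identical, but a single failure only takes out a third of the income instead of all of it:

```python
# Risk-spreading sketch: one big node vs. three drives run as three nodes.
monthly_failure_rate = 0.002            # granted per-drive monthly failure chance
income_one_node = 20.0                  # hypothetical: all income on one node
income_per_small_node = 20.0 / 3        # hypothetical: income split across 3 nodes

# One node: a failure costs the whole month's income.
expected_loss_one = monthly_failure_rate * income_one_node

# Three nodes: three independent chances to fail, but each failure
# only costs a third of the income. Expected loss is the same.
expected_loss_three = 3 * monthly_failure_rate * income_per_small_node

print(round(expected_loss_one, 4), round(expected_loss_three, 4))
```

Same average cost either way; what changes is that no single failure wipes out the whole income stream.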
If you keep skipping over the chance of a bad event happening in your calculations, of course it sounds bad. But by skipping that little detail you’re blowing up the impact 50+ times on a yearly basis (treating a ~2% annual failure as certain multiplies its weight by 1/0.02 = 50). You also keep comparing a node on RAID against a single node on a single HDD, which is not what anyone is advising you to do.