I see it differently. The several rsync passes in the script take up none of my time or attention. The script emails me when it's done, and then I have a few days to come back and press Enter; at that point it stops the node and runs the last rsync pass. Then I swap the disk.
With downtime and cloning you pretty much have to babysit the whole process. It completes faster but takes more of your time: connecting disks (maybe the source server doesn't have a spare port), running the cloning software, reconfiguring disks afterwards. It's also time sensitive: you have to be around by the time cloning finishes, so it interferes with your other plans by creating these two connected points in time, and it results in downtime. It's worse in every respect.
It's fine even if cloning would finish in four hours while rsync takes five weeks, because you aren't involved in those five weeks at all.
Heh. In ~/MyPetProjects — maybe. Not in production.
At some point, writing ultra-reliable, ultra-boring code becomes more exciting than jumping on every shiny new thing, because stability for years without regressions is way more exciting than any possible fancy new feature. New features are an (often un-)necessary evil: they destabilize everything by necessity.
You can look at the duplicacy and kopia backup programs as an illustration. Duplicacy barely evolves; it's simple, and solid like a glacier. Kopia exists solely to tickle its developer's ego; it belongs in the Wikipedia article on feature creep. And it's unstable, corrupts its datastore, and has been in permanent pre-alpha for years. Cool features, but nobody wants to use them.