My approach to migrating nodes (or anything else that is fragile, complex, or labor-intensive) is to never do anything manually. Instead, I write a script that does the work for me and test it on something inconsequential, in this case a small temporary node.
For example, this is how I migrated my node to a new dataset.
I had to rebalance data on the array when adding another vdev; the process is equivalent to migrating a node to another storage pool. This is how my script does it. I have already used it four times on different machines, and once across 2000 miles, sending a small node to an offsite location with a small modification: piping the stream through an SSH link (a sketch of that variant follows the script).
#!/usr/local/bin/zsh
# Abort on error
set -e
# Dataset the node currently lives on, the iocage jail running it,
# and the temporary dataset the data is copied into
dataset=pool1/storagenode-one
jail=storagenode-one
tmp_target=pool1/target
echo "Copying first snapshot"
zfs snapshot -r ${dataset}@cloning1
zfs send -Rv ${dataset}@cloning1 | zfs receive ${tmp_target}
echo "Copying second snapshot"
zfs snapshot -r ${dataset}@cloning2
zfs send -Rvi ${dataset}@cloning1 ${dataset}@cloning2 | zfs receive ${tmp_target}
echo "Copying third snapshot"
zfs snapshot -r ${dataset}@cloning3
zfs send -Rvi ${dataset}@cloning2 ${dataset}@cloning3 | zfs receive ${tmp_target}
echo "Stopping node"
iocage stop $jail
echo "Copying fourth snapshot"
zfs snapshot -r ${dataset}@cloning4
zfs send -Rvi ${dataset}@cloning3 ${dataset}@cloning4 | zfs receive ${tmp_target}
echo "Renaming datasets"
zfs rename ${dataset} ${dataset}-old
zfs rename ${tmp_target} ${dataset}
echo "Starting node"
iocage start $jail
echo "Press enter to destroy old dataset"
read
zfs destroy -r "${dataset}-old"
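
For the offsite move, the only change was sending the stream over SSH instead of piping it into a local zfs receive. A minimal sketch of the send side, assuming the destination host is reachable as remotehost and receives into pool1/target there (both names are placeholders, not from my actual setup):

# First pass: full replication stream over the SSH link
zfs snapshot -r ${dataset}@cloning1
zfs send -Rv ${dataset}@cloning1 | ssh remotehost zfs receive pool1/target
# Later passes: incremental streams, same pattern as the local version
zfs snapshot -r ${dataset}@cloning2
zfs send -Rvi ${dataset}@cloning1 ${dataset}@cloning2 | ssh remotehost zfs receive pool1/target

Even over a slow link, each incremental pass shrinks the remaining delta, so the node only has to be stopped for the last, small send.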