Like you said, the new node would be vetted first. If the preexisting issues weren't fixed, the node would never get through that vetting phase, and the risk of data loss is limited because a new node holds far less data.
The reason the satellite can never accept the old node back is that it audits only a small fraction of the data on each node: enough to determine whether the node lost data, but nowhere near enough to know which data was lost. So the satellite has no idea which pieces to repair. The only safe way to proceed is to assume everything is lost and repair every affected segment once it drops below the repair threshold.
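To make the numbers concrete, here is a small simulation (the piece counts, loss rate, and audit count are hypothetical, not the satellite's actual parameters) showing why random audits reliably detect *that* data was lost while pinpointing almost none of *which* pieces were lost:

```python
import random

TOTAL_PIECES = 100_000   # pieces stored on the node (hypothetical)
LOST_FRACTION = 0.05     # suppose the node silently lost 5% of them
AUDITS = 200             # random spot-check audits by the satellite

pieces = list(range(TOTAL_PIECES))
lost = set(random.sample(pieces, int(TOTAL_PIECES * LOST_FRACTION)))

audited = random.sample(pieces, AUDITS)
failed = [p for p in audited if p in lost]

# Detection is near certain: P(every audit passes) = 0.95^200 ≈ 0.0035%.
print(f"failed audits: {len(failed)} of {AUDITS}")

# But the audits can only name the ~10 lost pieces they happened to hit,
# out of ~5,000 actually lost. The satellite gets no usable repair list.
print(f"lost pieces identified: {len(failed)} of {len(lost)}")
```

With numbers like these, the satellite knows almost for certain that the node is unreliable, yet it has identified well under 1% of the missing pieces, which is exactly why it must assume everything is gone.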
Additionally, there have to be real consequences for losing data. If there aren't, node operators become careless.
That said, you can vote for this idea.
There have been several suggestions in that conversation for how to prevent the node from being disqualified in the first place if a mount point isn't available. Hopefully they will take a feature like that into consideration. I even posted an example there of a script you could use to automatically kill your node on any audit failure; a rough sketch of the idea is below. That one is very aggressive and probably shouldn't be used as-is, but perhaps it can be adapted into something more usable.
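This is a minimal sketch of that watchdog idea, not the exact script from the thread. It assumes the node runs in a Docker container named `storagenode` and that a failed audit shows up in the log as a line containing both `GET_AUDIT` and `failed`; check those assumptions against your own log format before relying on anything like this.

```python
import subprocess

CONTAINER = "storagenode"          # assumed container name
MARKERS = ("GET_AUDIT", "failed")  # assumed log markers for a failed audit

# Follow the node's log output line by line.
log = subprocess.Popen(
    ["docker", "logs", "-f", CONTAINER],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)

for line in log.stdout:
    # Stop the node the moment any audit failure appears. Very aggressive:
    # a single transient failure takes the whole node offline.
    if all(marker in line for marker in MARKERS):
        subprocess.run(["docker", "stop", CONTAINER], check=False)
        break
```

A gentler variant could instead verify that the mount point is still readable before stopping anything, which is much closer to the "don't disqualify the node when the disk disappears" suggestions from that thread.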