An ultimate backup! Pieces on all nodes!

It could be zero. It depends on how many healthy pieces the segment has: the repair job is triggered when the number of healthy pieces drops below the configured threshold (56 at the moment). Your node holds only one of the 80 pieces for the segment.
Otherwise it will cost as described in the post above.
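The trigger described above can be sketched in a few lines. This is only an illustration with the numbers from this thread (threshold 56, 80 pieces per segment), not the satellite's actual code; the function name is hypothetical.

```python
# Illustrative sketch: repair is queued only when healthy pieces
# drop below the configured threshold (numbers taken from this thread).
REPAIR_THRESHOLD = 56  # minimum healthy pieces before repair triggers
TOTAL_PIECES = 80      # pieces created per segment at upload

def needs_repair(healthy_pieces: int) -> bool:
    """A segment enters the repair queue when healthy pieces fall below the threshold."""
    return healthy_pieces < REPAIR_THRESHOLD

print(needs_repair(TOTAL_PIECES))  # freshly uploaded segment: False
print(needs_repair(55))            # below the threshold: True
```

So a node holding one piece of a segment with 80 healthy pieces contributes nothing to repair traffic until that segment loses another 24 pieces.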

Maybe. But this should be modeled carefully. The current model is 80/29. Lowering redundancy without proper investigation is too risky, especially if you account for nodes trying to bypass the /24 limit.
Right now 60 is too close to the minimum health threshold of 56 pieces, so we would likely be forced to repair even new data much earlier than with 80 pieces, where repair may never happen at all (the customer may delete their data long before the segment reaches the repair threshold).
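The margin argument above is just arithmetic, but it is worth seeing side by side (numbers assumed from this thread):

```python
# How many pieces a segment can lose before repair is triggered,
# with the current 80 pieces per segment versus a proposed 60.
MIN_HEALTHY = 56  # repair threshold from this thread

for total in (80, 60):
    margin = total - MIN_HEALTHY
    print(f"{total} pieces: {margin} pieces may be lost before repair triggers")
```

With 80 pieces a segment tolerates 24 lost pieces before repair; with 60 it tolerates only 4, so routine node churn alone would push segments into the repair queue almost immediately.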

I do not know. But I would like to read ideas on that topic.
There are usually at least two approaches: make it economically unviable, or apply a technical solution. Lowering prices may not be a good way to solve it, but a VPN is not free, so less profit combined with more headache should reduce the number of such nodes.
The technical solution could be

To share unused space and bandwidth on hardware that is already online and paid for, where literally any income is pure profit and a nice discount on existing bills. The whole idea is to not make any investments in the first place. So, if you decided to invest, it's your responsibility now.

Unfortunately it will not prevent breaking the /24 subnet limit; more likely the reverse: you are paid more if you store more. And since egress is not paid, there is no incentive to have good upstream bandwidth; technically you would only need good downstream bandwidth to let customers fill your drives.
Combined with the customers' desire to use free egress, this ends in conflict: they would not be able to use that free egress, because the backing nodes would throttle it, so they could use our cloud only as a cold backup. Low revenue without paid egress, plus a high risk of losing data if we also implement your suggested lower redundancy.
Removing the /24 subnet limit would make the situation even worse: now you would have several pieces of the same segment in the same physical location. So, if your hardware is offline for any reason, the whole segment is in danger. Thus the probability of losing files, and then customers, becomes much higher.

No, it doesn't; more likely it will be exploited even more often. We saw this in V2, so it will likely repeat.
Maybe I didn't get it: how do you want to prevent placing more than one piece of the same segment in the same physical location?
If we used the current node selector, one node per /24 subnet, then that is exactly the limit we have now.
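For readers unfamiliar with the /24 limit: the selection rule amounts to grouping candidate nodes by their /24 network and taking at most one node per group. A minimal sketch (illustrative only, not the actual satellite selector; the function names are made up):

```python
# Sketch of "one node per /24 subnet" selection: nodes sharing a /24
# are treated as a single physical location for piece placement.
import ipaddress
import random

def group_by_subnet(node_ips):
    """Group candidate node IPs by their /24 network."""
    groups = {}
    for ip in node_ips:
        net = ipaddress.ip_network(f"{ip}/24", strict=False)
        groups.setdefault(net, []).append(ip)
    return groups

def select_nodes(node_ips, count):
    """Pick at most one node from each /24 subnet, up to `count` nodes."""
    groups = group_by_subnet(node_ips)
    picks = [random.choice(ips) for ips in groups.values()]
    return picks[:count]

nodes = ["203.0.113.5", "203.0.113.77", "198.51.100.9"]
# Two of these share 203.0.113.0/24, so only 2 nodes can be selected.
print(len(select_nodes(nodes, 10)))
```

Under this rule, running many nodes behind one /24 gains you nothing for a single segment, which is exactly why operators reach for VPNs to fake distinct subnets.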