But the rationale behind Hiroshima was that more lives in total would be saved by using the bomb than by not using it, even though I don’t doubt that Allied soldiers were foremost in the minds of those who made the decision. It could therefore be classed as utilitarian.
The government has never been honest about collateral damage before, so I wouldn’t trust the projections it offers to the public. But I think we can infer from other policies that a nation’s politics aren’t utilitarian on an international scale: immigration quotas and tariffs, for example, are hard to defend on a strictly utilitarian basis.
But is it possible to hope for a utilitarian outcome? That whoever has to make the decision says: I have a pretty good idea of what the toll will be if we don’t use it, I’m not sure of the toll if we do, but I’ll do it anyway and hope the books balance out in favor of my decision.
Utilitarianism would acknowledge that the action was good if the consequences worked out. However, in my personal brand of utilitarianism, I wouldn’t advocate making “unconventional” decisions in the face of great uncertainty. When the results are difficult to predict, I think it’s safer to fall back on rules that tend to maximize happiness in most cases, rather than act on what we hope will happen.
This reminds me of the St. Petersburg Paradox. Basically, the paradox arises because a particular game is designed so that the expected payoff is infinite, but it is only infinite because highly unlikely outcomes with enormous payoffs dominate the calculation. Most people would not actually pay more than a few dollars to play the game, hence the paradox. Applying the greatest happiness principle to high-risk, high-reward scenarios may result in similarly risky, counterintuitive behavior, hence my suggestion that we disregard low-probability payoffs.
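To make the arithmetic concrete, here is a minimal sketch, assuming one common formulation of the game: a fair coin is flipped until it comes up heads, and if heads first appears on flip k the payoff is 2^k dollars. The expected payoff is then (1/2)·2 + (1/4)·4 + (1/8)·8 + … = 1 + 1 + 1 + …, which diverges, yet a simulation shows that almost every individual play pays only a few dollars.

```python
import random

def play_st_petersburg():
    # One common formulation: flip a fair coin until it comes up heads;
    # if heads first appears on flip k, the payoff is 2**k dollars.
    # Expected payoff = sum over k of (1/2**k) * 2**k = 1 + 1 + 1 + ... (divergent).
    k = 1
    while random.random() < 0.5:  # tails: keep flipping
        k += 1
    return 2 ** k

# Simulate many plays: despite the infinite expectation, almost all
# payoffs are small, because the huge payoffs come from vanishingly
# unlikely long runs of tails.
payoffs = sorted(play_st_petersburg() for _ in range(100_000))
print("median payoff:", payoffs[len(payoffs) // 2])   # usually $2 or $4
print("mean payoff:  ", sum(payoffs) / len(payoffs))  # drifts upward with sample size, never settles
```

The gap between the divergent mean and the modest median is exactly why the infinite expectation fails to persuade anyone to pay a large entry fee.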
But what happens if more people die of their injuries later on? Does it then become wrong?
Assuming a more or less deterministic universe (yes, I know it isn’t deterministic at the quantum level), actions never become wrong after the fact. They are right or wrong from the moment they are taken; we just don’t figure out which until later.
Another question is how far along the causal chain we should go. If I do something that enables someone else to behave a certain way, and then they enable someone else, and so forth until someone does something wrong, what portion of the blame do I share, if any?
I think problems like this can be avoided once we realize that utilitarianism is not concerned with the moral status of particular actions or people, but rather with the big picture. It simply wants the greatest happiness at all times, regardless of how blame is distributed.