Are there absolute moral axioms?

People often debate issues in their own mind and at one point see A as the best choice, but at another point see that it is better not to choose A.
Yes, but following Aristotle, not at the same point. šŸ˜‰
 
So according to you, if she does kill her own child, she would be choosing the lesser evil.

Correction. It is not Sophie who kills her own child. It is the butcher. She is choosing to save the life of the other child, or else they will both be killed.
 
I said it was a slight modification of Sophieā€™s choice. I donā€™t think that my modification in any way affects my contention that there are cases where some people would choose the action which results in the greater of two evils.
I donā€™t doubt that this is very possible. But then you are describing not what *should have* happened, but what did happen. These people should have chosen the lesser evil.

People who choose the greater of two evils do not see clearly or are disposed to evil.

I remind you that throughout this thread when I referred to universal assent, I meant the universal assent of reasonable people.
 
How so? Sophie should not save the life of at least one of her children by choosing?
Sophieā€™s choice is a good example of a moral dilemma. Different people will approach it differently. I may come back to it later, but at this point in time, I donā€™t have too much more to contribute to what I have already said about this particular question.
 
Sophieā€™s choice is a good example of a moral dilemma. Different people will approach it differently. I may come back to it later, but at this point in time, I donā€™t have too much more to contribute to what I have already said about this particular question.
Iā€™m also out of gas. šŸ‘
 
The original scenario that Kant addressed was the question of what you would do if a known murderer asked you for the location of his next victim. I wanted more than one life to be at stake in the question, and the Gestapo example is a classic, so I went with that. I doubt that has any impact on Kantā€™s answer.

And to address Nihilistā€™s concerns, the following is from Wikipedia. I would just quote the passage from Kantā€™s work, but alas, I only own Critique of Pure Reason:

I emphasize the bolded portion. Kant truly believed that lying was absolutely, irrevocably wrong in every conceivable scenario. No ifs, ands, or buts about it. Since itā€™s always wrong to lie, even to prevent murder, then for all intents and purposes, Kant effectively believed that lying is just as bad as murder. There is no ā€œlesser evilā€ that becomes permissible in the presence of a greater evil in his framework. I will remind you that Kant has a considerable number of fans.

So you have a choice: Either your morals lack universal assent, or you need to invent an ad hoc reason to dismiss Kant and his fans.
Thanks for referring to that work. I was not familiar with it. I will have to read it.
 
I emphasize the bolded portion. Kant truly believed that lying was absolutely, irrevocably wrong in every conceivable scenario. No ifs, ands, or buts about it. Since itā€™s always wrong to lie, even to prevent murder, then for all intents and purposes, Kant effectively believed that lying is just as bad as murder. There is no ā€œlesser evilā€ that becomes permissible in the presence of a greater evil in his framework. I will remind you that Kant has a considerable number of fans.

So you have a choice: Either your morals lack universal assent, or you need to invent an ad hoc reason to dismiss Kant and his fans.
Perhaps you could refresh my memory. Does Kant ever talk about ā€œlesser evilsā€?

If so, where?
 
Could it be that in cases where we have to make a choice, then there is no moral component to the situation? You are simply making a practical decision.

If there is a choice involved ā€“ Do I steal? Do I cheat? Do I kill? ā€“ then there is a moral choice to be made.

In Sophieā€™s Choice, Sophie is not doing something evil by choosing. But an evil act has been done in that the person has made her choose. He was the one that had the choice to force her or not. His was the moral choice, not hers.

Similarly, if you have to shoot the guy trying to get into the lifeboat, itā€™s not a moral dilemma. Itā€™s purely practical. And having a late-term abortion? Itā€™s a choice and a moral consideration. Saving the life of the pregnant ten-year-old? A practical matter.
 
Perhaps you could refresh my memory. Does Kant ever talk about ā€œlesser evilsā€?

If so, where?
Iā€™m not sure if he ever explicitly mentions them. However, it seems to me that he cannot have admitted them in his philosophy. The whole idea of a lesser evil is that, when some evil is necessary, we try to ā€œminimizeā€ the evil committed, assuming that our morality admits gradation of evil.

Kant may have recognized some evils as worse than others, but hereā€™s the rub: There are no necessary evils in Kantā€™s framework. Remember that, for Kant, ā€œought implies canā€ and following the Categorical Imperative is never evil. Thus a person is always capable of doing what they ought to do without doing wrong (since compliance with the Imperative is, by definition, good). This is not interpreted as doing the thing that is ā€œless wrongā€, itā€™s simply not doing wrong at all.
Benjamin Constant critiques Kantā€™s categorical imperative about lying by raising the instance of the murderer who requests the location of his next victim. Are we to tell him the truth about the next victimā€™s location because we must always tell the truth?

Here is Kantā€™s reply:

bgillette.com/wp-content/uploads/2011/01/KANTsupposedRightToLie.pdf

What do you think?
Honestly I just disagree with the whole premise of deontology. Take the Categorical Imperative as an example. Kant was disappointed that the philosophers of his day only argued on the basis of what he called ā€œhypothetical imperativesā€, which prescribe certain actions in certain situations. He felt that this made morality too subjective and that it would be difficult to persuade people to behave in a certain manner based on morals that are made on a case-by-case basis.

Kantā€™s solution was the Imperative, which says that you should act in a manner that you would be okay with universalizing. So, for example, lying is deemed wrong because, if everyone lied, the ability to lie effectively would break down since there would be no trust. By stripping actions of their contexts, his imperative is categorical, not hypothetical.

But the obvious question is: Why should I be willing to universalize my actions? Not everyone is in the same situation Iā€™m in, so how could I recommend my strategy for living my life to everyone? Kant gives us the hint that he believes acting in accordance with the Imperative would be in societyā€™s best interest. But this seems dishonest, since Kant just told us that ignoring consequences is a good idea. Now heā€™s using them to justify his Imperative? Sounds fishy.

Personally, I agree with the poster (I think it was Bradski) who said that the moral ā€œWe should prevent needless human sufferingā€ works in most cases. I would generalize it to all sentient beings, actually, not just humans. Combine this with your moral that we should commit only lesser evils and, voila, you have the basic framework of utilitarianism, which advocates minimizing suffering (or maximizing happiness). It could do with some fine-tuning, but the basic premise of utilitarianism is pretty practical, I think.
 
In Sophieā€™s Choice, Sophie is not doing something evil by choosing. But an evil act has been done in that the person has made her choose. He was the one that had the choice to force her or not. His was moral choice, not hers.
Good.

Actually, though, she made a moral choice because her choice resulted in a good act, the saving of one child. Both good acts and bad acts are moral acts.
 
Kant may have recognized some evils as worse than others, but hereā€™s the rub: There are no necessary evils in Kantā€™s framework. Remember that, for Kant, ā€œought implies canā€ and following the Categorical Imperative is never evil. Thus a person is always capable of doing what they ought to do without doing wrong (since compliance with the Imperative is, by definition, good). This is not interpreted as doing the thing that is ā€œless wrongā€, itā€™s simply not doing wrong at all.
If Sophie had refused to choose, both children would have died. So she saved a life by choosing the lesser evil. Is that not a morally good act?
 
If Sophie had refused to choose, both children would have died. So she saved a life by choosing the lesser evil. Is that not a morally good act?
For the life of me, I cannot accept that choosing the child to die was an immoral act. I cannot accept that she was doing something wrong at that point. Therefore, I cannot accept that choosing the child to live could in any way be described as something good on her part. I donā€™t think you can class either as a moral decision.

What if it was one child and the person said I am going to shoot the kid or burn her alive. Can you in any way describe the lesser of the two evils as morally good?
 
Personally, I agree with the poster (I think it was Bradski) who said that the moral ā€œWe should prevent needless human sufferingā€ works in most cases. I would generalize it to all sentient beings, actually, not just humans. Combine this with your moral that we should commit only lesser evils and, voila, you have the basic framework of utilitarianism, which advocates minimizing suffering (or maximizing happiness). It could do with some fine-tuning, but the basic premise of utilitarianism is pretty practical, I think.
I also think Kant did not take choosing the lesser evil into consideration, as I canā€™t seem to find it discussed anywhere. Did he avoid the whole question because it didnā€™t fit his categorical imperative paradigm? Benjamin Constant tried to force him into a corner, but Kantā€™s answer seems unsatisfactory to me.

I think the ā€œlesser evilā€ dilemma does border on a utilitarian approach at times when numbers are involved. But I agree with Kant that utilitarianism is a very limited and subjective approach to ethics that skirts the question of universal axioms. Some acts just plainly should or should not be done, regardless of how many people are counted in the greater good or happiness of the greater number. And how on earth would anybody know for certain that by any specific act the greater good of the greater number had been served? Is this morality by public polling? šŸ¤·
 
For the life of me, I cannot accept that choosing the child to die was an immoral act. I cannot accept that she was doing something wrong at that point. Therefore, I cannot accept that choosing the child to live could in any way be described as something good on her part. I donā€™t think you can class either as a moral decision.
She prevented the murder of one son by making a horribly courageous decision. That is not a moral act?

I think it would have been an immoral decision to not make a choice, since both children would have been executed.

Iā€™m sticking with the lesser evil moral axiom. šŸ˜‰
 
She prevented the murder of one son by making a horribly courageous decision.
No doubt.
That is not a moral act?
I canā€™t see it as such, no.
Iā€™m sticking with the lesser evil moral axiom. šŸ˜‰
Then weā€™d probably end up doing the same thing, except weā€™d describe it differently. But I guess thatā€™s the point of the exercise - to reach agreement on the description of what we are actually doingā€¦
 
If Sophie had refused to choose, both children would have died. So she saved a life by choosing the lesser evil. Is that not a morally good act?
As I said, Iā€™m not entirely sure how Kant would have handled it. I do not agree with Kant, I am just using him as a counterexample to the claim that these morals are universally accepted. In fact, I think deontological approaches to morality in general disagree with the notion of ā€œlesser evilsā€ because they do not allow moral considerations to depend on consequences or circumstances.

If you choose a lesser evil to prevent a greater evil, you are using evil as a means to an end. Thatā€™s frowned upon in any deontological approach that I know of, although itā€™s perfectly acceptable by consequentialists.
I think the ā€œlesser evilā€ dilemma does border on a utilitarian approach at times when numbers are involved. But I agree with Kant that utilitarianism is a very limited and subjective approach to ethics that skirts the question of universal axioms.
I donā€™t see what you mean by saying it avoids asserting universal axioms. For example, the assertions that happiness is good and that actions should be judged by their impact on happiness are axioms of utilitarianism. You may state these slightly differently and prove their variants as ā€œtheoremsā€ of utilitarianism, but the basic idea that happiness is worth maximizing is assumed as a universal rule.

I concede that the actual application of utilitarianism may be a bit subjective, because happiness is a vague notion. Some forms of utilitarianism are more specific about which aspects of happiness should be maximized and how such a thing should be done. Personally, I avoid being too specific about happiness because it is, after all, a multi-faceted thing, and we risk oversimplifying it when we make specifications.
Some acts just plainly should or should not be done, regardless of how many people are counted in the greater good or happiness of the greater number.
I agree that utilitarianism doesnā€™t perfectly coincide with all of our moral intuitions. Frankly Iā€™ve yet to find a philosophy that does. I think the reason is that all too often our intuitions are inconsistent, so no consistent system will ever perfectly capture how we truly ā€œfeelā€ about whatā€™s right and wrong. Intuitions contradict each other all the time when you really examine them.
And how on earth would anybody know for certain that by any specific act the greater good of the greater number had been served? Is this morality by public polling? šŸ¤·
Itā€™s hard to be certain when the numbers are close, but usually itā€™s pretty obvious which actions bring about more happiness. Polling is, in my opinion, a great thing for morality. Morals are interested in human concerns, and what method is more efficient at ascertaining the issues we face as a society than polling?
 
Itā€™s hard to be certain when the numbers are close, but usually itā€™s pretty obvious which actions bring about more happiness.
Try this one:

I worry less about Rokoā€™s Basilisk than about people who believe themselves to have transcended conventional morality. Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. He has explicitly argued that given the choice, it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes. sbs.com.au/news/article/2014/07/18/comment-most-terrifying-thought-experiment-all-time
 
Try this one:

I worry less about Rokoā€™s Basilisk than about people who believe themselves to have transcended conventional morality. Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. He has explicitly argued that given the choice, it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes. sbs.com.au/news/article/2014/07/18/comment-most-terrifying-thought-experiment-all-time
There are many criticisms of utilitarianism along those lines. But Iā€™ve yet to see a plausible scenario in which torturing a few innocent people makes a large population happy in a way that couldnā€™t be accomplished without the torment.

If you need an outrageous thought experiment to make a moral code look bad, then that moral code is actually pretty good.
 