Is rational logic a proper tool of philosophy? Why? How?

Hah. Well that example is just a demonstration of how to prove a conjunction. I should also say that sentence logic can be done in first-order logic too. The above is the same as:

Hs
Nj

Hs&Nj

I wouldn’t call the above useless. As with so many things, when something is shown this trivially its usefulness can get lost. a+b=b+a sounds useless and trivial, but it’s an important property of addition.
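If it helps to see it cashed out mechanically, here’s a quick scratch check in Python of that same inference (just my own illustration, nothing from a textbook): it brute-forces the truth values and confirms that whenever both premises are true, the conjunction is true too.

from itertools import product

# My own scratch check: treat Hs and Nj as two atomic sentences and verify
# that every truth-value assignment making both premises true also makes
# the conclusion Hs & Nj true (i.e. the inference is valid).
def valid(premises, conclusion, atoms):
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

premises = [lambda v: v["Hs"], lambda v: v["Nj"]]
conclusion = lambda v: v["Hs"] and v["Nj"]

print(valid(premises, conclusion, ["Hs", "Nj"]))  # prints True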

And yeah, I’m always glad to talk about what I’ve learned. (Provided everyone understands that I’m not a teacher) Philosophy is a hard subject and the nuances matter. And philosophy is done best as a group sport.
Hs is just a simple atomic sentence in first-order logic, like A is for sentence logic. Let’s call it… um, “Sally is a human.” Nj is another one, and… um, we’ll say it means “Jerry is a ninja.”

So Hs&Nj is a compound sentence: Sally is human and Jerry is a ninja. Small letters a-w represent names in first-order logic; x, y and z are variables that we use with quantifiers. The quantifiers, and writing in the predicate-subject form, are the only differences between first-order and sentence logic (that I’ve learned about, anyway).
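If a concrete picture helps, here’s a throwaway Python sketch of that predicate-subject idea, with made-up names; a predicate is just the set of things it is true of.

# A made-up toy picture of the predicate-subject form: a predicate is the
# set of names it is true of, and "Hs" just asks "is s in that set?"
human = {"sally"}            # Hx: x is a human
ninja = {"jerry"}            # Nx: x is a ninja

s, j = "sally", "jerry"      # lower-case names, like s and j in the logic

Hs = s in human              # "Sally is a human"  -> True
Nj = j in ninja              # "Jerry is a ninja"  -> True
print(Hs and Nj)             # the compound sentence Hs & Nj -> True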
 
Well, without words defining or explaining your symbols, it becomes undecipherable and therefore unhelpful, if not useless.
Yeah, sorry. When you’ve been doing it for a while, the logical relationships within and between sentences start to feel like enough to work with, without giving them content and context. I’ll show the domains from now on.
 
What I meant is that there are things that the empirical sciences have nothing to say about. A premise that human beings have two eyes is empirically grounded. Things like free will, goodness, modality, identity, etc. are not so easily and surely settled by being measured and observed. We use logic in science all the time - yes. But there is a level of justification that seems more sturdy in those matters. Logic becomes even more important for the matters that science cannot speak on, I think, to establish the premises we’re working from. Philosophers are always going to argue about premises. I think scientists do too. They may dispute what the results of an experiment really mean, etc.
I’d like to chime in with regards to your comment.🙂 This is from the U.S. Department of Health and Human Services:

Recommendations to the Secretary
LIVING LIVER DONOR INFORMED CONSENT FOR EVALUATION
I am being given the choice to undergo surgery to remove a part of my liver, which will be transplanted into a potential recipient.

In order for me to make this decision, I must understand enough about its risks and benefits to make an informed decision. This process is known as informed consent. This consent form provides information about the surgery that will be discussed with me. Once all my questions have been answered, I will sign this form showing that I am, of my own free will, choosing to donate a part of my liver.

I understand that I cannot receive any payment or anything of value if I agree to be a donor.

I am free to ask any questions and I am free to change my mind and remove my consent at any time.

SURGERY

Interrupted Surgery

The evaluation process of the potential donor and recipient does not stop when the surgery begins. It continues throughout the surgery. If at any point the surgical team believes that I am at risk or that the segment of my liver is not right for transplant, the surgery will be stopped. This happens in the United States at least 5% of the time.

[. . .]
The poem on the previous page was written in 1927.😃
 
Let’s continue with more examples.
How about this one (something I pulled from a web site):
The Argument from Consciousness
  1. Intelligence is part of what we find in the universe. But this universe is not itself intellectually aware. As great as the forces of nature are, they do not know themselves.
  2. We experience the universe as intelligible. This intelligibility means that the universe is graspable by intelligence.
  3. Either this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence, or both intelligibility and intelligence are the products of blind chance.
  4. Not blind chance.
  5. Therefore this intelligible universe and the finite minds so well suited to grasp it are the products of intelligence.
No, it’s not well formed, but maybe we can use logic to put it into a better logical structure.
 
Well the crux of the argument is (3) and (4), from which (5) follows by disjunctive syllogism. The argument is valid, though (4) is highly contentious. I don’t know how that would be shown to be true. I suspect (1) and (2) bring some background to the argument but they don’t seem necessary for the conclusion as such.
 
Let’s not be concerned with the content or acceptability of the statements; you did say that it has structural validity. Yes, I see that 1) & 2) are a preamble to set up the dichotomy in 3). How can I make a start at least?

There is intelligence = Ix (there are x intelligent things)
Intelligence is not chance = Nc
God makes intelligence = Gi

if there is intelligence and the intelligence is not by chance then God’s Intelligence made it.

Ix Nc⊃Gi

God knows I’ve tried, but how do you do this, really?
 
Um. Okay. Firstly the predicates you’ve set up are backwards. God makes Intelligence should be Ig. The predicate is the capital letter, and the variable or name is the lower case. I think for this argument all you really need is this:

Ix= X is the product of Intelligence
Cx= X is the product of chance
u= The universe.

Iu v Cu
~Cu

Iu

The information in 1 and 2 of the argument might be rhetorically useful, but I don’t see a logical connection there, at least in the logic I’ve studied. (Recall I’m not the best at this, so I might be wrong.) It seems to me that what I’ve listed is the crux of the argument and what’s really important. Part of philosophy is taking arguments and getting the important parts out without all the window-dressing.
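Just to satisfy myself that the disjunctive syllogism really is truth-functionally valid, here’s a quick brute-force check in Python (my own scratch work, nothing official):

from itertools import product

# Check that Iu v Cu, ~Cu therefore Iu is valid: run through every
# truth-value assignment and look for one that satisfies the premises
# but not the conclusion.
valid = True
for Iu, Cu in product([True, False], repeat=2):
    premises_true = (Iu or Cu) and (not Cu)
    if premises_true and not Iu:
        valid = False
print(valid)  # True: every assignment satisfying the premises satisfies Iu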
 
Sorry, it’s been a while, but for great reasons; we have family in town for a Baptism!

~ (not) . . . & (and) . . . | (inclusive or) - only “false | false” is false, other 3 combinations are true
. . .
antecedent ⊃ consequent (conditional, if-then, or implication) - false only when the antecedent is true and the consequent is false (if rain [antecedent] then wet [consequent]; still true if the sprinkler, rather than rain, made things wet)
. . .
≡ (biconditional, if-and-only-if statement) - true when both atomic sentences connected are true, or when both are false (“Socrates is mortal if and only if Socrates is a human” can be “M≡H”. M is true - Socrates is mortal. H is true - Socrates is a human. So the compound sentence is true. “Santa Claus is real if and only if I live on Mars” is also true, because S≡R is the symbolization and both S and R are false. “Socrates is mortal if and only if I live on Mars” is false: the first part about Socrates is true, but the part about me is false)
. . .
∃ (existential quantifier) - states that something exists. (∃xPx: there exists an x such that x is a peach)
. . .
∀ (universal quantifier, ‘everything’) - ∀xMx: ‘for all x, x is made of matter.’

OK, “v” = “exclusive or” is not in our cheat sheet, but it’s great that such a simple example brought out more of how to use the capital letter as the predicate and the lower-case letter after it as the subject.
 
There are a bunch of other logical connectives, such as the ‘exclusive or’, but I’ve no experience working with them really. I do know that ‘exclusive or’ requires the disjuncts to have different truth-values for the disjunction to be true, though.

The only other thing that looks out of place is your truth tables for ~ and &, but that might be because I’m confused. So I’ll recapitulate.

Negation flips the truth-values of a proposition. So, if P is true, ~P is false. If ~P is true, then P is false.

Conjunction requires both conjuncts to be true for the conjunction to be true. A&B is true only when both A and B are true. If either A or B is false, then the conjunction A&B is false.
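If it’s any use for the cheat sheet, here’s a little table-printing sketch I threw together (again, just my own scratch code) covering ~, &, inclusive or, exclusive or, ⊃ and ≡:

from itertools import product

# Scratch truth tables for the connectives we've been using, plus
# exclusive or for comparison (1 = true, 0 = false). Exclusive or differs
# from inclusive or only on the true/true row.
print("P Q | ~P  &  v  xor  >  =")
for P, Q in product([True, False], repeat=2):
    row = [not P, P and Q, P or Q, P != Q, (not P) or Q, P == Q]
    print(int(P), int(Q), "|", "  ".join(str(int(x)) for x in row))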
 
Digging for more material to practice upon I ran into the following:

Logic Exercises (for Freshman Composition)

The Three Laws of Robotics
  1. A robot may not harm a human being, or through inaction, allow a human being to come to harm.
  2. A robot must follow the orders given it by a human being except where such orders would conflict with the First Law.
  3. A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.
    – Handbook of Robotics, 56th Edition, 2058 AD
    (via the science-fiction author Isaac Asimov)
You are a robot, programmed according to the Laws of Robotics, as stated above. On a piece of scratch paper, sketch out your logical reactions to the following situations:
#1. A huge tree is about to fall on a child playing on the other side of the street. A crossing guard is holding a “Stop” sign at you, preventing you from getting to the child. What do you do? Explain every step of your reasoning.
#2. The situation is the same as in #1, except now you realize that the tree will crush you if you try to save the child. In fact, you’ll be crushed before you can even get to the child. Based on these rules, what do you do?
#3. In a completely different situation, your owner orders you to jump in front of a speeding bus.
#4. The situation is the same as in #3, except now your owner’s ex-girlfriend is on the bus, you are a huge industrial robot with glittering tritanium armor plates, and you weigh twice as much as the bus does.
#5. Same as #4, except now the bus is about to run over your owner.
 
This looks more like what I would call critical reasoning than an exercise in formal logic. The connection between logic and critical reasoning is pretty clear, I think. Um… we could parse everything up into symbolic logic, but I don’t think that’s necessary? I think an informal explanation of each answer is all that’s required. There’s an ambiguity at play with the word “logic” these days. So far all I’ve been discussing is formal philosophical logic, not the broader sense I suspect the above means.

I think the above shows that sometimes an appeal to formal symbolic logic makes things more complicated. #1, #3 and #4 have clear answers. #2 is tricky. #5 leads to a paradox.
 
Yes, this is a composition exercise and more designed around “critical thinking”, but with hard and fast rules being applied in a robotic manner, I thought we could explore first the simple and then the paradoxical problems in a formal logical manner to see what happens.

The test is how the student resolves conflicts and whether he/she does so consistently.
I would add the following “rule” to resolve any internal conflict within a rule:

When action or inaction is equally forbidden by one of the rules it is void and only the remaining laws shall be used to determine the robot’s action or inaction.

(I’m also keeping it simple, where it would only compare any human harm vs. any human harm. The robot is not to regard multiple lives lost as greater human harm than one human’s harm. Doing the opposite may be another consistent approach, but as I said, we’re keeping the robot’s decisions as simple as possible.)

Then there is the possibility of a third choice: taking action while disobeying the human. Do we assume the robot is incapable of saving the owner without stopping the bus? Also, do we assume the robot is capable of arriving at a third option on its own?
 
Formal logic is used for analyzing arguments. I don’t know how well it’ll lend itself to this. However logical concepts are something we use all the time. I think in this instance, entailment is playing a big part. Entailment holds between statements when one ‘logically follows’ from another. It is a strong relationship between statements. This is what gives logic the ‘if the premises are true, then the conclusion must be true’ quality. For instance, consider this set-up:

Sx= X is shining
Bx= X is blue
s= the sun
k= the sky

Using this domain, we can say some things. Like this.
1)Ss
2)Bk

The first line above states “the sun is shining” and the second line states “the sky is blue.” Now, those two statements together entail the statement Ss&Bk - the sun is shining and the sky is blue. (I know, it’s trivial, but it shows the idea.) Here’s another:
  1. ~Ss⊃~Bk
  2. Bk
So the above states “if the sun is not shining, then the sky is not blue” and it also states “The sky is blue.” Together the statements entail another statement: “the sun is shining.” We use this idea all the time. If we wake up and see the car not in the driveway, then we assume that someone else is using it. The latter is entailed by the former. So if we take our Robot Rules as givens, and a situation as a given, we can see what results are entailed. For example, in #1 I think that the robot saving the child is entailed by the rules and situation. The robot must save the human, and that overrides the command to stop. I think #5 is the only example that leads to a paradox because no matter what happens, a human’s life is in danger. If the robot stops the bus, then people will be hurt on the bus but if the robot doesn’t stop the bus, then the owner will be hurt.
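Here’s that last entailment checked mechanically, purely as a sanity check on my part (my own sketch, same brute-force idea as before):

from itertools import product

# Premises: ~Ss > ~Bk, and Bk. Claimed conclusion: Ss.
# Check that every valuation making both premises true also makes Ss true.
entailed = True
for Ss, Bk in product([True, False], repeat=2):
    p1 = (not (not Ss)) or (not Bk)   # ~Ss > ~Bk as a material conditional
    p2 = Bk
    if p1 and p2 and not Ss:
        entailed = False
print(entailed)  # True: the premises entail Ss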
 
I’ve taken long enough; you’re probably losing interest. Sorry, but I’ve had to restart working on this a number of times, and I intended to add a statement voiding any rule that was internally conflicting, but thought I’d better get this much out and see if you think this is a good representation in first-order logic. I struggled with whether the first law should be a conditional or a biconditional.

Copy of Asimov’s fictional three laws of robotics:
  1. A robot may not harm a human being, or through inaction, allow a human being to come to harm.
  2. A robot must follow the orders given it by a human being except where such orders would conflict with the First Law.
  3. A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.
    – Handbook of Robotics, 56th Edition, 2058 AD
The Three laws restated in first order logic:

F = Follows action of command
T = Takes action to prevent any human’s harm
H = harm
h = humans
P= takes action to prevent own harm
r = robot
  1. Tr ≡ Hh (Robot takes action to prevent any human’s harm if and only if a human will be harmed)
  2. Fr ≡ ~Hh (Robot follows human command if a human will Not be harmed)
  3. Pr ≡ ~Hh (Robot takes action to prevent own harm if and only if a human will Not be harmed)
 
So long as people are happy to hear what I’ve got to say, I’ll be happy to say it. 🙂

So, I’m sure the rules can be written into logic. That’s why programming logic is a thing. It’s not like the arguments I’m parsing through, but I can try to regiment the sentences into first-order logic at least. There are a couple of book-keeping mistakes in your domain, though, that I need to address.
  1. Predicates need to be listed in “Ax” or “Axy” or “Axyz” form, depending on how many places the predicate has. (I don’t know if I talked about multi-place predicates, so I’ll discuss them now.) A one-place predicate usually expresses a property: Tx = X is tall. Multi-place predicates usually express a relationship: Bxy = X is bigger than Y. And then a three-place predicate might be, say, Txyz = X is between Y and Z.
  2. “Names”, which are the a-w constants, need to be singular. For instance, h=Harry is fine; h=humans is not, because what you are saying is ALL humans. That requires a universal quantifier to express. So in your domain, h could be one certain human and r could be one certain robot. But if we’re discussing more than just that singular defined object, we need to use a variable and a quantifier.
All that being said, here’s my crack at regimenting the sentences into first-order logic. I make no claims about the correctness of my translation, but I can safely say that a correct translation would look similar to what I come up with. Someone who’s better at logic can clean up my symbolization. They probably won’t be elegant translations either; a proper logician can fix this up. I’ll give the translation back into English under it.

Domain: All things.
Rx=X is a robot
Hx=X is a human
Axy=X may harm Y
  1. ∃x∃y(Rx&Hy) & ∀x∀y((Rx&Hy)⊃~Axy)
    There exists a robot and a human, and, for any x and y, if x is a robot and y is a human, then x may not harm y.
Erm, 2 and 3 are more difficult. They’ll take multiple sentences to regiment. Or, at least, longer more complex sentences.
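One thing I sometimes do to see whether a translation like (1) behaves the way I want (purely my own exercise, so take it with a grain of salt) is to test it against a tiny made-up model in Python. “robbie” and “harry” below are invented names, and may_harm is the extension of Axy as a set of ordered pairs:

from itertools import product

# Tiny made-up model: a two-object domain plus the extensions of R, H, A.
domain = {"robbie", "harry"}
robot = {"robbie"}                 # Rx
human = {"harry"}                  # Hx
may_harm = set()                   # Axy: x may harm y (empty here)

# Ex Ey (Rx & Hy)
exists_part = any(x in robot and y in human
                  for x, y in product(domain, repeat=2))

# Ax Ay ((Rx & Hy) > ~Axy)
forall_part = all((not (x in robot and y in human)) or ((x, y) not in may_harm)
                  for x, y in product(domain, repeat=2))

print(exists_part and forall_part)  # True in this little model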
 
Multi-place predicates are a powerful part of the logic language. Thanks for introducing them.

This is still showing how far I need to go. Much as in my English language writing I expect too much out of too few symbols or words.
That is quite a good deal of information in that example.

Yet maybe a few more, smaller steps are needed before heading to 2) or 3).
 
Translation was always the hardest part for me, especially for complex arguments - mostly because improper translations make deductions using the symbols next to impossible.
 
My modem has been giving out, and I finally got a new one in the mail. I was willing to go two miles to buy one, but was vetoed over the two dollars more plus the gas difference.
We were really getting upset having to reset the old one to get it to work for 15 to 40 minutes at a time.

Now that it’s working well, the whole family is using it. I needed to get on via my iPad, and I’m noticing how clunky it’s getting. Settings says it’s up to date with OS version 5.1.1, while I’m hearing that OS 8 is going to come out soon. Can you imagine GM trying to say, “I’m sorry, your 2007 car is not going to get software updates to run the available fuel injectors”? Luckily, four years ago I loaded a non-Safari browser that doesn’t drop out on websites with newer controls. This site, though, has not changed, so Safari would likely be fine with it.

Anyway, let me ask more about how to use multi-place predicates, and maybe you can gin up something about them to put in our cheat sheet.
 
Sure. I’ll give you a brief outline.

So a predicate in logic (and maybe in natural language too, I’m not a linguist) organizes information. “Bill is tall” puts the object, Bill, in relation to the property “tall.” We represent that with the first-order symbolization “Tb”, and this also signifies that Bill is in the set of tall things. Some properties are relations, and that’s where multi-place predicates come in. “Bill is taller than Steve” can be symbolized as “Tbs”, for instance (depending on how we set up our domain). These take ordered pairs, which means that the order of the objects matters. The predicate ‘taller than’ applies to ordered pairs: X is taller than Y. You can put anything in for X and Y and it’ll give you an intelligible sentence that can be tested for truth.

We stipulate these predicates. We can take them out to ordered triples (X is between Y and Z), ordered quadruples (X is between Y and Z but north of A), and so on. All that matters is how we define our predicate. In studying meta-ethics, the philosopher T. M. Scanlon argues that having a reason involves a complex four-place predicate, for instance. So I’ll leave it at that and see if I can answer questions.
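And, for the cheat sheet, here’s roughly how I picture multi-place predicates in a quick Python sketch: as sets of ordered pairs and triples. The names here are just made up for illustration.

# My own made-up example of multi-place predicates as sets of ordered
# pairs and triples; order matters.
taller_than = {("bill", "steve")}               # Txy: x is taller than y

print(("bill", "steve") in taller_than)         # True  -> Tbs
print(("steve", "bill") in taller_than)         # False -> order matters

between = {("denver", "la", "chicago")}         # Bxyz: x is between y and z
print(("denver", "la", "chicago") in between)   # True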
 