How do you know?

Thread starter: Solmyr
The Turing test is deficient, as shown by Searle in his famous Chinese Room Argument.

plato.stanford.edu/entries/chinese-room/
The Chinese Room argument has flaws.
"Artifical Intelligence, a Modern Approach" (Second Edition)
Stuart Russell and Peter Norvig

…]
Now we are down to the real issues. The shift from paper to memorization is a red herring, because both forms are simply physical instantiations of a running program. The real claim made by Searle rests upon the following four axioms (Searle, 1990):

  1. Computer programs are formal, syntactic entities.
  2. Minds have mental contents, or semantics.
  3. Syntax by itself is not sufficient for semantics.
  4. Brains cause minds.

From the first three axioms he concludes that programs are not sufficient for minds. In other words, an agent running a program might be a mind, but it is not necessarily a mind just by virtue of running the program. From the fourth axiom he concludes “Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.” From there he infers that any artificial brain would have to duplicate the causal powers of the brain, not just run a particular program, and that human brains do not produce mental phenomena solely by virtue of running a program.

The conclusion that programs are not sufficient for minds does follow from the axioms, if you are generous in interpreting them. But the conclusion is unsatisfactory - all Searle has shown is that if you explicitly deny functionalism (that is what his axiom (3) does) then you can’t necessarily conclude that non-brains are minds. This is reasonable enough, so the whole argument comes down to whether axiom (3) can be accepted. According to Searle, the point of the Chinese room argument is to provide intuitions for axiom (3): the intuition that mere programs cannot generate true understanding.

To reiterate, the aim of the Chinese Room argument is to refute strong AI - the claim that running the right sort of program necessarily results in a mind. It does this by exhibiting an apparently intelligent system running the right sort of program that is, according to Searle, demonstrably not a mind. Searle appeals to intuition, not proof, for this part: just look at the room; what’s there to be a mind? But one could make the same argument about the brain: just look at this collection of cells (or of atoms), blindly operating according to the laws of biochemistry (or of physics) - what’s there to be a mind? Why can a hunk of brain be a mind while a hunk of liver cannot?

Furthermore, where Searle admits that materials other than neurons could in principle be a mind, he weakens his argument even further, for two reasons: first, one has only Searle’s intuitions (or one’s own) to say the Chinese room is not a mind, and second, even if we decided the room is not a mind, that tells us nothing about whether a program running on some other physical medium (including a computer) might be a mind.

Searle allows the logical possibility that the brain is actually implementing an AI program of the traditional sort - but the same program running on the wrong kind of machine would not be a mind. Searle has denied that he believes that “machines cannot have minds”; rather, he asserts that some machines do have minds - humans are biological machines with minds. We are left without much guidance as to what types of machines do or do not qualify.
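To make that “generous interpretation” concrete: under a strong reading of axioms (1) and (3) - programs are purely syntactic, and pure syntax excludes semantics - the inference does go through mechanically. Here is a minimal Lean sketch; the predicate names and the strengthened readings are mine, not Searle’s or the authors’:

theorem searle_shape {α : Type}
    (RunsOnlyProgram PurelySyntactic HasSemantics Mind : α → Prop)
    -- Axiom 1, strengthened: merely running a program makes a system purely syntactic.
    (ax1 : ∀ x, RunsOnlyProgram x → PurelySyntactic x)
    -- Axiom 2: minds have mental contents, i.e. semantics.
    (ax2 : ∀ x, Mind x → HasSemantics x)
    -- Axiom 3, strong reading: a purely syntactic system has no semantics.
    (ax3 : ∀ x, PurelySyntactic x → ¬ HasSemantics x) :
    -- Conclusion: running a program is not sufficient for being a mind.
    ∀ x, RunsOnlyProgram x → ¬ Mind x :=
  fun x hp hm => ax3 x (ax1 x hp) (ax2 x hm)

Under the weak reading of axiom (3) - syntax does not guarantee semantics - the conclusion no longer follows, which is exactly why the whole argument comes down to whether axiom (3) is accepted.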

By coincidence I’m about to take a stab at using Microsoft’s Cognitive Services. They just released them at the //build/ conference last week. Not that I care whether my results are coming from a machine that understands, a machine that doesn’t understand, or very fast human beings working in a room, so long as the results are mostly appropriate for my needs.
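For the curious, calling one of these services is just an HTTP request. A minimal Python sketch; the endpoint URL, key, and request shape are placeholders of mine, not the actual Cognitive Services contract:

import requests

# Placeholder endpoint and key - the real Cognitive Services URLs and
# response schemas are documented by Microsoft and may differ.
ENDPOINT = "https://example.cognitive.microsoft.com/analyze"
API_KEY = "YOUR-SUBSCRIPTION-KEY"

def analyze(text):
    """POST text to the (hypothetical) analysis endpoint; return parsed JSON."""
    response = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={"documents": [{"id": "1", "text": text}]},
    )
    response.raise_for_status()
    return response.json()

print(analyze("Fast humans in a room, or a machine? Appropriate results are what matter."))

Whether a human, an understanding machine, or a Chinese-room-style pipeline sits behind that URL is invisible from this side of the request, which is rather the point.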
 
Ironically, the suggestions given all contribute to a potential sophistication of the OT machines that would be imitators of something. For me the answer is a little more complex. Where two or three are gathered in his name, there he is in their midst. How could one get a sense of Him from conversation with a machine? Yet granting that the intention is good in creating the machine’s dialogic capacity, there still remain certain intangible issues that seem impossible to “fake”. Unfortunately, this discussion leads to the erroneous proposition/conclusion that he who can discern the Lord correctly in such a dialogue is more advanced spiritually than he who cannot. Are we ready to call the mathematically uninclined simpletons by comparing them to those who can supposedly read a soul’s presence vs. a machine’s imitation of it?
Perhaps considering factors in real relationships is a starting point. Is the conversation with the entity dominated by one party or the other? Is the conversation honest? Is the entity capable of being transparent with regard to feelings and intentions?
Example:
Lots of talk about upcoming events …
What am I feeling now?

A sensitive listener would respond in a way that a machine cannot because context is relative. For example, idiom, body language, etc. might let a person know that someone just experienced a bitter separation. Thus, with no explicit mention of pain or betrayal or any of the nasty things that go along with such unfortunate events, even a happy, jovial conversation might turn to “Are you ok?” How could a machine ever accomplish such a task, which necessarily involves intuiting nonverbal cues that in turn could be faked by another machine, with an endless recursion of suspicion?
Sadly, such OT machines spoil it for everyone interested in genuine heartfelt relationships, such as where the Lord calls us his friends.
 
To reiterate, the aim of the Chinese Room argument is to refute strong AI - the claim that running the right sort of program necessarily results in a mind. It does this by exhibiting an apparently intelligent system running the right sort of program that is, according to Searle, demonstrably not a mind. Searle appeals to intuition, not proof, for this part: just look at the room; what’s there to be a mind?
The above is incoherent. Are there some parts that were deleted?
 
@Thinking Sapien.
Since the mind is transcendent with respect to the brain in a system that regards a disunity of parts comprising a tenuous whole bound by no presiding spirit, it makes sense that a mind is “manifest” by a certain specific type or level of programming. A child, for example, knows little of the difference between a stuffed animal and a real one. The mind of his toy is equivalent to the mind of a real animal. As the child grows into later stages of life, he becomes less and less satisfied with his toy, his animals, and even with certain types of persons for the satisfaction of his emotional needs. However, the right kind of program in a person is not so much a condition of a mind as it is indicative of a person with a mind. The mind therefore is arbitrary only with respect to aesthetics.
For a task oriented individual, mastery of his task is sufficient to establish his mind.
It is sad for me to contemplate that unless one has a certain level of mastery according to one’s expected role in life, that one is left to think he has no mind.
 
I am so glad you said the bold. When I read some big words I often say, why?
I guess we have a misunderstanding. I meant we do not need to explore those words **at this moment**.
The word ‘God’ alone should be able to be reasonably ‘understood’ fairly quickly: all the fancy words someone can think up mean one ‘being’ that was, is, and will be, and that ‘being’ is the ultimate source of everything anyone can think up.
It is never simple. The word “god” or “God” can have many meanings. Even in Christian theology there are (at least) two approaches. One school uses the “positive” method, declaring the alleged attributes of God. The other school uses the “negative” theology, talking about what God is NOT.
Understanding what happened is not necessary to accomplish the goal which is to move the case along.

I think if you asked prosecutors, investigators, and judges why someone kills, they might shrug in the same way you and I might say ‘how could someone do that horrible thing!’.
Very true, but that is not the topic of this thread. To find out “WHY” someone has performed an act is a job for psychiatrists. I am only interested in understanding what a person asserts, not why he asserts it. The judges, prosecutors, and investigators are only interested in WHAT the accused says, not WHY he says it. It is so easy to misunderstand the topic… that is why we need to declare what we are talking about.

The question is: “does the other party understand the question”, or not? Let’s take a real world example, the entity called Watson, a computer which beat the living “daylights” out of the best human players on Jeopardy. I hope you are aware that the clues on Jeopardy are intentionally misleading and hazy, so as to confuse the players. Watson was able to “decipher” the exact meaning of the clues better than the very best humans could. Now it is pretty obvious that the humans “understood” the clues. What about Watson?

It is easy to declare that a computer cannot “understand” the question, but this leads back to the OP. HOW do you know if there is an understanding, or not? HOW do you gauge (or measure) understanding? That is the fascinating question I try to explore.
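One crude, purely behavioral way to gauge it - a sketch of my own, not an established test: probe the same fact through several paraphrases and score the answers for agreement. A system that merely pattern-matches one phrasing tends to fail the rephrasings, while humans (and, apparently, Watson) survive them. In Python:

def consistency_probe(answer, paraphrases):
    """Ask the same question several ways; score agreement across the replies.

    `answer` is any callable mapping a question string to an answer string.
    Note this measures behavior only - it cannot separate "real" from
    "simulated" understanding, which is the OP's puzzle.
    """
    replies = [answer(q).strip().lower() for q in paraphrases]
    agreement = sum(r == replies[0] for r in replies) / len(replies)
    return replies, agreement

# Three phrasings of one Jeopardy-flavored clue.
paraphrases = [
    "Which computer beat the best human players on Jeopardy?",
    "Name the machine that won at Jeopardy.",
    "On Jeopardy, the top humans lost to what computer?",
]
replies, score = consistency_probe(lambda q: "Watson", paraphrases)
print(replies, score)  # a perfectly consistent answerer scores 1.0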
 
The Chinese Room argument has flaws.
Many flaws. No one tries to assert that the individual components (be they people in the Chinese room, or the processors in a computer, or the neurons in the brain) “understand” the question. That would be straight reductionism, and that is only asserted by those who are ignorant in the extreme. Those people have probably never heard of distributed processing, and if they have, they don’t understand it. Not even a simple thing like pleasure or pain can be “reduced” to individual neurons.
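A toy illustration of that, my own construction: a three-“neuron” threshold network that computes XOR. Neither hidden unit computes XOR by itself - one is an OR detector, the other an AND detector - yet their combination does. The function lives in the arrangement, not in any single unit, which is the sense in which nothing here “reduces” to one neuron.

def step(x):
    """A bare-bones threshold 'neuron': fire (1) if input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or = step(a + b - 0.5)         # fires if a OR b
    h_and = step(a + b - 1.5)        # fires if a AND b
    return step(h_or - h_and - 0.5)  # fires if OR but not AND: XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))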
 
The above is incoherent. Are there some parts that were deleted?
I omitted the explanation of what the Chinese room experiment is, which appeared before the above section. I always mark where text has been omitted with …]. I figure those that are not familiar with it can follow the link you posted. I reread what I posted and what was in the book without seeing that there’s anything else omitted from that point on.

Since in this forum when people use the term AI they usually seem to be referring to what is known as “Strong AI” I didn’t bother to define the term “Strong AI.” But for what it’s worth:

"…] the assertion that Machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulated thinking) is called the strong AI hypothesis.

…]Most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis – as long as their program works, they don’t care whether you call it simulation of intelligence or real intelligence…]Arguments for and against strong AI are inconclusive. Few mainstream AI researchers believe that anything significant hinges on the outcome of the debate"
 
I omitted the explanation of what the Chinese room experiment is, which appeared before the above section. I always mark where text has been omitted with …]. I figure those that are not familiar with it can follow the link you posted. I reread what I posted and what was in the book without seeing that there’s anything else omitted from that point on.

Since in this forum when people use the term AI they usually seem to be referring to what is known as “Strong AI” I didn’t bother to define the term “Strong AI.” But for what it’s worth:

"…] the assertion that Machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulated thinking) is called the strong AI hypothesis.

…]Most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis – as long as their program works, they don’t care whether you call it simulation of intelligence or real intelligence…]Arguments for and against strong AI are inconclusive. Few mainstream AI researchers believe that anything significant hinges on the outcome of the debate"
Very good summary to show how useless the navel-gazing of the philosophers is when compared to the actual work of the AI researchers. It reminds me of the old joke, which goes:
  • Those who know it, do it.
  • Those who cannot do it, teach it.
  • Those who cannot even teach it, manage it.
  • Those who are unable to even manage it, philosophize about it… 🙂
The basic question is, of course, what is the difference between a “simulation” and the “real McCoy”, if one cannot find out which is which. 🙂 Using a simple analogy, if one has two armchairs, one covered in “real” leather and the other in “Naugahyde”, and there is no method to find out which is which… then who the heck cares?
 
Very good summary to show how useless the navel-gazing of the philosophers is when compared to the actual work of the AI researchers. It reminds me of the old joke, which goes:
  • Those who know it, do it.
  • Those who cannot do it, teach it.
  • Those who cannot even teach it, manage it.
  • Those who are unable to even manage it, philosophize about it… 🙂
The basic question is, of course, what is the difference between a “simulation” and the “real McCoy”, if one cannot find out which is which. 🙂 Using a simple analogy, if one has two armchairs, one covered in “real” leather and the other in “Naugahyde”, and there is no method to find out which is which… then who the heck cares?
The above philosophical treatise is a great example of why philosophy is so important.

Thank you! 🙂
 
Suppose you have a telephone conversation with “someone”. You don’t know if the other party is a human or a sophisticated AI. The question is:

How do you know if the other party UNDERSTOOD your questions and propositions? In other words, how does UNDERSTANDING manifest itself?
I think that besides getting feedback as to what their understanding is, in their own words, one should ask how they think that understanding fits in with their goals, if it does, and how it does not, if it doesn’t.

Computers may simulate understanding, but I doubt they can simulate having human goals and, even if lying about goals as well as understanding, cross-link the two lies ad hoc.

peace
steve
 
I think that besides getting feedback as to what their understanding is, in their own words, one should ask how they think that understanding fits in with their goals, if it does, and how it does not, if it doesn’t.
A good part of the Turing test. 🙂
Computers may simulate understanding, but I doubt they can simulate having human goals and, even if lying about goals as well as understanding, cross-link the two lies ad hoc.
What is the difference between “real” and “simulated” understanding? How can you tell the difference?

Let’s take another example:
  1. the original version of a medication and
  2. the generic version of it.
They are both made of the same ingredients; therefore they are both equally effective. The pills look and perform identically, even though they are produced by different companies. The question is again: “how do you tell the difference”? and “why on Earth would you care”?
 
Very good summary to show how useless is the navel-gazing of the philosophers when compared to the actual work of the AI researchers.
I’ve got a hunch that the authors of this book have a view that is compatible with your own. There’s a chapter on philosophy; it’s the very last chapter in the book. If I had to choose one sentence that I think expresses the authors’ view on philosophers, it would be the following.

This might not be feasible for large k, but philosophers deal with the theoretical, not the practical.

Don’t be distracted by the algebraic ‘k.’ Its usage doesn’t seem to have an impact on the interpretation of the sentiment expressed above. The authors also seem to think that a lot of the disagreements are semantic. I’ve probably quoted more than enough from them, so the following is heavily abridged.

Furthermore, [philosophers] have traditionally posed the question as, “Can machines think?” Unfortunately, this question is ill-defined. To see why, consider the following questions:
  • Can machines fly?
  • Can machines swim?
Most people agree that the answer to the first question is yes, airplanes can fly, but the answer to the second is no; boats and submarines do move through the water, but we do not call that swimming. However, neither the questions nor the answers have any impact at all on the working lives of aeronautical and naval engineers or on the users of their products. The answers have very little to do with the design capabilities of airplanes and submarines, and much more to do with the way we have chosen to use words. The word “swim” in English has come to mean “to move along in the water by movement of body parts,” whereas the word “fly” has no such limitations on the means of locomotion.[1]

[1] - In Russian the equivalent of “swim” does apply to ships.
 
The pills look and perform identically, even though they are produced by different companies. The question is again: “how do you tell the difference”? and “why on Earth would you care”?
It sounds like one would be said to have all the virtues of the other, or, to use other language, they would be virtually the same. This concept is found all throughout areas of computer science, not just A.I. One might often encounter references to “virtual machines,” which could be a real physical machine or a logical equivalent written in software (an emulator). A software developer can just develop for the virtual platform without concern for which implementation of a machine will be running it.
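A minimal Python sketch of that idea (all the names are mine, purely illustrative): the client code is written once against an abstract machine, and it runs unchanged whether the machine underneath is “native” or an emulator.

from abc import ABC, abstractmethod

class Machine(ABC):
    """The virtual platform: callers see only this interface."""
    @abstractmethod
    def add(self, a: int, b: int) -> int: ...

class NativeMachine(Machine):
    def add(self, a, b):
        return a + b  # stands in for a direct hardware implementation

class Emulator(Machine):
    def add(self, a, b):
        for _ in range(b):  # deliberately different internals: repeated increment (assumes b >= 0)
            a += 1
        return a

def client_code(m: Machine) -> int:
    # Written with no knowledge of which implementation will run it.
    return m.add(40, 2)

print(client_code(NativeMachine()), client_code(Emulator()))  # 42 42

Whether the “machine” is physical or simulated makes no difference to the client, which mirrors the leather-versus-Naugahyde question above.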
 
I omitted the explanation of what the Chinese room experiment is, which appeared before the above section. I always mark where text has been omitted with …]. I figure those that are not familiar with it can follow the link you posted. I reread what I posted and what was in the book without seeing that there’s anything else omitted from that point on.

Since in this forum when people use the term AI they usually seem to be referring to what is known as “Strong AI” I didn’t bother to define the term “Strong AI.” But for what it’s worth:

"…] the assertion that Machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulated thinking) is called the strong AI hypothesis.

…]Most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis – as long as their program works, they don’t care whether you call it simulation of intelligence or real intelligence…]Arguments for and against strong AI are inconclusive. Few mainstream AI researchers believe that anything significant hinges on the outcome of the debate"
Actually, I get that part.

I just didn’t understand that your explication omitted a few words, and perhaps some punctuation:

To reiterate, the aim of the Chinese Room argument is to refute strong AI - the claim that running the right sort of program necessarily results in a mind. It does [seem?] to be exhibiting an apparently intelligent system[.] Running the right sort of program that is, according to Searle, [that] demonstrably [is] not a mind. Searle appeals to intuition, not proof, for this part: just look at the room; what’s [in there?] to be a mind?

With what I’ve added above, is that a correct representation of your post?
 
Actually, I get that part.

I just didn’t understand that your explication omitted a few words, and perhaps some punctuation:
…]
With what I’ve added above, is that a correct representation of your post?
It’s quite possible I made a number of mistakes when retyping the paragraphs. I did a word-by-word comparison of the part you quoted above against what’s in the book. While I acknowledge it is still possible that mistakes exist within the following, it is otherwise a word-for-word representation of the authors’ words.

To reiterate, the aim of the Chinese Room argument is to refute strong AI - the claim that running the right sort of program necessarily results in a mind. It does this by exhibiting an apparently intelligent system running the right sort of program that is, according to Searle, demonstrably not a mind. Searle appeals to intuition, not proof, for this part: just look at the room; what’s there to be a mind?

If you are asking whether the representation of the paragraph that you posted (with the sentences in smaller segments so as not to be run-ons) is an easier-to-read but equivalent representation of the authors’ words, I would say that it is.
 
 
Suppose you have a telephone conversation with “someone”. You don’t know if the other party is a human or a sophisticated AI. The question is:

How do you know if the other party UNDERSTOOD your questions and propositions? In other words, how does UNDERSTANDING manifest itself?
I want to emphasize, Solmyr, that there are problems with the phraseology/semantics involved.
Aside from that, I would like to know if you are asking,
How does humanity manifest and become known inside a relational meeting?
 
A good part of the Turing test. 🙂

What is the difference between “real” and “simulated” understanding? How can you tell the difference?

Let’s take another example:
  1. the original version of a medication and
  2. the generic version of it.
They are both made of the same ingredients; therefore they are both equally effective. The pills look and perform identically, even though they are produced by different companies. The question is again: “how do you tell the difference”? and “why on Earth would you care”?
I should care because a machine has no soul. No feelings. Can’t be hurt.

-However, now thinking about it, a machine might be programmed to recognize uncaring attitudes and respond with calamity. Might be better to err on the side of caution.

But you are right. Given a machine of exceptional speed and prodigious programming (by an understanding programmer or programmers 😉 ) one could not tell the machine from the human, IMHO.

But, jes’ saying, if you want to check the programming, then cross-referencing the computer’s simulated “human goals” with its “understanding”, to show a more human understanding, would seem to be a good way to start.

peace
steve
 