> The Turing test is deficient, as shown by Searle in his famous Chinese Room Argument.
> plato.stanford.edu/entries/chinese-room/
> youtube.com/watch?v=MWhZ-DpvB0M

Clever!
> The Turing test is deficient, as shown by Searle in his famous Chinese Room Argument.
> plato.stanford.edu/entries/chinese-room/

The Chinese Room argument has flaws.
The Turing test is deficient, as shown by Searle in his famous Chinese Room Argument.
plato.stanford.edu/entries/chinese-room/
"Artifical Intelligence, a Modern Approach" (Second Edition)The Chinese Room argument has flaws.
> To reiterate, the aim of the Chinese argument is to refute strong AI - the claim that running the right sort of program necessarily results in a mind. It does this be exhibiting an apparently intelligent system running the right sort of program that is, according to Searle, demonstrably not a mind. Searle appeals to intuition, not proof, for this part: just look at the room; what's thre to be a mind?

The above is incoherent. Are there some parts that were deleted?
I guess we have a misunderstanding. I meant, we do not need to explore those words **at this moment.**

I am so glad you said the bold. When I read some big words I often say, why?
> The word "God" alone should be able to be reasonably "understood" fairly quickly to have all the fancy words someone can think up to mean one "being" was, is, and will be, and that "being" is the ultimate source of everything anyone can think up.

It is never simple. The word "god" or "God" can have many meanings. Even in Christian theology there are (at least) two approaches. One school uses the "positive" method, declaring the alleged attributes of God. The other school uses "negative" theology, talking about what God is NOT.
> Understanding what happened is not necessary to accomplish the goal, which is to move the case along.

Very true, but that is not the topic of this thread. To find out "WHY" someone has performed an act is a job for psychiatrists. I am only interested in understanding what a person asserts, not why he asserts it. The judges, prosecutors, and investigators are only interested in WHAT the accused says, not WHY he says it. It is so easy to misunderstand the topic… that is why we need to declare what we are talking about.
I think if you asked prosecutors, investigators, and judges why someone kills, they might shrug in the same way you and I might say ‘how could someone do that horrible thing!’.
> The Chinese Room argument has flaws.

Many flaws. No one tries to assert that the individual components (be they the people in the Chinese room, the processors in a computer, or the neurons in the brain) "understand" the question. That would be straight reductionism, and that is only asserted by those who are ignorant in the extreme. Those people have probably never heard of distributed processing, and if they have, they don't understand it. Not even a simple thing like pleasure or pain can be "reduced" to individual neurons.
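To make the distributed-processing point concrete, here is a minimal, purely illustrative sketch in Python of a Chinese-Room-style responder; the rule table, the names, and the canned replies are all invented for this post, not taken from the book or the thread. Plausible-looking answers come out of mechanical rule lookup, and no individual component (the table, the matcher, the loop) can sensibly be said to "understand" anything.

```python
# Illustrative sketch only: a rule-following responder in the spirit of the
# Chinese Room. No single component here "understands" the conversation;
# replies are produced by mechanical pattern lookup.

RULES = {
    # pattern fragment -> canned reply (the "rule book" in the room)
    "how are you": "I am well, thank you. And you?",
    "weather": "It has been pleasant lately, has it not?",
    "your name": "My name is of no importance.",
}

DEFAULT_REPLY = "Please tell me more."


def match_rule(message: str) -> str:
    """Mechanically scan the rule table; no meaning is represented anywhere."""
    text = message.lower()
    for fragment, reply in RULES.items():
        if fragment in text:
            return reply
    return DEFAULT_REPLY


def converse(messages: list[str]) -> list[str]:
    """Apply the rule lookup to each incoming message in turn."""
    return [match_rule(m) for m in messages]


if __name__ == "__main__":
    for line in converse(["How are you today?", "What do you think of the weather?"]):
        print(line)
```

Whether the system as a whole understands is exactly what the thread is arguing about; the sketch only shows that none of the individual parts needs to.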
> The above is incoherent. Are there some parts that were deleted?

I omitted the explanation of what the Chinese room experiment is, which appeared before the above section. I always mark where text has been omitted with …]. I figure those that are not familiar with it can follow the link you posted. I reread what I posted and what was in the book without seeing that there's anything else omitted from that point on.
> I omitted the explanation of what the Chinese room experiment is, which appeared before the above section. I always mark where text has been omitted with …]. I figure those that are not familiar with it can follow the link you posted. I reread what I posted and what was in the book without seeing that there's anything else omitted from that point on.
> Since in this forum when people use the term AI they usually seem to be referring to what is known as "Strong AI," I didn't bother to define the term "Strong AI." But for what it's worth:
> "…] the assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulated thinking) is called the strong AI hypothesis.
> …] Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis - as long as their program works, they don't care whether you call it a simulation of intelligence or real intelligence. …] Arguments for and against strong AI are inconclusive. Few mainstream AI researchers believe that anything significant hinges on the outcome of the debate."

A very good summary to show how useless the navel-gazing of the philosophers is when compared to the actual work of the AI researchers. It reminds me of the old joke, which goes:
- Those who know it, do it.
- Those who cannot do it, teach it.
- Those who cannot even teach it, manage it.
- Those who are unable to even manage it, philosophize about it…

The basic question is, of course, what is the difference between a "simulation" and the "real McCoy", if one cannot find out which is which.

Using a simple analogy, if one has two armchairs, one covered in "real" leather and the other in "naugahyde", and there is no method to find out which is which… then who the heck cares?

> A very good summary to show how useless the navel-gazing of the philosophers is when compared to the actual work of the AI researchers. It reminds me of the old joke, which goes: …]

The above philosophical treatise is a great example of why philosophy is so important.
> Suppose you have a telephone conversation with "someone". You don't know if the other party is a human or a sophisticated AI. The question is:
> How do you know if the other party UNDERSTOOD your questions and propositions? In other words, how does UNDERSTANDING manifest itself?

I think that besides getting feedback as to what their understanding is, in their own words, one should ask how they think that understanding fits in with their goals, if it does, and how it does not, if it doesn't.
> I think that besides getting feedback as to what their understanding is, in their own words, one should ask how they think that understanding fits in with their goals, if it does, and how it does not, if it doesn't.

A good part of the Turing test.
> Computers may simulate understanding, but I doubt they can simulate having human goals, and, even if lying about goals as well as understanding, cross-link the two lies ad hoc.

What is the difference between "real" and "simulated" understanding? How can you tell the difference?
> A very good summary to show how useless the navel-gazing of the philosophers is when compared to the actual work of the AI researchers.

I've got a hunch that the authors of this book have a view that is compatible with your own. There's a chapter on Philosophy; it's the very last chapter in the book. If I had to choose one sentence that I think expresses the authors' view on philosophers, it would be the following.
> The pills look and perform identically, even though they are produced by different companies. The question is again: "how do you tell the difference?" and "why on Earth would you care?"

It sounds like one would be said to have all the virtues of the other, or, to use other language, they would be virtually the same. This concept is found throughout computer science, not just AI. One often encounters references to "virtual machines", which could be a real physical machine or a logical equivalent written in software (an emulator). A software developer can just develop for the virtual platform without concern for which implementation of a machine will be running it.
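As a rough illustration of developing for the virtual platform (a sketch with invented names, not taken from the thread or from any particular virtual machine), the client code below is written against an abstract interface and cannot tell whether it has been handed the "real" implementation or the software stand-in, which is the leather-versus-naugahyde situation again.

```python
# Hypothetical sketch: client code targets an abstract "machine" and cannot
# tell (and need not care) whether the implementation is "real" or emulated.
from abc import ABC, abstractmethod


class Machine(ABC):
    """The virtual platform the developer writes against."""

    @abstractmethod
    def add(self, a: int, b: int) -> int:
        ...


class HardwareMachine(Machine):
    """Stand-in for the 'real' implementation."""

    def add(self, a: int, b: int) -> int:
        return a + b


class EmulatedMachine(Machine):
    """A logical equivalent written in software."""

    def add(self, a: int, b: int) -> int:
        # Deliberately roundabout, but observably identical behaviour.
        result = a
        step = 1 if b >= 0 else -1
        for _ in range(abs(b)):
            result += step
        return result


def client_code(machine: Machine) -> int:
    """Written once against the interface; works unchanged on either backend."""
    return machine.add(40, 2)


if __name__ == "__main__":
    print(client_code(HardwareMachine()))  # 42
    print(client_code(EmulatedMachine()))  # 42, indistinguishable from outside
```

The design point is the usual one: as long as the two backends behave identically at the interface, nothing the client can do will distinguish them.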
> I omitted the explanation of what the Chinese room experiment is, which appeared before the above section. I always mark where text has been omitted with …]. I figure those that are not familiar with it can follow the link you posted. I reread what I posted and what was in the book without seeing that there's anything else omitted from that point on.
> …]

Actually, I get that part.
> Actually, I get that part.
> I just didn't understand that your explication omitted a few words, and perhaps some punctuation:
> …]
> As I add the above, is that a correct representation of your post?
It's quite possible I made a number of mistakes when retyping the paragraphs. I did a word by word comparison of the part you quoted above against what's in the book. While I acknowledge it is still possible that there are mistakes within the following, this is otherwise a word for word representation of the authors' words:
"To reiterate, the aim of the Chinese argument is to refute strong AI - the claim that running the right sort of program necessarily results in a mind. It does this by exhibiting an apparently intelligent system running the right sort of program that is, according to Searle, demonstrably not a mind. Searle appeals to intuition, not proof, for this part: just look at the room; what's there to be a mind?"
If you are asking whether the representation of the paragraph that you posted (with the sentences broken into smaller segments so as not to be run-ons) is an easier-to-read but equivalent rendering of the authors' words, I would say that it is.
> Suppose you have a telephone conversation with "someone". You don't know if the other party is a human or a sophisticated AI. The question is:
> How do you know if the other party UNDERSTOOD your questions and propositions? In other words, how does UNDERSTANDING manifest itself?

I want to emphasize, Solmyr, that there are problems with the phraseology/semantics involved.
> A good part of the Turing test.
> What is the difference between "real" and "simulated" understanding? How can you tell the difference?

I should care because a machine has no soul. No feelings. Can't be hurt.
Let's take another example:
- the original version of a medication and
- the generic version of it.

They are both made of the same ingredients; therefore they are both equally effective. The pills look and perform identically, even though they are produced by different companies. The question is again: "how do you tell the difference?" and "why on Earth would you care?"