Human-like AI is impossible. Is my answer sufficient?

Thread starter: philosopher4hire
I think the feasibility (or not) of creating an artificial thinking being (an artificial human-like intelligence) is an important question in the discussion between (Catholic) Christians and the majority of modern ‘science-believers’ – materialists, in fact. If it is feasible, then the materialists are right that matter is everything. But if not, it would put them in an uncomfortable position. Many of them still think that human-like (or even better) AI is inevitable and “just around the corner”.

Will you be so kind as to check if my answer to this question is understandable and convincing? It is 4 pages long (ca. 20 min of reading). You can find it at:


And here is an excerpt to let you know what it is like:

Let’s start with the basics that are fundamental here: what a computer is and how it works. As an example, we will use a toy for 4-year-olds. It is a cuboid with a (partly) transparent casing. It has ‘drawers’ on the sides and a hole for balls on the top. Depending on which drawers are pulled out and which are not, a ball entered at the top travels inside the toy in various ways, going out through one of several holes located at the bottom. For a 4-year-old it is great fun – watching the course of the ball change depending on the setting of the drawers (switches). For us, it is an ideal example of how a processor (computer) works. That is, in fact, how every CPU works. The processor is our cuboid; the balls are electrical impulses ‘running into’ it through some pins and leaving it through others – quite like our balls, thrown in through one hole to fall out through another. The transistors the processor is built of serve as the drawers (switches) that can be in or out (i.e., switched to different states) in order to change the course of the electrical impulse (our ball) inside the processor.

So the processor (as to its principle of operation) is nothing more than a simple toy for 4-year-olds. It is just that we throw in not one ball at a time but several dozen, we repeat this action billions of times per second, and we have not four or six drawers but a few billion. Does anyone sane really believe that if we put billions of balls into a plastic cuboid with billions of drawers, then at some moment in time this cuboid? these balls? one plus the other? or perhaps the mere movement of these balls will become consciousness? And that it will want to watch the sunset or talk about Shakespeare’s poetry? If so, then self-consciousness should be expected from the planet Earth or the oceans.
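To make the excerpt’s analogy concrete, here is a minimal Python sketch (a hypothetical illustration, not part of the original essay): the drawers are boolean switches, and the switch settings alone determine which bottom hole the ball leaves through.

    # Hypothetical model of the children's toy: each 'drawer' is a switch,
    # and the settings fully determine the ball's path - no step involves
    # anything beyond mechanical routing.
    def route_ball(drawers, entry=0):
        position = entry
        for pulled_out in drawers:
            if pulled_out:
                position += 1  # a pulled-out drawer deflects the ball sideways
            # a pushed-in drawer lets the ball drop straight down
        return position  # index of the exit hole at the bottom

    print(route_ball([True, False, True]))    # ball exits through hole 2
    print(route_ball([False, False, False]))  # ball exits through hole 0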
 
I must admit from the beginning that I did not follow the link and read the entire argument. I felt that it might be redundant when I saw:
  • Oversimplification of how a CPU actually works
  • Appeal to Ridicule
  • False Equivalence
All contained in the short sample.

To be fair, I am not convinced that human-level AI is feasible in the near future, or even possible. But I don’t think this argument is the clincher.
 
Just a few observations. Intelligence is simply a higher level of information processing. The information-processing medium is irrelevant: it can be organic or inorganic – the performance at manipulating information is the only thing that matters.

As a practical example, consider Watson, IBM’s machine, which beat the best human participants in Jeopardy “hands down”. It was successfully processing information on a very high level. The clues on Jeopardy are intentionally misleading; the player has to see “through” the obfuscation to understand the actual clue. Watson did it much better than the truly brilliant human players could.

These days Watson has “graduated” from playing games, and it is an incredibly successful medical tool, a diagnostician. It can help hundreds or even thousands of physicians simultaneously. You may say that it is just an “idiot savant”: it can perform games and/or medical diagnosis at some previously “impossible” level, but that is “all”. So what? It can learn to do other information-processing tasks. And when I say “learn”, I refer to AlphaZero, “who” learned to play chess in a few hours, without any special help… simply by playing against itself.
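For readers curious what “learning by playing against itself” can look like in the simplest case, here is a toy Python sketch (an assumed example; AlphaZero’s actual machinery of tree search plus neural networks is far more elaborate): a program that learns the trivial game of Nim purely from the outcomes of its own games.

    # Toy self-play learner for Nim: 7 stones, take 1-3 per turn,
    # whoever takes the last stone wins. Illustrative only.
    import random

    values = {}  # stones_left -> estimated win chance for the player to move

    def opponent_value(stones):
        return 0.0 if stones == 0 else values.get(stones, 0.5)

    def choose(stones, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < explore:
            return random.choice(moves)  # occasional exploration
        # leave the opponent in the worst-looking position
        return min(moves, key=lambda m: opponent_value(stones - m))

    for _ in range(20000):
        stones, history = 7, []
        while stones > 0:
            history.append(stones)
            stones -= choose(stones)
        for i, s in enumerate(reversed(history)):
            won = (i % 2 == 0)  # the last mover took the last stone and won
            values[s] = values.get(s, 0.5) + 0.1 * (won - values.get(s, 0.5))

    print({s: round(v, 2) for s, v in sorted(values.items())})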

So we can be sure that information processing can and does happen in non-carbon-based systems, which we call “artificial” intelligence. The word “artificial” suggests some “lower level” of ability, as compared to “real” intelligence. And that condescension is uncalled for.
 
While all this is true, it isn’t really where the problem with AI lies. The problem with AI - and any purely mechanistic system - is that it lacks true intentionality and only has derived intentionality.
 
While all this is true, it isn’t really where the problem with AI lies. The problem with AI - and any purely mechanistic system - is that it lacks true intentionality and only has derived intentionality.
What is “true” intentionality, as opposed to emulated “intentionality”? This is a subset of the highly generic philosophical question: “Can you find out the difference between the ‘real McCoy’ and a very good approximation / emulation of it?” If there is no way to differentiate between the “original” and the “copy”, then the differentiation between them is meaningless.

The only way to discern the “intentions” is by observation of the actions. You cannot “dig” into the other being’s decision-making process and see if it is “natural” or “artificial”. Just like the Turing test. And the solution is the duck principle: “If it looks like a duck, quacks like a duck, walks like a duck, tastes like a duck, then the only rational conclusion is that it IS a duck.”
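To illustrate (a hypothetical Python sketch with made-up class names): an observer restricted to behaviour cannot tell two mechanisms apart when their outputs coincide.

    # Two different mechanisms, one observable behaviour. The observer's
    # entire toolkit is the quack() call; internals are out of reach.
    class Duck:
        def quack(self):
            return "quack"

    class RoboDuck:
        def quack(self):
            return "quack"  # produced by a different mechanism entirely

    def observe(d):
        return d.quack()    # observation of actions is all we have

    print(observe(Duck()) == observe(RoboDuck()))  # True: indistinguishable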
 
While all this is true, it isn’t really where the problem with AI lies. The problem with AI - and any purely mechanistic system, would be that it lacks true intentionality, and only has derived intentionality.
To expand upon Economist’s post, how do I know that your actions are due to true intentionality?

In fact, how can I even be certain that my actions are due to true intentionality?
 
Will you be so kind as to check if my answer to this question is understandable and convincing?
Like others before me, I must admit that I didn’t read your entire answer, because the excerpt was enough to demonstrate that you haven’t thought the problem through adequately.

For example, a quantum computer would operate under a completely different process than the linear/deterministic one that you’ve outlined in the excerpt, so your answer simply wouldn’t apply.
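To give a flavour of the difference, here is a minimal pure-Python sketch (an assumed illustration, not any real quantum API): one qubit put into superposition by a Hadamard gate, where the machine manipulates amplitudes rather than definite switch states.

    # One qubit as a pair of amplitudes; a Hadamard gate turns the
    # definite state |0> into an equal superposition of |0> and |1>.
    from math import sqrt

    state = [1.0, 0.0]          # amplitudes for |0> and |1>
    h = 1 / sqrt(2)
    H = [[h, h],
         [h, -h]]               # Hadamard gate

    state = [H[0][0] * state[0] + H[0][1] * state[1],
             H[1][0] * state[0] + H[1][1] * state[1]]

    # Measurement probabilities: neither 'drawer in' nor 'drawer out',
    # but a weighted blend of both until measured.
    print([round(abs(a) ** 2, 2) for a in state])  # [0.5, 0.5]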

Thus there was no need to read any further.
 
Just like the Turing test.
Before I get to my main point, I’d like to point out that the Turing Test is a terrible measure for artificial intelligence - John Searle’s Chinese Room thought experiment illustrates why.
The only way to discern the “intentions” is by observation of the actions.
That’s not what I mean when I say intentionality. Intentionality refers to the ‘aboutness’ of thoughts. Our thoughts are physical events in our brains, but they are about something above and beyond the physical parts that make them up. A common example is words. Take the word ‘cat’, for instance. There is nothing inherent to the physical shapes of the letters that has anything to do with the feline animal they refer to - the meaning - the intentionality - is derived from us when we give the word meaning. Or look at an abacus. We can use it to do simple math, but ultimately it’s just beads on a wooden dowel. The meaning is derived from us.

Advanced computers like Watson may appear to have intentionality, but that doesn’t mean they do. The reason they can’t is because naturalism forbids it - according to naturalism there is nothing beyond the physical facts. There is no meaning in a computer beyond a series of open and closed circuits. There is no meaning to our thoughts beyond a configuration of neurons.
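As a hedged illustration of this point (an assumed Python sketch, not from the original post): seen from the machine’s side, the word ‘cat’ is nothing but numbers.

    # To a computer, 'cat' is only a sequence of code points.
    word = "cat"
    print([ord(ch) for ch in word])  # [99, 97, 116] - just integers
    # Nothing in 99, 97, 116 points to a feline; the link between these
    # numbers and the animal exists only for the people who designed
    # and read the encoding.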
For example, a quantum computer would operate under a completely different process than the linear/deterministic one that you’ve outlined in the excerpt, so your answer simply wouldn’t apply
Quantum events aren’t really much different, as they equally lack the intentionality that I explained above. There’s no meaning, no aboutness, to a quantum event.
 
Quantum events aren’t really much different, as they equally lack the intentionality that I explained above. There’s no meaning, no aboutness, to a quantum event.
But when you consider the “concept” of a cat, how do we know that that concept isn’t simply a configuration of neurons?

Or are we to assume that what you’re really referring to is consciousness?
 
Take the word ‘cat’, for instance. There is nothing inherent to the physical shapes of the letters that has anything to do with the feline animal they refer to - the meaning - the intentionality - is derived from us when we give the word meaning.
When you form the concept of a cat, or an argument about the feasibility of a conscious AI, there are a vast number of other corollary concepts that are embedded within the overarching concept. So your concepts are actually an amalgamation of concepts. You understand intuitively the relationship between the concept of a cat, and the concept of the whole, including yourself.

So the question is, is it possible for an AI to do that… to understand the relationship between one thing and everything else? And perhaps most importantly, can it understand the concept of itself? Or understand the concept of anything at all, for that matter?

To answer that question we should ask ourselves how it is that we’re able to understand concepts.

How is it that we understand concepts?
 
But when you consider the “concept” of a cat, how do we know that that concept isn’t simply a configuration of neurons?

Or are we to assume that what you’re really referring to is consciousness?
No, I’m referring to intentionality, which is defined as “the quality of mental states (e.g., thoughts, beliefs, desires, hopes) that consists in their being directed toward some object or state of affairs.” It’s closely related to consciousness, but not one and the same.
Saying that the concept of cat is nothing more than a configuration of neurons is Eliminative Materialism, which doesn’t explain intentionality, it explains it away. It’s the same as saying that our thoughts aren’t ‘about’ anything. That’s because a configuration of neurons, no matter how complex, has no meaning beyond its physical facts. It has no propositional content. But to say that our thoughts have no meaning - have no content - is absurd.
When you form the concept of a cat, or an argument about the feasibility of a conscious AI, there are a vast number of other corollary concepts that are embedded within the overarching concept. So your concepts are actually an amalgamation of concepts. You understand intuitively the relationship between the concept of a cat, and the concept of the whole, including yourself.
But the problem doesn’t lie in things like ‘concepts’. The problem lies in how a system made up of strictly physical objects - physical objects that lack ‘aboutness’ - can ever exhibit true intentionality. AI like Watson may be very useful, self-learning algorithms may be useful, but the system itself has no inherent meaning until it is viewed by someone who already has intentionality - just as an abacus has no meaningful calculations until utilized by a person for that purpose.

Computers nowadays may be able to create a convincing simulation of intelligence - perhaps one good enough to fool a human. That does not mean they have anything more than derived intentionality.
 
Computers nowadays may be able to create a convincing simulation of intelligence - perhaps one good enough to fool a human.
As long as there is no way to separate “real” anything from “simulated” anything, the question of differentiation between them is meaningless.

How do you know that someone is “truly” sad, or merely simulating sadness?
 
Well, in a sense you’re right. It could definitely be possible to create a simulation so convincing that people would be fooled. Heck, how do we even know that the people we meet day-to-day are actually experiencing the emotions and having the thoughts we think they are? You can only infer based upon the fact that you as an individual experience qualia and intentionality. So then the question becomes: if we can experience this, why not computers?
The answer is because a purely naturalistic system cannot produce intentionality - and that’s all a computer is, a naturalistic system. Ultimately nothing more than a physical system of open and closed switches. Naturalism positively excludes intentionality. There is no final causality, no goal directedness, no teleology. Appealing to quantum mechanics won’t help either, because quantum physics equally lacks intentionality. Ultimately, you can either say that naturalism is true (and there’s no such thing as intentionality) or say that we do exhibit intentionality, in which case naturalism is false.

You can know the difference between a real intentionality and a ‘simulated’ one by the fact that any physical, naturalistic system simply cannot, by definition, exhibit true intentionality.
 
Okay, but if we can exhibit true intentionality, and as far as we can prove everything about our minds is mediated by physical neural connections, why is it a priori impossible that a different kind of physical system could come to possess intentionality? Basically, if our intentionality disproves pure naturalism, then you can’t insist that a computer is only naturalistic either. You have already discarded the premise of naturalism.

Searle’s Chinese room has its own holes, by the way. Because there’s a human in the setup, we assume there’s no real understanding of Chinese present because the human doesn’t understand Chinese. But the human isn’t the system under study, it’s just a processing unit. The question is whether the room as a whole can be said to understand Chinese, just as a person might even though no individual part of his brain understands Chinese by itself.
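For concreteness, here is a minimal Python sketch of the room’s mechanics (the one-entry rule book is hypothetical, purely for illustration): every step is shape-matching, and the question is whether the system as a whole, not any lookup step, understands.

    # The 'rule book': incoming symbol strings mapped to outgoing ones.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢.",  # "How are you?" -> "I am fine, thanks."
    }

    def chinese_room(incoming: str) -> str:
        # Pure shape-matching: the lookup works identically whether or not
        # anyone inside understands Chinese.
        return RULE_BOOK.get(incoming, "请再说一遍.")  # "Please say it again."

    print(chinese_room("你好吗?"))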
 
I always took the Chinese Room to be illustrating how the mere appearance of intentionality or intelligence does not necessarily mean that there is any. In the experiment, the ‘intelligence’ of the room is derived - either from the person receiving the written pages, or from the authors of the books inside the room. Either way the room itself (the person trapped in the room and the physical structure) has no true understanding of what is being written, just as a computer program that could simulate intelligence well enough to fool a person still wouldn’t understand its output. Its intelligence comes from the programmers, and its meaning comes from the person interpreting its output.
To say that the room as a whole can understand something is, I think, to anthropomorphize the room too much. Surely you’d agree that even if the person, books, and room were said to be a whole, then they are a whole in a different way than the brain is.
and as far as we can prove everything about our minds is mediated by physical neural connections
There’s the rub though - not everyone is convinced that the mind is merely neural connections. The fact that our minds exhibit intentionality, while physical matter under a naturalistic definition does not, is strong reason to doubt that our minds are merely neural connections.
You do however raise an interesting point. If our minds are at least partially tied to our brains, who’s to say that a computer couldn’t eventually do the same? I think it depends. Under Cartesian dualism, where the ‘intentionality’ is applied to the material from some source outside of nature, the computer would need this ‘outside source’ to give it a ‘soul’, so to speak. Under hylomorphic dualism, where intentionality is a result of matter and form, this would merely be a human mind.
 
The answer is because a purely naturalistic system cannot produce intentionality - and that’s all a computer is, a naturalistic system.
This proposition surely could use some evidence. Can you even define what “intentionality” IS, and how does it manifest itself? And how does it happen?
Either way the room itself (the person trapped in the room and the physical structure) has no true understanding of what is being written, just as a computer program that could simulate intelligence well enough to fool a person still wouldn’t understand its output. Its intelligence comes from the programmers, and its meaning comes from the person interpreting its output.
The same applies to humans. How do you know if your conversation partner has “true” understanding of the subject under deliberation, or merely “emulates” understanding?

This is the crucial question and the answer is really simple, just like the duck principle. Keep on conducting the conversation, put in “twists and turns”, try to make it more complicated… set up “traps”, try to make it as confusing as you can… and as long as your partner “looks like as if she would understand”… the only rational conclusion is that she DOES understand.

Of course this method will exclude many “real” humans, whose understanding is lower, who might not really understand your subterfuge.
There’s the rub though - not everyone is convinced that the mind is merely neural connections. The fact that our minds exhibit intentionality, while physical matter under a naturalistic definition does not, is strong reason to doubt that our minds are merely neural connections.
Hah! Present a different hypothesis, try to support it with experiments - perform the much-maligned scientific method, and at the end we can draw our own conclusions. To use another familiar principle: “the proof of the pudding is that it is edible.” Without a different hypothesis all you have are empty propositions.
 
The same applies to humans. How do you know if your conversation partner has “true” understanding of the subject under deliberation, or merely “emulates” understanding?

This is the crucial question and the answer is really simple, just like the duck principle. Keep on conducting the conversation, put in “twists and turns”, try to make it more complicated… set up “traps”, try to make it as confusing as you can… and as long as your partner “looks like as if she would understand”… the only rational conclusion is that she DOES understand.

Of course this method will exclude many “real” humans, whose understanding is lower, who might not really understand your subterfuge.
The Chinese Room shows that it’s possible for a convincing simulation to appear to us as having understanding while in reality having none. Your duck principle isn’t a simple answer, it’s a bad one, since it would be very unreliable: it might suggest that there is no understanding when there is, and it might suggest that there is understanding where there isn’t. The whole point of the experiment is to show that even if something looks like a duck and acts like a duck, that doesn’t necessarily mean it’s a duck.
This proposition surely could use some evidence. Can you even define what “intentionality” IS, and how does it manifest itself? And how does it happen?
I did. Intentionality is the ‘aboutness’ of thoughts. It’s “the quality of mental states (e.g., thoughts, beliefs, desires, hopes) that consists in their being directed toward some object or state of affairs.”
Again, look at the word ‘cat’. There is nothing in the shapes of those letters that actually suggests a feline animal. Taken on their own, they have no more meaning than ‘dog’ or ‘xjf’ or ‘xiofd’. Yet the word has meaning, a meaning that is above and beyond the shape of the letters and the ink used to write it - i.e., beyond the physical facts. Similarly, our thoughts involve physical events in our brains - the firing of neurons. But the neurons themselves have no meaning - just like the nonsense words above, or a random atom, they have no meaning beyond themselves. Yet our thoughts do have meaning - they are about things - and this aboutness cannot be determined by measuring the mere physical facts of the system.

Now, it would take quite a lot to discuss exactly where intentionality comes from - different philosophies of mind have different explanations. I’m partial to hylomorphic dualism, so I would say that final causality is an inherent part of an object’s matter and form.
Hah! Present a different hypothesis, try to support it with experiments - perform the much-maligned scientific method, and at the end we can draw our own conclusions.
You’re implying that the scientific method is the only way to have grounds for belief. I don’t want this thread to devolve to arguing scientism, but suffice to say that I view it as question begging, so I don’t consider it a good objection.
 
Present AI is animal-like: it can figure out how to accomplish a task someone else gives it, but it can’t create an objective for itself. It is limited by its programming/instincts. Humans can create goals for themselves, as an exercise of free will, an aspect of the soul.

That said, the dogmatic teaching about the human soul is that it animates the body, is uniquely associated with its body, with which it will be united for eternity at the Resurrection of the Dead, and consists of intellect and will. The means by which the soul interacts with matter is not dogmatically defined beyond this, nor does God refuse to create a soul for humans conceived by sinful means. Considering these facts, as well as the prophecy that the False Prophet will give life to the Image of the Beast, I will not presume how God will react to an attempt to create a human-like AI, whether He would create a spirit for it or not. I do, however, condemn such an act as a sin akin to the Tower of Babel.
 
We still don’t know what a thought is, where it comes from, how we are able to process thoughts, and so on. Until then we cannot build a machine that is intelligent.
 
There is more than one way to skin a cat. Honestly, if we can make a program aware of itself, then we already have.
 