Robot At SXSW Says She Wants To Destroy Humans | The Pulse | CNBC

  • Thread starter: IWantGod
These AIs operate faster than we do and self-learn, and yet they are not self-conscious.
How do you know if your conversation partner (who could be a human or an AI) is self-conscious or not? Describe the epistemological method to make this distinction. Because I have no such method. Therefore I paraphrase what Forrest Gump said: “self-conscious is as self-conscious does”. Besides, what IS self-consciousness? The ability to distinguish “me” from “you”. How does it manifest itself? By using the personal pronouns “I” and “you” correctly. No big deal. 🤷

We will have to see if machines are capable of creating metaphors. As it stands now there doesn’t appear to be any way they could, and “someday you’ll see” isn’t an argument of any substance.
There are many people who are unable to write poems or create metaphors. What about them? Besides, what IS a metaphor? Something that resembles certain features of the original. In other words, a “crude replica”. What is so special about it? By the way, frequently used metaphors lose their “appeal”. If too many people start to use “sunrise” (for example, as a metaphor for something new), it will become a boring commonplace and not a “poetic tour de force”.
Performing material processes faster than a human brain can, as Watson does, demonstrates nothing more than a forklift picking up more weight than a human body can.
Well, it is not that simple. The clues in Jeopardy are intentionally misleading and ambiguous. And yet, Watson was better at seeing through the misdirection than the BEST human champions, and by a substantial margin. And those human champions are extremely capable. Such an ability is much more significant than lifting a heavier weight. The difference is not merely quantitative, but qualitative.
Furthermore, imitation that can’t be told apart from the real thing is certainly still imitation by definition. In such cases the discernment of the observer is just as much a factor as the capability of the observed subject; my inability to perceive the difference between the real and the fake does not change the nature of the fake, only my response towards it.
But we are not talking about just one observer. Any good sleight-of-hand magician can fool an average audience. We are talking about the case where there is no way to find out which is the “real thing” and which is the “imitation”. An example: today we already have 3-D copy machines, albeit just simple ones. Let’s consider a perfect copy machine, which scans the original atom by atom and places an identical atom into the corresponding position. Atoms have no “ID” tags on them; each carbon atom (for example) is interchangeable with any other.

So let’s place the Mona Lisa into this copy machine and press the start button. In the output tray we shall get a perfect replica of the original. You could say that the “original” was touched by da Vinci’s hand, but the replica was not… BUT there is no way to find out which one is which. So the question of “which one is the original” cannot be answered (in principle!), and as such it is a meaningless question.

The same is applicable to any imitation.
An A.I. that appears human to 99% of human observers might require a change in how we act towards it, if only for the sake of playing it safe, but our perception of this A.I. and our behavior towards it are not what would define it as alive, any more than your perception of me makes me what I am.
But the point is more complicated. What is “alive” and what is inanimate? An AI might not be biologically “alive”, but it can be intellectually alive. Suppose that a very sophisticated computer exhibits all the signs of thinking: it can conduct a long conversation about a variety of subjects. It can even create poems and/or metaphors. Or it can create new chess problems or discover new laws of nature. However, its creators “forgot” to include a battery-backup power source, and if someone were to unplug it from the power outlet, it would be permanently deactivated… or should we call it “dead”?
 
…An AI might not be biologically “alive”, but it can be intellectually alive. Suppose that a very sophisticated computer exhibits all the signs of thinking: it can conduct a long conversation about a variety of subjects. It can even create poems and/or metaphors. Or it can create new chess problems or discover new laws of nature. However, its creators “forgot” to include a battery-backup power source, and if someone were to unplug it from the power outlet, it would be permanently deactivated… or should we call it “dead”?
An AI cannot be intellectually alive because it lacks insight, intuition, emotion, discernment, self-control, moral responsibility, social activity and personal experience.

“The heart has its reasons that reason does not know.” - Pascal

Life consists of far more than logic - and we are not merely biological machines…
 
Robot At SXSW Says She Wants To Destroy Humans | The Pulse | CNBC

Robotics is finally reaching the mainstream, and androids - humanlike robots - are everywhere at SXSW. Experts believe humanlike robots are the key to smoothing communication between humans and computers, and to realizing a dream of compassionate robots that help invent the future of life.

youtube.com/watch?v=W0_DPi0PmF0

Humanoid Robot Tells Jokes on GMB! | Good Morning Britain

youtube.com/watch?v=kWlL4KjIP4M

Should we give genuinely sentient robots human rights?
Ask me again when we have some sapient robots.
 
There are many people who are unable to write poems or create metaphors. What about them? Besides, what IS a metaphor? Something that resembles certain features of the original. In other words, a “crude replica”. What is so special about it? By the way, frequently used metaphors lose their “appeal”. If too many people start to use “sunrise” (for example, as a metaphor for something new), it will become a boring commonplace and not a “poetic tour de force”.
First off, the appeal of a metaphor is not the point at all. It is the combining of disparate concepts, abstracted from real experience or ideas, in a novel way that creates a new idea which can be communicated to and understood by other intelligences, that indicates an immaterial mind.

As for those that can’t compose metaphors, the real issue at hand is this combining of disparate, abstracted ideas to form new ones. Metaphors and poetry are simply the most readily available examples of such thought. Allegories of course fit the mold, as do concepts like a “time loop”, in which time (which is experienced linearly) and a circle (which includes the notion of infinite repetition and calling back on itself) are combined to make a new idea that is actually disparate from both component ideas. Machines have not indicated the ability to do such thinking yet, and I don’t believe such thinking falls within the scope of material processes.
Well, it is not that simple. The clues in Jeopardy are intentionally misleading and ambiguous. And yet, Watson was better at seeing through the misdirection than the BEST human champions, and by a substantial margin. And those human champions are extremely capable. Such an ability is much more significant than lifting a heavier weight. The difference is not merely quantitative, but qualitative.
The questions in Jeopardy are difficult, but they involve the simple collation and recall of data, not immaterial thought. The human brain is an incredible piece of biological machinery, but it remains a biological machine. For what it’s worth, a forklift’s function is simple, but its design is incredibly complex, the result of thousands and thousands of years of cumulative design and refinement in many different fields of mathematics and engineering. The fact that it performs a “crude” physical operation doesn’t mean that its design is far less sophisticated than Watson’s.
But we are not talking about just one observer. Any good sleight-of-hand magician can fool an average audience. We are talking about the case where there is no way to find out which is the “real thing” and which is the “imitation”. An example: today we already have 3-D copy machines, albeit just simple ones. Let’s consider a perfect copy machine, which scans the original atom by atom and places an identical atom into the corresponding position. Atoms have no “ID” tags on them; each carbon atom (for example) is interchangeable with any other.
The fact that the analysis of any observer is required to define the subject renders such a test useless in determining the real nature of the subject. A thing is what it is regardless of an observer’s judgement of it; this is basic logic, as the observer must be observing some real thing in order to make a judgement; therefore the judgement does not make the thing.
But the point is more complicated. What is “alive” and what is inanimate? An AI might not be biologically “alive”, but it can be intellectually alive. Suppose that a very sophisticated computer exhibits all the signs of thinking: it can conduct a long conversation about a variety of subjects. It can even create poems and/or metaphors. Or it can create new chess problems or discover new laws of nature. However, its creators “forgot” to include a battery-backup power source, and if someone were to unplug it from the power outlet, it would be permanently deactivated… or should we call it “dead”?
If such a machine were to exist then I would have no problem with the notion that it has an immaterial soul. If it was permanently deactivated then the immaterial soul would be assumed to live on in some state, being immaterial and not subject to breaking down along with the material processes of its body, but this would still fit the state we call “dead”.

This is getting ahead of ourselves, however, and this again falls into the “someday you’ll see” mode of argument. If you’re proposing such things for the sake of a moral question that is one thing, but it doesn’t make for an argument about the actual possibility of such a hypothetical being.
 
First off, the appeal of a metaphor is not the point at all.
Then you should not have brought it up in the first place. 🙂
It is the combining of disparate concepts, abstracted from real experience or ideas, in a novel way that creates a new idea which can be communicated to and understood by other intelligences, that indicates an immaterial mind.
OK. So your actual point is creativity. Observing the actual reality, grasping the important factors, and then generalizing from them. With this process one can reach new ideas. Is this a fair assessment of your analysis? And then you assert that a “machine” cannot perform this act?

Just want to make sure that I understand your position. I would hate to spend a long time on an answer, and then get the “well, this is not the point at all”. Life is too short to spend on misunderstandings. Once you clarify this, I will be in a better position to provide an argument.
 
Then you should not have brought it up in the first place. 🙂
I didn’t. I brought up the ability to create metaphors from abstracted ideas; you brought up the appeal of such metaphors.
OK. So your actual point is creativity. Observing the actual reality, grasping the important factors, and then generalizing from them. With this process one can reach new ideas. Is this a fair assessment of your analysis? And then you assert that a “machine” cannot perform this act?
Just want to make sure that I understand your position. I would hate to spend a long time on an answer, and then get the “well, this is not the point at all”. Life is too short to spend on misunderstandings. Once you clarify this, I will be in a better position to provide an argument.
I think you’re on the right track in understanding what I’m saying, but there might be some aspects that need to be explained further. I wouldn’t say that generalizing is necessarily indicative of immaterial thought, as most generalizations are based on material experience, for example the generalization that cats have whiskers. This is not the same type of thought process as saying “my wife was a cat with a mouse”, or “he was a lion in the face of danger”. I also wouldn’t use the term “creativity” to describe it, since that term is vague and could easily apply to creativity regarding demonstrably material processes, like tool-making or painting pictures.

More to the point, when we experience an object we abstract a lot of qualities from it. When we see a circle, for example, we abstract roundness as a general quality in addition to experiencing the roundness of this particular circle. Applying this general quality of roundness to other shapes is a mental feat, but it doesn’t necessarily indicate that there is anything immaterial going on as we could just be matching different objects from our experience or knowledge that have a curved shape.

Associating roundness with time to form the idea of a time loop, however, at least points to an immaterial mind because the material experience of time and circles have no common ground with which to make a material connection; we aren’t associating the “curve” of a circle with the “curve” we experience in time, as time does not have a physical shape that we can experience, and we do not experience time loops at any rate. Yet when I speak about time loops you can understand what I’m talking about because your mind can abstract “roundness” not only from the physical experience of a circle or other curved shape, but from all material association, despite the fact that “roundness” is an entirely material experience. Your mind can then apply this abstracted concept to things that aren’t in the genus of shape, or even the genus of physical experience, such as applying the concept to the flow of time.

There does not appear to be a material, physical medium for these concepts to interact and form new ideas, and yet we do it casually every day. Computers, and other animals for that matter, collate data and can form generalizations and problem-solve based on this data, and some can even communicate, yet none seem to be capable of this casual crossing-over of disparate concepts to form new ideas. If a computer were able to do such a thing then it would be a starting point for reevaluating the concept of an immaterial mind, or for reevaluating the judgement that computers don’t possess immaterial minds (both avenues would have to be explored), but the Turing Test doesn’t present this type of challenge, and so far computers show no indication of performing this peculiar mental process that humans take for granted.

Peace and God bless!
 
I didn’t. I brought up the ability to create metaphors from abstracted ideas; you brought up the appeal of such metaphors.
Yes, you did. This is exactly what you said in your previous post:
We will have to see if machines are capable of creating metaphors. As it stands now there doesn’t appear to be any way they could, and “someday you’ll see” isn’t an argument of any substance.
Your only argument was that “there doesn’t appear to be any way they could”. Now you changed your mind. Which is fine, but don’t deny what you previously said.
I think you’re on the right track in understanding what I’m saying, but there might be some aspects that need to be explained further. I wouldn’t say that generalizing is necessarily indicative of immaterial thought, as most generalizations are based on material experience, for example the generalization that cats have whiskers. This is not the same type of thought process as saying “my wife was a cat with a mouse”, or “he was a lion in the face of danger”. I also wouldn’t use the term “creativity” to describe it, since that term is vague and could easily apply to creativity regarding demonstrably material processes, like tool-making or painting pictures.

More to the point, when we experience an object we abstract a lot of qualities from it. When we see a circle, for example, we abstract roundness as a general quality in addition to experiencing the roundness of this particular circle. Applying this general quality of roundness to other shapes is a mental feat, but it doesn’t necessarily indicate that there is anything immaterial going on as we could just be matching different objects from our experience or knowledge that have a curved shape.

Associating roundness with time to form the idea of a time loop, however, at least points to an immaterial mind because the material experience of time and circles have no common ground with which to make a material connection; we aren’t associating the “curve” of a circle with the “curve” we experience in time, as time does not have a physical shape that we can experience, and we do not experience time loops at any rate. Yet when I speak about time loops you can understand what I’m talking about because your mind can abstract “roundness” not only from the physical experience of a circle or other curved shape, but from all material association, despite the fact that “roundness” is an entirely material experience. Your mind can then apply this abstracted concept to things that aren’t in the genus of shape, or even the genus of physical experience, such as applying the concept to the flow of time.

There does not appear to be a material, physical medium for these concepts to interact and form new ideas, and yet we do it casually every day. Computers, and other animals for that matter, collate data and can form generalizations and problem-solve based on this data, and some can even communicate, yet none seem to be capable of this casual crossing-over of disparate concepts to form new ideas. If a computer were able to do such a thing then it would be a starting point for reevaluating the concept of an immaterial mind, or for reevaluating the judgement that computers don’t possess immaterial minds (both avenues would have to be explored), but the Turing Test doesn’t present this type of challenge, and so far computers show no indication of performing this peculiar mental process that humans take for granted.
You sure demand a lot of “stuff” from those poor AIs. By the way, your “time-loop” example is totally inappropriate, since time does NOT work in a loop. Your new demands seem to limit “intelligence” to science-fiction writers, poets or inventors. The majority of people would not pass that test: not only toddlers or infants, but the majority of adults as well.

Of course the Turing test was not designed to discover geniuses, poets or inventors. If the test is successful, it simply means that the person who administers the test is unable to decide whether the other party is a human or an AI. And you can be as “tricky” as you want to. Current AIs are able to fool average questioners.
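Just to make the setup concrete, here is a rough sketch in Python of the imitation-game protocol I am describing. The interfaces (a judge with an identify_machine method, respondents with a reply method) are hypothetical stand-ins, not any real chatbot API; the point is only that the outcome depends on the judge’s discernment over a text transcript, nothing more.

```python
import random

def turing_trial(judge, human, machine, questions):
    """One blind trial of the imitation game: the judge questions two hidden
    parties and must guess which one is the machine. Returns True if fooled."""
    # Randomly assign the hidden labels so the judge gets no positional clue.
    parties = {"A": human, "B": machine}
    if random.random() < 0.5:
        parties = {"A": machine, "B": human}

    # The judge sees only text: each question and both labeled answers.
    transcript = [(q, {label: p.reply(q) for label, p in parties.items()})
                  for q in questions]

    guess = judge.identify_machine(transcript)            # returns "A" or "B"
    actual = next(label for label, p in parties.items() if p is machine)
    return guess != actual                                # fooled if the guess is wrong
```

Notice that nothing in this protocol inspects the respondents’ inner workings; it only compares outward verbal behavior, which is exactly what is at issue in this thread.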

But the point is that all thinking is “information processing”. The medium where this processing takes place can be organic (brain) or some artificial environment. You might believe that the “silicon-based” environment is less capable than the organic one. So far your only argument was that it does not “seem likely”. And that is not much of an argument, is it?

So what is your current objection to AI’s?
 
Yes, you did. This is exactly what you said in your previous post:

Your only argument was that “there doesn’t appear to be any way they could”. Now you changed your mind. Which is fine, but don’t deny what you previously said.
What I said has nothing to do with the appeal of a given metaphor, merely the ability to create metaphors. I maintain that A.I.s do not exhibit this ability, nor do I believe they ever will be able to, and I haven’t changed my stance on this at all.
You sure demand a lot of “stuff” from those poor AIs. By the way, your “time-loop” example is totally inappropriate, since time does NOT work in a loop. Your new demands seem to limit “intelligence” to science-fiction writers, poets or inventors. The majority of people would not pass that test: not only toddlers or infants, but the majority of adults as well.
I merely demand that if an A.I. is going to be used to disprove the notion of an immaterial soul then it should be capable of performing tasks that are indicative of an immaterial soul. If an A.I. can’t rise to the challenge then it can’t exactly disprove anything about the human mind now, can it?

As for time not working in a loop, that is precisely my point. We can conceive of an idea that is utterly disparate from our experience of reality, and yet these ideas are communicable and intelligible. If the mind were material there should be no basis for the creation of ideas that are utterly removed from real experience.
Of course the Turing test was not designed to discover geniuses, poets or inventors. If the test is successful, it simply means that the person who administers the test is unable to decide whether the other party is a human or an AI. And you can be as “tricky” as you want to. Current AIs are able to fool average questioners.
And this does nothing to disprove the notion of an immaterial soul, which is my point. The Turing Test does not establish what you claimed it establishes.
But the point is that all thinking is “information processing”.
You are merely reducing thinking to “information processing” without answering the point that I’m making, namely that the human mind apparently does more than simply process information when it combines disparate, abstracted concepts to form new ideas. This is something we do naturally and readily, yet no other material process, including other forms of organic life, appears capable of this. You can claim that it is merely information processing, but it is not apparent how this is so.
The medium where this processing takes place can be organic (brain) or some artificial environment. You might believe that the “silicon-based” environment is less capable than the organic one. So far your only argument was that it does not “seem likely”. And that is not much of an argument, is it?
So what is your current objection to AI’s?
My argument is that A.I.s demonstrably do not perform the tasks that are associated with an immaterial soul. This is indisputable, as there are no A.I.s that are able to abstract information from experience and combine disparate abstracted concepts to create new ideas. This ability not only doesn’t arise from a silicon environment, but it does not seem reasonable that it arises from an organic environment either. It is not a question of the material doing the processing; it is the fact that material processes are not sufficient to explain this observable phenomenon. Until a material explanation is sufficient to account for such an apparently immaterial process, there is no reason to abandon the idea of an immaterial soul.

I am not assuming the existence of an immaterial soul, I’m saying that an immaterial soul is the best current explanation for the apparently immaterial processes of the human mind. The materialist position, on the other hand, requires ignoring available evidence in order to maintain a dogmatic attachment to the proposition that all observed processes have a material explanation.

Peace and God bless!
 
What I said has nothing to do with the appeal of a given metaphor, merely the ability to create metaphors. I maintain that A.I.s do not exhibit this ability, nor do I believe they ever will be able to, and I haven’t changed my stance on this at all.

I merely demand that if an A.I. is going to be used to disprove the notion of an immaterial soul then it should be capable of performing tasks that are indicative of an immaterial soul. If an A.I. can’t rise to the challenge then it can’t exactly disprove anything about the human mind now, can it?

As for time not working in a loop, that is precisely my point. We can conceive of an idea that is utterly disparate from our experience of reality, and yet these ideas are communicable and intelligible. If the mind were material there should be no basis for the creation of ideas that are utterly removed from real experience.

And this does nothing to disprove the notion of an immaterial soul, which is my point. The Turing Test does not establish what you claimed it establishes.

You are merely reducing thinking to “information processing” without answering the point that I’m making, namely that the human mind apparently does more than simply process information when it combines disparate, abstracted concepts to form new ideas. This is something we do naturally and readily, yet no other material process, including other forms of organic life, appears capable of this. You can claim that it is merely information processing, but it is not apparent how this is so. My argument is that A.I.s demonstrably do not perform the tasks that are associated with an immaterial soul. This is indisputable, as there are no A.I.s that are able to abstract information from experience and combine disparate abstracted concepts to create new ideas. This ability not only doesn’t arise from a silicon environment, but it does not seem reasonable that it arises from an organic environment either. It is not a question of the material doing the processing; it is the fact that material processes are not sufficient to explain this observable phenomenon. Until a material explanation is sufficient to account for such an apparently immaterial process, there is no reason to abandon the idea of an immaterial soul.

I am not assuming the existence of an immaterial soul, I’m saying that an immaterial soul is the best current explanation for the apparently immaterial processes of the human mind. The materialist position, on the other hand, requires ignoring available evidence in order to maintain a dogmatic attachment to the proposition that all observed processes have a material explanation.

Peace and God bless!
Nice 👍
 
I merely demand that if an A.I. is going to be used to disprove the notion of an immaterial soul then it should be capable of performing tasks that are indicative of an immaterial soul. If an A.I. can’t rise to the challenge then it can’t exactly disprove anything about the human mind now, can it?
Quite a few problems here:

#1: it is an assumption that some immaterial soul exists. You can’t even define what an immaterial soul might be, much less offer an epistemological method to show that it exists. There is no “soul-o-meter” which you can point at some entity, press a red button, and then see a dial which says “no soul”, or “material soul”, or “immortal soul”.

#2: you merely assume that the solution of those presented tasks would necessitate this immaterial soul. On what basis do you assume that those tasks require an immaterial soul (whatever that might be)?

#3: most of those problems are WAY beyond what many people could perform. So at most you could prove that most people do NOT have immaterial souls. And not just infants and toddlers. By the way, not even the church dares to say when that “ensoulment” might occur. Once upon a time it was the “first breath”, then it was the “quickening” or some other event. But not any more. It is obvious that the moment of “ensoulment” cannot happen at conception, because twins come into existence when the zygote splits. But the church is silent… due to embarrassment?

#4: the examples of analyzing people’s previous input and extrapolating what their preferences will be (for the sake of recommendations) are exactly the kind of task where the AI can connect previous facts and possible new preferences. There is no need for assuming “science-fiction” time loops, or for presenting unicorns (horse + cow combinations).

#5: a successful AI experiment is not supposed to “disprove” the existence of some immaterial soul. Just like the fact that stage magicians can replicate the allegedly “paranormal” feats of some charlatans does not “disprove” the existence of some “paranormal” powers. In both cases they make the assumption of the soul or the paranormal a useless and unnecessary assumption.
I am not assuming the existence of an immaterial soul, I’m saying that an immaterial soul is the best current explanation for the apparently immaterial processes of the human mind. The materialist position, on the other hand, requires ignoring available evidence in order to maintain a dogmatic attachment to the proposition that all observed processes have a material explanation.
And finally, your “explanation” explains nothing. How does this alleged immaterial soul interact with the material brain or the silicon-based computer?
 
Quite a few problems here:

#1: it is an assumption that some immaterial soul exists. You can’t even define what an immaterial soul might be, much less offer an epistemological method to show that it exists. There is no “soul-o-meter” which you can point at some entity, press a red button, and then see a dial which says “no soul”, or “material soul”, or “immortal soul”.
I’m not assuming the existence of an immaterial soul, I’m concluding that there probably is one based on the evidence. Material processes don’t suffice to explain the workings of the human mind, so until some evidence indicates that material processes suffice we must supply some other answer. The evidence points to an immaterial process, so that is what I’m holding to now. I’m open to evidence that proves otherwise, but none so far exists.
#2: you merely assume that the solution of those presented tasks would necessitate this immaterial soul. On what basis do you assume that those tasks require an immaterial soul (whatever that might be)?
No, again I’m concluding that it requires an immaterial soul because material processes can’t reproduce this aspect of the human mind, whether philosophically or experimentally. You are assuming (whether explicitly or implicitly) that matter is all there is, and therefore you are ruling out an immaterial soul despite the evidence that shows that material processes can’t account for the activity of the mind.
#3: most of those problems are WAY beyond what many people could perform. So at most you could prove that most people do NOT have immaterial souls. And not just infants and toddlers. By the way, not even the church dares to say when that “ensoulment” might occur. Once upon a time it was the “first breath”, then it was the “quickening” or some other event. But not any more. It is obvious that the moment of “ensoulment” cannot happen at conception, because twins come into existence when the zygote splits. But the church is silent… due to embarrassment?
I’m not sure what problems you are referring to that go beyond what most people can perform. Every socially functional adult can abstract concepts and recombine them to form wholly new ideas, and so can children that are of the age to communicate (though we don’t know exactly when they develop this capability). Some people can’t, whether through physical damage or deficiency, but this doesn’t mean that the capability isn’t within their nature, merely that it lacks expression at this time. The rest of your point here, about when ensoulment occurs, is completely irrelevant to the topic at hand.
#4: the examples of analyzing people’s previous input and extrapolating what their preferences will be (for the sake of recommendations) are exactly the kind of task where the AI can connect previous facts and possible new preferences. There is no need for assuming “science-fiction” time loops, or for presenting unicorns (horse + cow combinations).
I don’t know what examples you’re talking about, but extrapolating preferences from data would be a material process, not at all like what I am talking about. The A.I. in such a case would not be abstracting concepts from one concrete experience, and then recombining several disparate concepts into a new idea that has not been experienced at all. It is precisely this type of activity that goes beyond material processes, while the process you describe is merely a matter of collecting and collating data and running probabilities based on correlations between various points of data; A.I. can do the latter, but this isn’t what I’m talking about.
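To illustrate what I mean by “collecting and collating data and running probabilities based on correlations”, here is a rough sketch of that kind of preference extrapolation. The function name, matrix layout and numbers are purely illustrative (not any particular product’s algorithm): it scores unrated items by how strongly their rating patterns correlate with items the user already rated.

```python
import numpy as np

def recommend(ratings, user_index, top_n=3):
    """ratings: users x items matrix, 0 = unrated.
    Score each unrated item by correlating its column with the columns of
    items the user has already rated, weighted by those ratings."""
    ratings = np.asarray(ratings, dtype=float)
    user = ratings[user_index]

    # Item-item similarity from co-rating patterns (cosine of centered columns).
    centered = ratings - ratings.mean(axis=0, keepdims=True)
    norms = np.linalg.norm(centered, axis=0) + 1e-9
    item_sim = (centered.T @ centered) / np.outer(norms, norms)

    rated = user > 0
    scores = item_sim[:, rated] @ user[rated]   # similarity-weighted sum of likes
    scores[rated] = -np.inf                     # never re-recommend rated items
    return np.argsort(scores)[::-1][:top_n]     # indices of the best candidates
```

Everything here operates on correlations between existing data points; nothing in it abstracts a universal like “roundness” or recombines it with an unrelated concept, which is the distinction I am drawing.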

continued…
 
#5: a successful AI experiment is not supposed to “disprove” the existence of some immaterial soul. Just like the fact that stage magicians can replicate the allegedly “paranormal” feats of some charlatans does not “disprove” the existence of some “paranormal” powers. In both cases they make the assumption of the soul or the paranormal a useless and unnecessary assumption.
Yes, I agree. Given that A.I. comes nowhere close to demonstrating the kind of abstraction that I’m talking about, and that there is no reason to believe a material process ever can (based on what we currently know about material processes), the existence of an immaterial soul is not only a safe assumption, it is necessary in order to explain some of the everyday mental tasks humans perform, unless one wants to simply take a mystical mindset and say that matter has some unseen and seemingly impossible property that defies everything we know about it. I prefer to stay grounded with the evidence, however.
And finally, your “explanation” explains nothing. How does this alleged immaterial soul interact with the material brain or the silicon-based computer?
The immaterial soul is the form of the mind, defining it and giving it existence. It interacts with the brain in a similar manner to how “circle” interacts with a wheel, though this is an imperfect analogy. It would require a lot more time and energy to explain the underlying philosophy than I am able to provide right now. You might be able to find some good works on the topic of Thomistic realism that go into the subjects of form and matter, and immaterial and material souls, but I don’t know what your level of education is when it comes to such subjects so I don’t know what specifically to recommend.

This paper by a friend of mine might help explain some of the problems and concepts that I’m talking about, and might serve as a good jumping-off point for learning more about these matters:

newdualism.org/papers/C.Fadok/FadokThesis.htm

He presents several more points besides the one that I’ve been leaning on in this discussion, but I’m not as comfortable with them so I don’t bring them up.

For the purpose of this discussion it suffices to simply say that matter does not appear capable of performing the tasks that the human mind routinely performs, and until such time as material processes alone, aside from the human mind (if we are to assume that the human mind might be purely material), are shown to perform these tasks then it is reasonable to assume that something beyond a material process is occurring in the human mind.

Peace and God bless!
 
Material processes don’t suffice to explain the workings of the human mind…
Says who? And what is their “evidence”? And what kind of “explanation” are you looking for?
The immaterial soul is the form of the mind, defining it and giving it existence. It interacts with the brain in a similar manner to how “circle” interacts with a wheel, though this is an imperfect analogy.
I am not interested in analogies (imperfect or not), I am interested how could an immaterial “something” physically interact with a physical substance.
For the purpose of this discussion it suffices to simply say that matter does not appear capable of performing the tasks that the human mind routinely performs…
Does not “appear capable”? Is that your argument?
 
Says who? And what is their “evidence”? And what kind of “explanation” are you looking for?
Read up on the matter, starting with the paper I linked to.
I am not interested in analogies (imperfect or not), I am interested how could an immaterial “something” physically interact with a physical substance.
An immaterial substance can’t physically interact with a physical substance. That is a contradiction in terms. If you are looking for physical interaction then you are not understanding the matter at hand, and you are arbitrarily limiting the scope of interactions.
Does not “appear capable”? Is that your argument?
Again, read up on the matter. Material processes work only with particular points of data and experience, and do not abstract universals from these experiences. A computer does not abstract “circle” from the experience of a coin, for example, and this can be seen by its inability to recombine abstracted concepts from disparate experiences to form new ideas. Beyond that you’ll have to do some reading as it’s a wordy topic and I won’t be going into the depth required to guide you through it as I don’t know your background understanding.

The fact that you’re asking for physical interactions between immaterial and material substances indicates that some more in-depth reading on this area of philosophy would be beneficial before continuing.

I will answer specific questions as I can, but a fair treatment of the immateriality of mind goes way beyond what a few internet posts can cover, especially when the basics haven’t been established yet.

My original point stands, however, which is that the Turing Test does not indicate that an A.I. is doing what an immaterial soul is said to do. The Turing Test merely observes conversation, and it says as much about the observer as it does about the subjects. This makes it an inappropriate test of the natures of the subjects, as it does not observe their underlying nature and includes the confounding factor of the observer’s subjective judgement.

Peace and God bless!
 
Does not “appear capable”? Is that your argument?
Just to be clear: I know that computers are currently incapable of performing this type of mental action. No such computer exists, and there is currently no method for replicating this type of activity with material processes. If the organic brain performs these processes, and not some immaterial mind, then we don’t yet know how, and we haven’t been able to identify this process within the brain. This much is simply stating the facts as we know them.

I believe that we will never have a physical explanation of these mental processes, because what I know of material processes precludes this kind of mental activity. I am open to the possibility that I am wrong about this, however, and when actual facts are presented that contradict this belief my mind will be changed. Such facts don’t exist at this time, however, and I’m given no reason to believe that they will as we do not appear to be getting any closer to finding this kind of abstraction in material processes.

Peace and God bless!
 
The technology isn’t there yet to create this kind of thing; we are getting close and it will probably be the next major breakthrough, but not yet. Right now we have hardware and software. When this kind of technology is realized, it will create an entirely new category imo. I believe it will be a sort of merging between biology and computer technology.
 
The technology isn’t there yet to create this kind of thing; we are getting close and it will probably be the next major breakthrough, but not yet. Right now we have hardware and software. When this kind of technology is realized, it will create an entirely new category imo. I believe it will be a sort of merging between biology and computer technology.
We’ll see. It isn’t yet clear that such a technology is even theoretically possible. The type of processing that allows for abstract, universal thought hasn’t been demonstrated with material components on even a rudimentary level. We can’t even demonstrate the organic brain is capable of such things, and haven’t identified how such a process can occur materially.

Peace and God bless!
 