What if A.I. became conscious?

  • Thread starter: YHWH_Christ
Status: Not open for further replies.
Ask Elon Musk; if my memory serves me right, he’s pretty paranoid about that sort of stuff.

( just try not to give him any of your money 😉)
 
AI can’t exist unless programmed.

And without understanding what consciousness even is (methinks it’s far more human-bodily than cognitive or informational), it probably isn’t going to be programmed.

ICXC NIKA
 
Not necessarily. A true AI might include only the requisite hardware, then accept input/stimulus from people / other robots. Akin to how a human learns.
 
Akin to how a human learns.
You cannot emphasize this strongly enough. Amazing that people are still ignorant of the self-educating properties of the new generation of A.I.s. AlphaZero was able to master chess in a few hours by playing against itself. No external tuition was given.
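To make the “self-educating” point concrete, here is a minimal sketch in Python (my own toy illustration, nothing like AlphaZero’s actual architecture or scale): two copies of one learner play the simple game of Nim against each other and improve purely from win/loss feedback, with no example games and no external tuition.

```python
import random
from collections import defaultdict

PILE, MAX_TAKE = 10, 3
Q = defaultdict(float)     # Q[(pile_left, take)] -> estimated value of that move
N = defaultdict(int)       # visit counts, for incremental averaging
EPSILON = 0.1              # how often the learner tries a random move

def choose(pile, greedy=False):
    """Pick how many objects to take from the pile."""
    moves = list(range(1, min(MAX_TAKE, pile) + 1))
    if not greedy and random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(pile, a)])

def play_one_game():
    """Two copies of the same learner play one game of Nim against each other."""
    pile, player = PILE, 0
    histories = ([], [])                 # moves made by player 0 and player 1
    while pile > 0:
        a = choose(pile)
        histories[player].append((pile, a))
        pile -= a
        if pile == 0:
            return player, histories     # whoever takes the last object wins
        player = 1 - player

for _ in range(20000):
    winner, histories = play_one_game()
    for p, hist in enumerate(histories):
        reward = 1.0 if p == winner else -1.0
        for move in hist:                # Monte Carlo update toward the final outcome
            N[move] += 1
            Q[move] += (reward - Q[move]) / N[move]

# After training, the greedy policy usually learns to leave its opponent a
# multiple of 4 (the known winning strategy), purely from self-play feedback.
print({pile: choose(pile, greedy=True) for pile in range(1, PILE + 1)})
```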
 
As a programmer, I find the idea of AIs gaining consciousness hard to believe, even in a hypothetical scenario. Computers are pretty stupid: they’re so great because they’re very quick at doing exactly what they’re told. Without code they’re just a brick.
The science fiction (if that’s any guide at all) usually has self-awareness happening by accident: some complex computer system gains consciousness apart from any particular plan of its creators (think Terminator’s Skynet here). The alternative is 2001’s HAL 9000, but at least in the books Clarke makes it pretty clear that HAL’s consciousness is by and large a trick, certainly orders of magnitude more complex than any AI we have today, but still functioning as a very complex set of programs that mimic a sentient being. The only glimmer of anything like sentience in HAL comes when he is ordered to hide the existence of the monolith on the Moon, creating a conflict with his core function of reporting everything he knows accurately to the human crew.

Whether any of this is actually a useful guide is beyond me. As you state, computers are pretty stupid, and any Turing-complete computer, however much it gains in speed and overall capacity, still by and large functions much as electronic computers have since the 1940s, indeed since the first real Turing-complete design: Charles Babbage’s Analytical Engine back in the 19th century.

I won’t hold my breath for any true AI in the sense of being self-aware and conscious. The question at this stage is, strictly speaking, whether anything like that is even necessary. If we could build a HAL 9000, would we really need it to be “truly” self-aware? It would be an extraordinarily powerful tool even as it was.
 
Just WHAT is self-aware? The only way we can tell is that the conversation partner (and we don’t know if that partner is a human or an A.I.) uses the words “I” and “you” (etc.) in a syntactically and semantically correct manner.

The good old duck principle is applicable here (as always): “if it looks like a duck, walks like a duck, tastes like a duck, quacks like a duck… it is very probably a duck”.

In other words, what is the difference between the real McCoy and a very good simulation of it?
 
But it was programmed to learn chess. It did not find a webpage with chess instructions and tell itself “gee, this looks like a fun game”.

A.I. has gotten impressive, no doubt. But it is not conscience; it cannot do things we take for granted. It cannot, on its own, decide to bake a cake or post a response to this web forum. These systems are still programmed for specific tasks.
No computer has a mind. Nor is any such artificial mind on the horizon.
 
But it was programmed to learn chess.
Humans learn the same way. A child is programmed - taught to play chess.
But it is not conscience, it cannot do things we take for granted.
What is “conscience”? Self-awareness is different. Where is the “conscience” in a feral child?
No computer has a mind. Nor is any such artificial mind on the horizon.
What is a “mind”? It is the activity of the brain or the processing unit. Same stuff.
 
Humans learn the same way. A child is programmed - taught to play chess.
That’s not what I mean. A computer does not decide to play chess. A child may not decide, but his parents do. A computer does not sit around and decide to learn chess, or decide that another computer should learn chess. A group of computers do not get together and decide to study the origins of computer science so that they can better understand themselves, or to study their own neural networks so that they can make themselves better, or to entertain themselves with games.
What is “conscience”? Self-awareness is different. Where is the “conscience” in a feral child?
I typed too fast; I meant to write conscious, not conscience. I was referring to self-awareness. A rational understanding of who we are. But my mistake is somewhat fortuitous, as conscience, i.e. an inner knowledge of right and wrong, is another key example of how far computers are from having a mind.
What is a “mind”? It is the activity of the brain or the processing unit. Same stuff.
We do not know if it is the “same stuff”. I personally am a dualist; that is my belief, and admittedly it is why I believe “true AI” will not arrive or be possible. Now, I do not have direct empirical evidence of this, but neither do you have direct empirical evidence for saying the activity of the brain is all there is to a mind.

But we need not delve into the philosophical aspects of the mind to know that computers are a long way from achieving such a thing, and indeed have not really been on the right path. Artificial intelligence has been a field of computer science for a long time. I took my first AI class in the mid 80s, and I studied it more in graduate school in the 90s. It is a moving target: AI is simply applying computers and techniques to things that programmers had not previously figured out how to do. In the 80s, most of it was game theory. In the 90s, natural language processing was considered part of AI (not to mention voice recognition). Now, SIRI can process most of what we say. That would have seemed very hard to do in the mid-to-late 90s. So AI keeps advancing. But it is always targeted to specific tasks.

The danger of AI is not that it will overtake us. The danger of AI to our culture is different. You stated above “if it looks like a duck, walks like a duck, tastes like a duck, quacks like a duck… it is very probably a duck”. That is the danger. SIRI does not pass the Turing test, but it comes closer, as a commonly available tool, than anything people expected 20 years ago. I think we will end up assuming these tools, as they advance, are too much like us, and anthropomorphizing computers. That will not be good for our culture, not at all.
 
That’s not what I mean. A computer does not decide to play chess. A child may not decide, but his parents do.
Sure. But the point is machine learning is not fundamentally different from human learning. Are you talking about a totally “tabula rasa” situation, where an isolated A.I. will just “sit there” and not exhibit any desire for self-improvement? Because even that is not necessarily true. And a feral child would never get past the “opening stages” of development.
I was referring to self-awareness.
No problem. Happens to me too. But you cannot know if another being has self-awareness, unless it exhibits the “symptoms” of self-awareness via communication.
We do not know if it is the “same stuff”. I personally am a dualist, that is my belief, and admittedly why I believe “true AI” will not arrive or be possible. Now, I do not have direct empirical evidence of this, neither do you have direct empirical evidence for saying the activity of the brain is all there is to a mind.
Well, it depends on how you measure the activity of the brain. Even small interference with the electro-chemical activity of the brain will influence the mind. And when the interference stops, the mind will get back to “normal”.

Our most important “feature” is information processing. How it happens is of no importance. It can be via neural networks or transistors.
Now, SIRI can process most of what we say. That would have seemed very hard to do in the mid-to-late 90s. So AI keeps advancing. But it is always targeted to specific tasks.
Sure, but there is no upper limit to the tasks it can perform. Remember WATSON, which beat the best human players in JEOPARDY. These days it is a diagnostician “who” can help out thousands of doctors all over the globe. A.I.s can be expanded and given new tasks.
The danger of AI is not that it will overtake us. The danger of AI to our culture is different. You stated above “if it looks like a duck, walks like a duck, tastes like a duck, quacks like a duck… it is very probably a duck”. That is the danger.
Why would it be a danger? If some A.I. becomes sufficiently like a human, all the way to successfully passing the Turing test, it will be a great step toward a better world. What you say sounds like the opposition to any and all advancement, arguing that A.I.s will replace us and eventually we will become obsolete. There are two answers to this problem. One is that the world is sufficiently complicated to present a “Lebensraum” for both of us. The other one is somewhat facetious: humans are lousy creatures, both biologically and informatically. If they were to die out, the next conscious beings could hardly be worse.
 
But the point is machine learning is not fundamentally different from human learning.
Well, we do not know exactly what happens in the brain for learning to take place. We know exactly what happens in a machine that has been programmed to learn. So it’s hard to support the statement “machine learning is not fundamentally different from human learning”.
Our most important “feature” is information processing. How it happens is of no importance. It can be via neural networks or transistors.
It’s of no importance? Yet you make claims about how it works.
Sure, but there is no upper limit to the tasks it can perform.
How do you know this? Do you believe it will someday be able to solve the travelling salesman problem efficiently?
 
Well, we do not know exactly what happens in the brain for learning to take place.
It depends on the technical details. A whole lot, maybe the overwhelming majority, happens in the “white cells”, the subconscious. Both in the organic and in the inorganic environment there is an “input device” which collects the external data, a processing “unit”, and storage. The technical details are not important.
Its of no importance? Yet you make claims about how it works.
Let’s just use a simple analogy. The information can be stored on paper, in some electronic media, or in a neural network. The technical details are interesting, but irrelevant.
How do you know this? Do you believe it will someday be able to solve the travelling salesman problem efficiently?
Efficiently yes, exactly no. But that is irrelevant. I was talking about the types of problems. They can be mechanical, biological, medical, physical… etc. If a specially designed or “grown” A.I. can solve them, then they can be combined into a “super-A.I.” In humans we have musicians, artists, designers, math prodigies, people with phenomenal memories. There is no reason to think that such capabilities cannot be combined into one person, or one A.I.
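To put the “efficiently yes, exactly no” distinction in concrete terms, here is a small illustrative sketch in Python (the nine random “cities” are my own made-up toy instance): an exhaustive search finds the exact shortest tour but the work grows factorially with the number of cities, while a nearest-neighbour heuristic runs quickly and settles for an approximate answer.

```python
import itertools, math, random

# A made-up toy instance: nine random points. The contrast is the point here,
# not the data -- exact search gives the true optimum but scales factorially,
# while a cheap heuristic answers fast with no guarantee of optimality.
random.seed(0)
cities = [(random.random(), random.random()) for _ in range(9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact: try every ordering of the cities (9! tours already; 20! would be hopeless).
best_exact = min(itertools.permutations(range(len(cities))), key=tour_length)

# Heuristic: nearest neighbour -- polynomial time, usually close, never certain.
unvisited, tour = set(range(1, len(cities))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
    tour.append(nxt)
    unvisited.remove(nxt)

print("exact optimum :", round(tour_length(best_exact), 3))
print("heuristic     :", round(tour_length(tour), 3))
```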

I am not sure what your questions are.
 
Just WHAT is self-aware? The only way we can tell is that the conversation partner (and we don’t know if that partner is a human or an A.I.) uses the words “I” and “you” (etc.) in a syntactically and semantically correct manner.

The good old duck principle is applicable here (as always): “if it looks like a duck, walks like a duck, tastes like a duck, quacks like a duck… it is very probably a duck”.

In other words, what is the difference between the real McCoy and a very good simulation of it?
The standard test for self-awareness is the classic dot-on-the-forehead test. If you put a sticker on the forehead of, say, a dog and show it to him in a mirror, he doesn’t seem to have the cognitive hardware to recognize that the sticker on the head in the mirror is actually attached to his own head. There are a few animals, however, that do seem to possess this kind of self-awareness: the great apes, dolphins and elephants. The most likely explanation is that these species, like humans, live in fairly complex social structures, so being able to tell “self” from others is likely of great benefit.

As to computers, I suppose it comes down again to mimicking that behavior. But again, if it’s part of the programming, then that’s all it is: mimicry. Much as you could probably train a dog to act as if the head in the mirror is his, it’s just conditioning; it’s not an inherent quality of the dog’s mind.

I think, to a large extent, researchers have backed away from AI as an analog of human behavior. Machine learning is a related, but somewhat different discipline, and in this field, computers have taken enormous strides. But really, describing that as AI is more of a marketing strategy than an accurate description of how such systems work.
 
Let’s just use a simple analogy. The information can be stored on paper, in some electronic media, or in a neural network. The technical details are interesting, but irrelevant.
The technical details are not just unimportant, they are not known. So how can you make the claim that “machine learning is not fundamentally different from human learning”, or that there is “no upper limit to the tasks it (an AI) can perform”? Look, I admit some of my views are philosophically based, and while I believe them to be supported by empirical evidence, I do not see them in any way as being scientifically shown to be the case. But you make extraordinary claims about what an AI can accomplish, yet you do not have any evidence of this.
Efficiently yes, exactly no. But that is irrelevant. I was talking about the types of problems. They can be mechanical, biological, medical, physical… etc. If a specially designed or “grown” A.I. can solve them, then they can be combined into a “super-A.I.” In humans we have musicians, artists, designers, math prodigies, people with phenomenal memories. There is no reason to think that such capabilities cannot be combined into one person, or one A.I.
There is every reason to believe many of the things we accomplish as humans cannot be combined into one AI. Earlier in the thread, a poster was talking about how computers can now program themselves, i.e. write code. Anyone who knows the complexity that goes into a large software project and then sees the very specific ways computers can “write code” would realize there is a light year of difference. Again, computers do not sit around and decide what to do and then how to do it. We do. That is the difference. It is a world of difference.
 
But really, describing that as AI is more of a marketing strategy than an accurate description of how such systems work.
A professor gave me a good definition of AI around three decades ago, and I think it still applies. Artificial intelligence is the area of computer science where we are still trying to figure out how to get computers to do a given task. After it has been solved, we move on. My example of natural language processing is a very good one. In the 80s and 90s, this was commonly thought of as an area of AI research. Now, everyone has a tool on their phone which does it as well as any AI researcher 30 years ago would have dreamed possible. Hence, it’s no longer considered AI.
 