What if A.I. became conscious?

  • Thread starter: YHWH_Christ
As to computers, I suppose it comes down again to mimicking that behavior. But again, if it’s part of the programming, then that’s all it is, mimicry.
The question is still the same: “What is the difference between a good emulation and the original?”
 
Then how can you make the claim that “machine learning is not fundamentally different from human learning”, or that there is “no upper limit to the tasks it (an AI) can perform”?
Let’s use another analogy, even though we know that analogies are never perfect. A human and a self-driving car can get to the same destination, using different methods. As long as the A.I. performs the same task, it is not relevant “how” it is done. We cannot fly the same way as a bird does, but we can fly using airplanes.

The point is the action which needs to be performed, not the specific method which is applied.
 
I think we are talking past each other. It’s not an issue of any specific task that AI can perform. I get your point that it’s not relevant when looking at any given task. But you claimed more than that: you made the claim that learning is not fundamentally different. The only way to say they are not fundamentally different is to know how both things learn, and we do not know that in one of the two cases.
 
niceatheist:
As to computers, I suppose it comes down again to mimicking that behavior. But again, if it’s part of the programming, then that’s all it is, mimicry.
The question is still the same: “What is the difference between a good emulation and the original?”
I guess that comes down to philosophy. If I were to throw my inexpert hat in the ring, I’d say that if something like self-awareness (as I defined above) were an emergent property of a system, as opposed to something specifically programmed for, then we’ve probably gone beyond mimicry. But then again, since we’re not even sure how it works in the brains of those species that seem to possess it, perhaps even our behavior is just an elaborate set of functions that look like self-awareness.
 
“Open the pod bay doors, HAL”

“I’m sorry, Dave. I’m afraid I can’t do that…”

That is what will happen, as AI is informed by fallen, flawed humanity. Some of our worst traits will have crept in.
 
I think we are talking past each other. It’s not an issue of any specific task that AI can perform. I get your point that it’s not relevant when looking at any given task. But you claimed more than that: you made the claim that learning is not fundamentally different.
I am talking about the process of learning: accumulating new information and integrating it into the existing set of information. Is that not what you were talking about?
 
I guess that comes down to philosophy. If I were to throw my inexpert hat in the ring, I’d say that if something like self-awareness (as I defined above) were an emergent property of a system, as opposed to something specifically programmed for, then we’ve probably gone beyond mimicry. But then again, since we’re not even sure how it works in the brains of those species that seem to possess it, perhaps even our behavior is just an elaborate set of functions that look like self-awareness.
Think about “awareness”. Look at the birds in the Arctic. During the short summer there are millions of birds nesting and raising their young. Nests right next to each other… little ones in every one of them. And the parents need to identify their own nests and their chicks. That requires a huge amount of information processing; they must be “aware” of their surroundings. Awareness and self-awareness are not that different: “merely” recognizing that “I” am here and “you” are there. Of course I am simplifying it a little.
 
It sounds to me that you are talking about the definition of learning. You can call it a process if you want, but it’s a definition. I have some financial analysis software that I wrote myself and use on a regular basis for my job. It is able to go out, accumulate new information, integrate it with the existing set of information, and make decisions based on that. There is absolutely nothing “AI” about the software. It could not have been written 20 years ago, because of its hardware, memory, and internet requirements, but the software theory involved is all very plain-Jane. Yet it follows your process/definition of learning, and I can assure you it is quite limited in its “intelligence”. So a definition of learning alone is not really adequate if we are discussing the potential of AI and its capability with respect to the human mind; only with more than that can we say there is, or is not, something fundamentally different.

I can tell you there is something fundamentally different between the software I just described and AlphaGo, the system developed by Google DeepMind that has learned to beat anyone at Go. I know that not just because of what they do, but because I know how they do it. Now, you and I do not know how our human brain learns. So we cannot say that AI is fundamentally the same, which implies it has potentially the same capability (an implication I am quite certain you intend to give).
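To make that contrast concrete, here is a minimal Python sketch. Everything in it (function names, prices, thresholds) is invented for illustration and is not the actual software described above. Both programs “accumulate and integrate” new information, but only the second changes its own decision rule as a result:

```python
def rule_based_decision(price_history, new_price):
    """Plain, non-AI logic: the decision rule is fixed by the programmer."""
    price_history.append(new_price)                # accumulate new information
    avg = sum(price_history) / len(price_history)  # integrate it
    return "BUY" if new_price < 0.95 * avg else "HOLD"  # rule never changes

class LearnedDecision:
    """ML-style logic: the rule's parameter is rewritten by experience."""
    def __init__(self):
        self.threshold = 0.95  # this will be adjusted by feedback

    def update(self, was_profitable):
        # Feedback nudges the threshold; the decision rule itself changes.
        self.threshold += 0.01 if was_profitable else -0.01

    def decide(self, price_history, new_price):
        price_history.append(new_price)
        avg = sum(price_history) / len(price_history)
        return "BUY" if new_price < self.threshold * avg else "HOLD"
```

The first function satisfies the accumulate-and-integrate definition of learning yet can never become “smarter” than the rule its programmer wrote; the second, however crude, can.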
 
It sounds to me that you are talking about the definition of learning. You can call it a process if you want, but it’s a definition.
What is the difference? When written on paper, it is a definition. When executed, it is a process. 😉
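A trivial way to see that distinction, sketched in Python (the name and data are invented):

```python
# Written down, this is just a definition of "learning": words on paper.
def learn(knowledge, new_info):
    """Integrate new information into the existing set of information."""
    knowledge.update(new_info)
    return knowledge

# Executed, the same definition becomes a process.
knowledge = {"sky": "blue"}
knowledge = learn(knowledge, {"grass": "green"})
print(knowledge)  # {'sky': 'blue', 'grass': 'green'}
```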
Now, you and I do not know how our human brain learns.
We don’t know ALL the exact details. We do know that the neural network changes every time it processes new information. Every experiment performed on a human brain will substantiate this hypothesis. It will never “prove” it, because the concept of “proof” is only applicable to formal, axiomatic systems.
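For artificial neural networks, at least, that claim can be made concrete. Here is a toy sketch (the inputs, target, and learning rate are arbitrary) of a single perceptron-style unit whose weights change every time it processes a new example:

```python
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def process(x, target):
    """Perceptron-style step: make a prediction, then adjust the weights."""
    global bias
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    prediction = 1 if activation > 0 else 0
    error = target - prediction
    for i in range(len(weights)):        # the "network" itself is altered
        weights[i] += lr * error * x[i]  # by the information it processed
    bias += lr * error
    return prediction

print(weights)           # [0.0, 0.0]  before
process([1.0, 1.0], 1)
print(weights)           # [0.1, 0.1]  after: processing changed the network
```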

There is no experiment (for the time being) that would substantiate the “ghost in the machine” hypothesis. So it is a useless hypothesis.

My basic approach is simple. If it works, then it is fine, even if we don’t know exactly HOW it works.
 
We don’t know ALL the exact details. We do know that the neural network changes every time it processes new information. Every experiment performed on a human brain will substantiate this hypothesis. It will never “prove” it, because the concept of “proof” is only applicable to formal, axiomatic systems.
We don’t know enough detail to determine whether our attempts at AI are fundamentally the same, or of the same capabilities. We understand how neurons fire, how they are interconnected through synapses, and so on. But we do not understand even the basics of human thought: how all of those firings of neurons turn into our decisions, our memories, our rational thoughts. So how can we say that AI has the potential to do what we do? How can we say it is fundamentally the same? We don’t even know what we are comparing AI to.
 
We don’t know enough detail to determine whether our attempts at AI are fundamentally the same, or of the same capabilities. We understand how neurons fire, how they are interconnected through synapses, and so on. But we do not understand even the basics of human thought: how all of those firings of neurons turn into our decisions, our memories, our rational thoughts. So how can we say that AI has the potential to do what we do? How can we say it is fundamentally the same? We don’t even know what we are comparing AI to.
That’s pretty much where I stand. Trying to replicate a black box, without knowing the inner workings of that black box, is well-nigh impossible. If it happens, it will be by accident, and I’m not going to hold my breath waiting for a Skynet-like event where some massive AI computer suddenly starts singing “Daisy Bell”.
 
Free will, even for the human being, is to an extent a mirage. Our behavior being heavily conditioned by our situation, by our physical soma, and by our subconscious mind, we think we are freer than we are.

But admitting that we are not perfectly free is intolerable to the philosophical crowd, because we’d need to reevaluate our thoughts about morality, about justice, and even, to an extent, about religion.

The same goes for the relationship between free will and mind. You could have an AI with no free will at all, and it would resemble us more than we’d want to admit.

ICXC NIKA
 
There are programs that write their own code. Some of the current AI is pretty good at learning. As it is now, they learn only specific tasks, but they are very good at them. Google DeepMind’s AlphaZero is an example: it learned to play chess in a few hours with just the basic rules of the game. All the previous chess engines needed opening books programmed into them, along with a huge database of past games played. AlphaZero started from scratch and learned to play to the point that no chess engine could beat it.
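For illustration only, here is a drastically simplified sketch of that self-play idea. The real system used deep neural networks and tree search; this toy uses tabular learning on the game of Nim (take 1 or 2 stones; whoever takes the last stone wins), but the principle is the same: given nothing but the rules, the program improves by playing itself.

```python
import random

Q = {}  # learned value of (stones_remaining, stones_taken), starts empty

def choose(stones, explore=0.1):
    """Pick a move: mostly the best known, sometimes a random experiment."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def self_play_game():
    """Play one game against itself, then update the move values."""
    stones, history = 10, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone wins. Walking the game backwards,
    # every other move belongs to the winner (+1) or the loser (-1).
    for i, (s, m) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, m)] = Q.get((s, m), 0.0) + 0.1 * (reward - Q.get((s, m), 0.0))

for _ in range(5000):  # "learning" from nothing but the rules
    self_play_game()

# Nim's winning strategy is to leave the opponent a multiple of 3 stones,
# so from 10 stones the learned policy should typically take 1.
print(choose(10, explore=0.0))
```

No opening book and no database of past games: the program discovers the winning strategy purely from the outcomes of its own play.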
 
Free will is a necessary concept, because as soon as you abandon it you end up in an absurd position in which everything is predetermined, including every word in this discussion. The end result is the conclusion that reason is also an illusion, because even our apparently rational arguments and irrational arguments were predetermined. So some people were predetermined to believe and argue for a position, and others were predetermined to argue the exact opposite position. If there is no free will, then it is very ironic that we are arguing about anything, especially about free will and rational thought, since neither would exist.
 
All probability is predetermined; we just don’t have enough information to make the exact prediction. Probability says that a coin flipped 100 times should show heads about half the time and tails the remaining half. But if we had enough knowledge of the physics of the coin in question, and of the kinematics, then we should be able to predict every flip. I don’t see how you make probability an alternative to free will and determinism. Are there papers written on the subject that use probability as a solution?
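A software analogy of that point (the seed value is arbitrary): a pseudo-random coin flip looks unpredictable, but once you know the hidden state it is fully determined.

```python
import random

# Two generators with the same hidden state (the seed) produce the same
# "random" flips: complete knowledge makes the outcome fully predictable.
rng_a = random.Random(42)
rng_b = random.Random(42)

flips_a = [rng_a.choice("HT") for _ in range(10)]
flips_b = [rng_b.choice("HT") for _ in range(10)]

print(flips_a)
print(flips_a == flips_b)  # True: same knowledge, same "flips"
```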
 
That seems to be just an assertion that is contrary to physics. Physics claims that all things follow laws, and those laws mean predictability.
 
I have to consider how it would apply to anything larger than an atom, and to the free will/determinism discussion, but it seems you are still in the same absurd position of having neither free will nor reason in the end. What happens, happens; and whatever people happen to believe, they were determined to believe, whether it is predictable or not. You are still determined even if you can’t calculate the prediction. You can’t make a conscious change to your life. You will make certain decisions, however probable or improbable, whether you like them or not. There is a certain probability that you will go down one path, and a certain probability that you will go down another. In other words, it seems to be just another form of determinism: a more nuanced version in which you have introduced an aspect of time.
 