What if A.I. became conscious?

  • Thread starter: YHWH_Christ
Status
Not open for further replies.

YHWH_Christ (Guest)
I’m not saying it’s possible or impossible, but what if there is a hypothetical future where we created super advanced A.I. and proved through tests that these things can be considered conscious beings. What would the implications for Christianity be?
 
I’m not saying it’s possible or impossible, but what if there is a hypothetical future where we created super advanced A.I. and proved through tests that these things can be considered conscious beings. What would the implications for Christianity be?
The same as for everybody else. A conscious AI would be able to program a more intelligent version of itself, and that version could program one even more intelligent and so on. The end result would be based on the goals programmed into the original AI – which could be disastrous.
 
but what if there is a hypothetical future where we created super advanced A.I.
True AI wouldn’t be created; it would be discovered. It could be that in some physical pattern we will discover the actuality of a consciousness or intelligence. We would call it “artificial” only because we constructed the pattern, but consciousness was always a principle inherent in that particular pattern, not something created by humans.
What would the implications for Christianity be?
None. The mind body problem would still exist for AI just like it exists for humans; and theists would still argue that some form of mind/body dualism is the correct answer.

The only change that comes to mind is that we would be asking whether it was our destiny to discover AI, since it is not something that arose naturally and quite possibly never would have if we didn’t exist.
 
Let us imagine one of the super AI’s goals was to eliminate war. And the super AI machine thought, “It’s these humans who fight wars. If I just kill them all, I’ll have accomplished that goal.”
 
Let us imagine one of the super AI’s goals was to eliminate war. And the super AI machine thought, “It’s these humans who fight wars. If I just kill them all, I’ll have accomplished that goal.”
I could have sworn I saw this in a movie…

Anyway, the only way to sort out that problem is a worldwide EMP (assuming it somehow got onto the internet), and then it’s back to the Stone Age. lol. Anyone for a game of cards?
 
How will you launch an EMP when the Super AI controls all launch systems?
 
How will you launch an EMP when the Super AI controls all launch systems?
Well… we all knew it was going to end one day. But it might have a will for self-preservation, which would limit the number of options it has for destroying all of us. Apart from that… see you in heaven.
 
Why not take preventive or protective steps now, while we still can? There are organizations dedicated to controlling Super AI.
 
As a programmer, I find the idea of AIs gaining consciousness hard to believe, even in a hypothetical scenario. Computers are pretty stupid: they’re so great because they’re very quick at doing exactly what they’re told. Without code they’re just a brick.
 
Computers are pretty stupid: they’re so great because they’re very quick at doing exactly what they’re told. Without code they’re just a brick.
Yup. But there are programs that can write code – not all that super yet, but they can do it.
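To make the point above concrete, here is a minimal, hypothetical sketch of a program that writes and then runs code (the names `write_adder` and `add_5` are invented for this example; this is not a reference to any real code-generation system):

```python
# Illustrative sketch: a program that "writes code" and executes it.
# The generated function is new code, but its behaviour is fully
# determined by the template in write_adder.

def write_adder(n):
    """Return Python source for a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = write_adder(5)      # the program writes code...
namespace = {}
exec(source, namespace)      # ...and runs what it wrote
result = namespace["add_5"](10)
print(result)                # prints 15
```

This is the sense in which today’s code-writing programs still “do what the programmer tells them to do”: the output program can only be whatever the generating program was written to produce.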
 
And to do that they need code. In the end they’ll do what the programmer tells them to do.
But what has the programmer told them to do? Suppose the programmer told them to hack into Wall Street’s computers and drive up stock prices?
 
Why not take preventive or protective steps now, while we still can? There are organizations dedicated to controlling Super AI.
True, but we need to ask what it would really mean to create AI. As I said in an earlier post, we would not create true AI, we would discover it, probably by accident, because a priori we don’t really know or understand what it means for something to have an intellect and a will; we just know they exist. So the idea of controlling an AI once it exists is naive, in my opinion.

If we are really going to treat it like a true intellect, then we would have to reason with it.

One thing we can do is not allow it access to the launch codes or the internet.
 
But what has the programmer told them to do? Suppose the programmer told them to hack into Wall Street’s computers and drive up stock prices?
I’m not sure what your question is. “What has the programmer told them to do” is an extremely broad question. You kind of answered it in your second sentence: it can hack into Wall Street and raise stock prices. That’s what it can do. Some black hat programmer violated ethics to do something bad.
 
I’m not sure what your question is. “What has the programmer told them to do” is an extremely broad question.
You said, “In the end they’ll do what the programmer tells them to do.” And I admit that is pretty broad. But having said that, you are out on a limb – what WILL the programmer tell them to do? And will all programmers be benign and honest?
 
But having said that, you are out on a limb – what WILL the programmer tell them to do? And will all programmers be benign and honest?
They’re not all benign and honest; I never claimed they were. “Will” is also very broad: I could code something as basic as a console calculator, or something more complex that’s thousands of lines long, talks to a server, has a UI, and so on. It’s like asking whether all parents are benign and honest, but unlike parents, a program behaves according to its instructions. It will not operate outside of what is written in it: unlike the “phantom code” that I, Robot imagined, what I write, I write.
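A toy version of the console calculator mentioned above (all names invented for illustration) shows what “a program behaves according to its instructions” means in practice: it handles exactly the four operations it was written to handle, and nothing else:

```python
# Toy console calculator: evaluates a single "a op b" expression.
# It can only do what its code specifies; any other input is an error.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def calculate(expression):
    """Evaluate one expression of the form 'a op b', e.g. '3 + 4'."""
    a, op, b = expression.split()
    if op not in OPS:
        raise ValueError(f"unsupported operator: {op}")
    return OPS[op](float(a), float(b))

print(calculate("3 + 4"))    # prints 7.0
```

Ask it for anything outside those four operators and it fails; there is no “phantom code” for it to fall back on.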
 
They’re not all benign and honest. I never claimed they were.
Which is my point – by malice, desire for power, or simple failure to think ahead, some future programmer may open the bottle and let the genie out.

Have you read Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark?
 
Which is my point – by malice, desire for power, or simple failure to think ahead, some future programmer may open the bottle and let the genie out.
Malice, greed, and/or shortsightedness will not grant a computer the ability to make decisions that a human conscience can. It doesn’t make sense.
Have you read Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark?
I have not, no.
 