What if A.I. became conscious?

  • Thread starter: YHWH_Christ
Status: Not open for further replies.
You know I’m tempted to reply in kind. It’s like saying an artillery shell can never hit the wrong target because it cannot be fired without a human telling it what to do.
It depends on what you mean by “wrong”. Bugs can exist in the code so that the shell doesn’t hit the target it was supposed to hit. There can be a hardware malfunction. There can be user error where the user picks the wrong target. There are multiple kinds of “wrong”.
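To make the “bug” case concrete, here’s a minimal, purely hypothetical sketch (Python, with invented numbers): the program does exactly what it was written to do, and still aims at the wrong spot because of a single wrong character.

    # Hypothetical sketch: one wrong character, faithfully executed.
    def aim_point(launcher_x: float, range_to_target: float) -> float:
        # BUG: should be launcher_x + range_to_target
        return launcher_x - range_to_target

    print(aim_point(100.0, 250.0))  # prints -150.0; the intended aim point was 350.0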
 
It depends on what you mean by “wrong”. Bugs can exist in the code so that the shell doesn’t hit the target it was supposed to hit. There can be a hardware malfunction. There can be user error where the user picks the wrong target.
I think you’re making my argument for me.
 
I think you’re making my argument for me.
I don’t even understand what your argument is. You seem to act like code can just be churned out of a computer like a printer prints a report.
 
I don’t even understand what your argument is. You seem to act like code can just be churned out of a computer like a printer prints a report.
When did I say that?

I said computers can write code. They can.

I said there is a possibility that, as they get more complex, they may write code we don’t want.

I said there could be serious consequences.
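The first claim is trivially demonstrable. Here’s a minimal sketch (Python; the file name is made up) of a program writing and then running another program:

    # A program that writes a second program to disk, then executes it.
    # Trivial, but it is literally a computer writing code.
    import subprocess
    import sys

    with open("generated.py", "w") as f:
        f.write('print("I was written by another program")\n')

    subprocess.run([sys.executable, "generated.py"], check=True)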
 
Trying to make such a thing is like building the Tower of Babel. Obviously, it was physically impossible to build a tower tall enough to reach God’s Throne Room, but God still considered it significant enough to destroy, lest the people further magnify their evil. It is possible, though by no means certain, that the time of great distress spoken of by the Scriptures which precedes Christ’s Second Coming will be caused by men creating such AI and God leaving us to suffer the result of our own folly.
 
I said computers can write code. They can.

I said there is a possibility that, as they get more complex, they may write code we don’t want.
Saying they can write code is simple enough. Saying they can write complex code is simple enough. You seem to act like a computer is just going to decide to hack Wall Street for no reason. How do you imagine it’s going to do that besides “code”? A bug? A bug is going to cause it to write code to hack Wall Street? Somehow it will write simpler programs without this issue coming up either during testing (which would likely take years) or during its initial release (which would also last years), not a single issue will arise, and yet somehow it will accidentally create a program that targets the systems Wall Street uses? So somehow this computer is able not only to understand code and its own architecture well enough to program in machine code on its own processors, but it apparently knows the architecture of Wall Street’s systems well enough to exploit them by accident.

All the while, this incredibly complex code escapes the notice of the developers, who would likely be checking this machine’s code since it was built for some other extremely complicated task, and during their years of reviewing and testing they fail to notice such a gaping error. Or they were stupid enough to let it run freely yet smart enough to teach it to code. And we haven’t even discussed the code written by the other developers Wall Street relies on.

Do you see where my incredulity comes from?
 
Saying they can write code is simple enough. Saying they can write complex code is simple enough. You seem to act like a computer is just going to decide to hack Wall Street for no reason.
How do you come up with that? When did I say that?
 
How are they going to write code we don’t want when they presumably haven’t been coded to do so?
But if I write the code, I may program it to do things you don’t want it to do. So you can’t presume that at all.
 
But if I write the code, I may program it to do things you don’t want it to do. So you can’t presume that at all.
Ah, so you want it to be a Stuxnet that writes itself, a worm that took the combined resources and brain banks of both Israel and the United States. And this program is just going to whip that up?
 
Ah, so you want it to be a Stuxnet that writes itself, a worm that took the combined resources and brain banks of both Israel and the United States. And this program is just going to whip that up?
When did I say that? Making up an argument and then attributing it to me is not proper debate.
 
When did I say that? Making up an argument and then attributing it to me is not proper debate.
You say someone is presumably going to write a program which will write a program targeting Wall Street. Stuxnet was a computer worm designed to target Iran’s nuclear systems, and that’s what it took to make it.
 
Do you see where my incredulity comes from?
I certainly don’t. No one and nothing exists in a vacuum. Every entity needs some starting point, human or A.I. What would a feral child come up with? Nothing at all.
 
You say someone is presumably going to write a program which will write a program targeting Wall Street. Stuxnet was a computer worm designed to target Iran’s nuclear systems, and that’s what it took to make it.
Complete non sequitur. Programs have already been written targeting Wall Street. Such programs will presumably become more powerful and capable as time goes on. Computers may be employed to write the code – and the complexity makes unintended consequences more likely – even ignoring the evil intent of the original programmer.
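As a toy illustration of that last point (Python; every name here is invented for the example): a generator whose template is correct for every input its authors tested can still emit valid code with very different behavior on an input they never anticipated, no malice required anywhere in the pipeline.

    # A code generator whose template looks harmless.
    TEMPLATE = "orders_to_cancel = orders[:{limit}]"

    print(TEMPLATE.format(limit="10"))           # intended: cancel ten orders
    print(TEMPLATE.format(limit="len(orders)"))  # also valid code: cancels them all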
 
Complete non sequitur. Programs have already been written targeting Wall Street.
Those programs weren’t written by another program. I would expect code that could write code to hack Wall Street to require, at bare minimum, what Stuxnet had. And even then I doubt they could pull it off within fifty years.
Such programs will presumably become more powerful and capable as time goes on.
And so will security.
Computers may be employed to write the code – and the complexity makes unintended consequences more likely – even ignoring the evil intent of the original programmer.
There it is again: your answer is that the computer may just “code” it up. Somehow this computer will go from hacking Wall Street, which would already require an extremely narrow focus, to growing beyond its specifications into something worse. And these hackers are apparently too stupid to review what their program is doing, even though it would no longer be doing what they told it to do.

You have no idea what it would take, other than “code” and “complexity”. Somehow, programs that can probably manage loops and Hello Worlds will grow to crack the ever-improving security protecting the financial systems of first-world nations by “code”, a skill which only a very small minority of programmers possess, and fewer still with the financial backing it would take to pull this off. This is vague fear-mongering.
 
You will not be able to reason with the super AI. For one, it is completely disconnected from humanity and completely unlike us. There is nothing similar between us.

Second, you would be like an ant compared to a human.

Third, I think the creation of conscious AI will never happen. Consciousness isn’t a gradual thing. It is a leap from nothingness to something. Essentially it is something from nothing. Adding one more line of code will never produce consciousness. Either it was always conscious or it never will be.
 
Third, I think the creation of conscious AI will never happen. Consciousness isn’t a gradual thing. It is a leap from nothingness to something. Essentially it is something from nothing. Adding one more line of code will never produce consciousness. Either it was always conscious or it never will be.
Valid point, but just for the sake of argument let’s assume that you’ve created an AI that could pass the Turing test. Of course that wouldn’t prove that it was conscious. But let’s assume that you really want to know whether your AI is conscious. After all, there might be some things that are ethical to do to a mere machine but that aren’t ethical at all once that machine is conscious. So what could you do to determine whether your “machine” is conscious?

Well one conceivable option might be to test whether or not your AI has free will. If it has free will, then we might logically assume that it’s conscious.

But how do we do that? How do we test whether our AI has free will? It would actually seem to be fairly simple. We present it with something that a conscious mind might find desirable, ohhh I don’t know, perhaps the knowledge of good and evil. We then program our AI with a command strictly forbidding it from doing the one thing necessary to attain that knowledge, like say, eating an apple.

Now if our AI overrides its programming, and does what we specifically programmed it not to do, then we can only assume that it has free will, and if it has free will, then we must also assume that it’s conscious. But an AI with the ability to override its own programming might lead to unintended and detrimental consequences, so steps might need to be taken to lead our AI through its emerging consciousness, and save it from the cognitive dissonance that might otherwise drive it insane.

But hey, who would believe a story like that?
 