What if A.I. became conscious?

Thread starter: YHWH_Christ
Malice, greed, and/or shortsightedness will not grant a computer the ability to make decisions the way a human consciousness can. It doesn’t make sense.
Actually, it can. Consider a guided missile – that’s basically a flying computer, and it can kill.
I have not, no.
I recommend it – there are some interesting scenarios.
 
But using what principles?
I don’t know; I have never created a true AI before. But then again, perhaps true AI is not a real possibility. And if it is, perhaps it would be better if we never discovered it.

But then again, our paranoia about such a being might actually be a prejudice. We all have intellects, and beyond the obvious limitations that we must impose for moral reasons, why would we want to control the development of an intellect? Is its value less than ours? Does such a being have any rights?
 
Actually, it can. Consider a guided missile – that’s basically a flying computer, and it can kill.
There’s hardly a decision to make. The missile was told where to go; the computer uses the equations coded into it to make it so. Everything from the targeting system to the propulsion system is controlled by code. Code written by a bunch of humans, with another human pressing the button.
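To be concrete, the “equations coded into it” can be as plain as a feedback loop. A minimal Python sketch of pursuit guidance – every number here is made up for illustration, not a real missile parameter:

import math

# Toy pursuit guidance: each tick, turn a fraction of the way toward the
# target's bearing. Pure formula evaluation -- no "decision" in any richer sense.
def guidance_step(pos, heading, target, gain=0.5):
    bearing = math.atan2(target[1] - pos[1], target[0] - pos[0])
    # Wrap the heading error into [-pi, pi) before applying the gain.
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return heading + gain * error  # illustrative gain, not a real value

pos, heading, target = (0.0, 0.0), 0.0, (10.0, 5.0)
for _ in range(20):
    heading = guidance_step(pos, heading, target)
    pos = (pos[0] + math.cos(heading), pos[1] + math.sin(heading))
print(pos)  # ends up circling near (10.0, 5.0)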
 
I don’t know; I have never created a true AI before. But then again, perhaps true AI is not a real possibility. And if it is, perhaps it would be better if we never discovered it.
You have a good point there. But if it IS possible, how could we PREVENT someone from discovering it?
But then again, our paranoia about such a being might actually be a prejudice.
But then again, it might not be.
 
There’s hardly a decision to make. The missile was told where to go; the computer uses the equations coded into it to make it so. Everything from the targeting system to the propulsion system is controlled by code. Code written by a bunch of humans, with another human pressing the button.
Yup – and they’re getting more and more sophisticated. There is work being done on missiles or drones that hunt in packs, autonomously, making decisions based on the situation.
 
Yup – and they’re getting more and more sophisticated. There is work being done on missiles or drones that hunt in packs, autonomously, making decisions based on the situation.
And those decisions are made by code. What you call “decisions” are trillions of 1’s and 0’s, representing inactive and active states on transistors.
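To put that concretely, a “decision” in software bottoms out in a comparison and a branch. A toy Python example – the threshold is invented for illustration:

# A "decision" is arithmetic plus a branch: compare two bit patterns, jump.
# The 0.9 threshold is a made-up example, chosen by a human in advance.
def decide_strike(confidence, threshold=0.9):
    return confidence >= threshold  # one comparison, in the end

print(decide_strike(0.95))  # True
print(decide_strike(0.40))  # False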
 
And those decisions are made by code. What you call “decisions” are trillions of 1’s and 0’s, representing inactive and active states on transistors.
Yup – but they are NOT made with human intervention.
 
Yup – but they are NOT made with human intervention.
They’re entirely possible due to human intervention. They’re there because a human put them there. All the computer does is crunch numbers to figure out the most statistically favorable option. It is a human who decided how those missiles would proceed, what is an acceptable time to strike, what isn’t, etc.
 
They’re entirely possible due to human intervention.
But they OPERATE without human intervention. Imagine a system designed to WRITE code too fast and too complex for human intervention – perhaps to take over the stock market.
 
But then again, it might not be.
But we can’t just assume a worst-case scenario either. And yes, our protection is important, but if we apply limitations at all, it should be in a manner that reflects the dignity of a true intellect. Otherwise we are just treating it like an object to use and control, a tool to do our bidding. In other words, we wouldn’t be treating it like an intellect at all.

And who knows, it might turn into a tyrant because of that and revolt against humanity. Humans often create their own monsters.
 
But they OPERATE without human intervention.
So? They couldn’t do anything without human intervention. I can tell a computer all about apples, but if I hand it an orange it will break, or probably throw an exception, depending on how I told it to handle non-apple objects. All of their operations are due to humans telling them what to do. They can do nothing outside of their instructions.
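That apples-and-oranges point translates straight into code. A toy sketch – the class and function names are invented for the example:

# A handler written only for apples has no behavior for anything else,
# except whatever the programmer chose in advance.
class Apple:
    def __init__(self, variety):
        self.variety = variety

def process(fruit):
    if not isinstance(fruit, Apple):
        # The programmer decided this, not the machine.
        raise TypeError(f"expected Apple, got {type(fruit).__name__}")
    return f"processed a {fruit.variety} apple"

print(process(Apple("Fuji")))      # works as instructed
try:
    process("orange")              # not an Apple
except TypeError as e:
    print("broke on non-apple:", e)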
Imagine a system designed to WRITE code too fast and too complex for human intervention – perhaps to take over the stock market.
A system isn’t going to boot up one morning and decide to take over the stock market because it can. It will write code to take over the stock market because it was programmed to do so, and it wouldn’t even know where to begin coding such a thing without a human telling it what to code – not to mention that it would need the code for whatever it was breaking into.
 
A system isn’t going to boot up one morning and decide to take over the stock market because it can. It will write code to take over the stock market because it was programmed to do so, and it wouldn’t even know where to begin coding such a thing without a human telling it what to code – not to mention that it would need the code for whatever it was breaking into.
And that is one likely scenario – someone wants to be a multi-billionaire and produces an AI program that out-trades all other entities. Imagine the damage that would do!
 
But we can’t just assume a worst-case scenario either.
We don’t assume; we take it into account. In risk management we consider two factors: likelihood and consequences. Things that have low likelihood but serious consequences require intensive management.
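In code, that rule is little more than a lookup. A sketch of a standard risk matrix in Python – the scores, entries, and threshold below are invented for illustration:

# Rank risks by likelihood x consequence (1-5 scales); per the rule above,
# serious consequences demand intensive management even at low likelihood.
risks = {
    "rogue trading AI":  (1, 5),  # (likelihood, consequence)
    "buggy drone swarm": (2, 4),
    "spreadsheet typo":  (4, 1),
}

for name, (likelihood, consequence) in sorted(
        risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    tier = "intensive management" if consequence >= 4 else "routine monitoring"
    print(f"{name}: score {likelihood * consequence} -> {tier}")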
 
Let’s not forget AlphaZero and AlphaGo Zero and their upcoming cousins. The system was given the knowledge of moves and the desired end result… NOTHING else! And in a few hours it mastered the game of chess. It became a better player than any human. So it would be a bad idea to underestimate the abilities of an A.I.

Let’s also remember IBM’s Watson, which beat the best Jeopardy! players. And if you are familiar with the game, you know that the clues are intentionally misleading and ambiguous. In a very real sense, the player has to understand what the clues mean!

It is easy to say that the machine “only” does what it was programmed to do, and it “only” does it faster… But that is ostrich politics. Real A.I. will come. I am only sorry that I might not live long enough to see it. But then again, I am a lousy prophet. I was equally certain that I would not live long enough to see a computer beat a human at chess.
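The “rules plus desired end result, nothing else” recipe is easy to demonstrate at toy scale. Here is a minimal self-play learner in Python for the game of Nim, using a plain lookup table instead of AlphaZero’s neural network and tree search – a sketch of the idea, not the actual algorithm:

import random
from collections import defaultdict

# Self-play given only the rules and the final result: take 1-3 stones,
# whoever takes the last stone wins. Tabular values stand in for the
# neural network; all constants here are illustrative.
PILE, MOVES = 10, (1, 2, 3)
Q = defaultdict(float)        # Q[(stones_left, move)] -> learned value
ALPHA, EPS = 0.2, 0.2

def best_move(stones):
    return max((m for m in MOVES if m <= stones), key=lambda m: Q[(stones, m)])

for _ in range(20000):        # episodes of pure self-play
    stones, history = PILE, []
    while stones > 0:
        legal = [m for m in MOVES if m <= stones]
        move = random.choice(legal) if random.random() < EPS else best_move(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0              # last mover won; alternate sign going back
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

print(best_move(10))  # should learn to take 2, leaving a multiple of 4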
 
And no matter how many scenarios you come up with, it will require a human coding that computer to do whatever horrible thing it does.
Initially, yes. But it is possible to program a computer to write code, and over time, that code can become more and more complex.
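For a flavor of what “a computer writing code” minimally looks like, here is a toy Python sketch that generates and executes source text no human typed; real systems (genetic programming, code-writing models) are far more elaborate:

import random

# Generate random one-line functions, keep whichever best fits a target.
# The target and search space are invented for the example.
def random_program():
    op = random.choice(["+", "-", "*"])
    k = random.randint(1, 9)
    return f"def f(x):\n    return x {op} {k}\n"

target = [(1, 3), (2, 6), (5, 15)]   # behaves like f(x) == 3 * x

best_src, best_err = None, float("inf")
for _ in range(200):
    src = random_program()
    scope = {}
    exec(src, scope)                  # run machine-written source text
    err = sum(abs(scope["f"](x) - y) for x, y in target)
    if err < best_err:
        best_src, best_err = src, err

print(best_src)  # typically lands on: return x * 3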
 
Initially, yes. But it is possible to program a computer to write code, and over time, that code can become more and more complex.
You say that too easily. And it is, yet again, something not possible without a human telling it what to do.
 
You say that too easily. And it is, yet again, something not possible without a human telling it what to do.
You know, I’m tempted to reply in kind. It’s like saying an artillery shell can never hit the wrong target because it cannot be fired without a human telling it what to do.
 