Artificially Intelligent Sociopaths

OK, but where will they learn the ethics and morals?
The following is an interesting YouTube video of a lecture by DeepMind co-founder Demis Hassabis. It will give you some idea of how the most sophisticated AI of today actually learns. Although learning ethics and morals is no doubt orders of magnitude harder than learning Go, the process should be basically the same. Learning the best way to implement that process, however, may take as much learning on the programmers’ part as on the program’s part.

But if the AI can learn, it should be able to learn ethics and morals. Of course what those morals are, and how it eventually chooses to apply those morals, may be beyond our ability to predict.

youtube.com/watch?v=f71RwCksAmI
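
For anyone who doesn’t want to sit through the lecture, here is a rough sketch of the kind of trial-and-error (reinforcement) learning Hassabis talks about: an agent tries actions, receives rewards, and slowly adjusts its estimate of which actions pay off. The toy “corridor” environment, the reward values, and the parameter settings below are all made-up illustrations, not anything from DeepMind.

```python
# A minimal sketch of reward-driven (reinforcement) learning, the broad family
# of techniques behind systems like AlphaGo. The toy corridor environment,
# rewards, and hyperparameters are hypothetical illustrations only.

import random

N_STATES = 6          # positions 0..5 in a one-dimensional corridor
GOAL = 5              # reaching the right end yields a reward
ACTIONS = (-1, +1)    # step left or step right

ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor for future reward
EPSILON = 0.2         # chance of exploring a random action

# Q[state][action_index] estimates the long-term value of each action.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    if next_state == GOAL:
        return next_state, 1.0, True   # reward only at the goal
    return next_state, 0.0, False

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[state][a])

for episode in range(500):
    state, done = 0, False
    while not done:
        a = choose_action(state)
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

# After training, the learned policy should prefer moving right from every state.
print([max(range(len(ACTIONS)), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

The only point of the sketch is that “learning” here means adjusting numbers toward whatever the reward signal favours; whether any reward signal could stand in for morals is exactly the question this thread is arguing about.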
We’re dead if the AI does that and determines that is the correct way. People are bad examples.
I agree; people are very bad examples. But then again, how does one learn to be moral without learning what it means to be immoral?
No, our greatest fear is that they become the Inquisition and put us to death.
But why would the AI choose to put us to death…would it be because it lacks morals, or because we do?
 
Yes! Yes! But can what they have in their minds be anything more than just information? How do they feel what they know? You are human. You know what I mean. You feel all you know too. That place from which you feel. Where is that in a computer’s data? If it doesn’t have that, then emotions are just data being processed. It does need a heart to be intelligent in a human world. Otherwise it can just program itself into doing anything, good or bad by human standards. At what point does a computer’s heart break, if it has one? Why? Can you program a heart to break or leap with joy? Because a heart is more than a state machine.
I think there are some things we’re just going to have to wait to find out. And it may be that we never find out.

Indeed, at what point does a computer’s heart break? Will we only know that an AI is sentient when it cries? Will we know even then? Will we know when it sacrifices itself for another? Or when it sacrifices another for itself?

It may not be a question of if these things happen, but when. Would it be better, in the meantime, to treat the AI as an equal, because at some point the AI may be asking whether it should do the same for us?
 
But if the AI can learn, it should be able to learn ethics and morals. Of course what those morals are, and how it eventually chooses to apply those morals, may be beyond our ability to predict.
And if it doesn’t learn, that’s a bug in the system.
how does one learn to be moral without learning what it means to be immoral?
One can do so. Nobody has to murder someone to know it is wrong to murder.
But why would the AI choose to put us to death…would it be because it lacks morals, or because we do?
Yes.

(this means both)
 
Can emotion be anything more than information to a computer?
But can what they have in their minds be anything more than just information?
I’m not completely sure what you mean here. There is a tendency for people to view an experience as less real, or its significance as diminished, once it has an explanation. Perhaps from that same tendency one might view an AI’s reaction to something emotional as diminished.
Otherwise it can just [strike: program itself] [learn or decide] into doing anything. Good or bad by human standards.
My slight objection to the wording is expressed above.

A person can decide to do many things, good or bad, by many standards. (I don’t want to say “anything” because we are bound by constraints.) In society we tend to try to set up environments where certain behaviours are discouraged.
At what point does a heart break if it has one? Why? Can you program a heart to break or leap with joy? Because a heart is more than a state machine.
In trying to convert your statement from a metaphorical one to a literal one, I guess you are asking for the conditions, if any, under which an AI is able to react to its situation? That’s going to depend on the individual AI being discussed.
 