Artificially Intelligent Sociopaths

  • Thread starter: James_Tyler
Status: Not open for further replies.

James_Tyler

Guest
Wouldn’t a fully realized artificially intelligent machine be a complete sociopath or something?

A machine would have no real emotions but only be able to simulate emotions based on programming. An emotion would just be data. Data is not emotion. Emotions are feelings. Feelings are not data.

I can only see an attempt to create a sociopath. It might appear friendly but it isn’t real. It feels nothing.
 
Well, this goes back to our Judaeo-Christian understanding of the uniqueness and value of the human being. It’s obviously emphasized in the writings of early theologians and in Sacred Scripture and Tradition.

A robot cannot have true, living feelings and emotions as we do. It cannot worship God, it cannot find itself sorrowful or joyful as we can. Only a human being can do that (or an angel, but that’s seemingly irrelevant).

So robots are not intended to be members of society in the sense of the loving, hopeful community we strive for, not just as Christians but as humans. They may be suited to work and to managing certain things, but not to humane interactions.
 
“Open the pod bay doors, HAL.”

“I’m sorry, Dave. I’m afraid I can’t do that.”
 
Wouldn’t a fully realized artificially intelligent machine be a complete sociopath or something?

A machine would have no real emotions but only be able to simulate emotions based on programming. An emotion would just be data. Data is not emotion. Emotions are feelings. Feelings are not data.

I can only see an attempt to create a sociopath. It might appear friendly but it isn’t real. It feels nothing.
It need not be fully realized (here I am assuming that “fully realized” refers to “strong AI”).

One of the first things covered when I took a course on AI in college is that success criteria and metrics need to be picked very carefully. One example we went through was a vacuum cleaner. If it is trying to maximize the amount of dirt it picks up, one way to maximize that measurement is to keep dumping the dirt it has collected back onto the floor and picking it up again. We quickly find that there are quite a number of conditions we actually want the vacuum cleaner to maximize, and other actions we would like minimized.
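A toy sketch of that failure mode (the vacuum, the dirt counter, and the action names are all invented here): if the success metric is simply the number of pick-up events, re-dumping collected dirt inflates the score without making the floor any cleaner.

```python
# Toy illustration (hypothetical agent): the metric only counts pick-up
# events, so dumping collected dirt back on the floor and picking it up
# again maximizes "success" without cleaning anything extra.

def run_episode(actions, floor_dirt=5):
    held = 0    # dirt currently inside the vacuum
    score = 0   # naive success metric: number of pick-up events
    for action in actions:
        if action == "pick_up" and floor_dirt > 0:
            floor_dirt -= 1
            held += 1
            score += 1
        elif action == "dump" and held > 0:
            floor_dirt += held   # dirt goes straight back onto the floor
            held = 0
    return score, floor_dirt

honest = ["pick_up"] * 5
gaming = (["pick_up"] * 5 + ["dump"]) * 3 + ["pick_up"] * 5

print(run_episode(honest))  # (5, 0): clean floor, modest score
print(run_episode(gaming))  # (20, 0): same clean floor, four times the "success"
```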

More to the point of what you are asking, imagine a machine that tries to find an optimal path from point A to point B. Such a path could take a vehicle controlled by this algorithm through someone’s living room or over a pedestrian. At the end of the day, such paths can count as optimal solutions if the casualties are not part of the machine’s own evaluation.
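A sketch of that path-planning point, with invented routes, weights, and costs: under a distance-only objective the route through the living room is "optimal"; once casualties and damage enter the evaluation, it no longer is.

```python
# Hypothetical routes and costs: what counts as "optimal" depends entirely
# on what the objective function is told to care about.

routes = {
    "through_living_room": {"distance_m": 120, "pedestrians_hit": 1, "property_damage": 1},
    "around_the_block":    {"distance_m": 400, "pedestrians_hit": 0, "property_damage": 0},
}

def cost_distance_only(route):
    return route["distance_m"]

def cost_with_safety(route, harm_weight=1_000_000):
    # Casualties and damage only matter if they appear in the evaluation.
    harms = route["pedestrians_hit"] + route["property_damage"]
    return route["distance_m"] + harm_weight * harms

print(min(routes, key=lambda name: cost_distance_only(routes[name])))  # through_living_room
print(min(routes, key=lambda name: cost_with_safety(routes[name])))    # around_the_block
```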
 
Wouldn’t a fully realized artificially intelligent machine be a complete sociopath or something?

A machine would have no real emotions but only be able to simulate emotions based on programming. An emotion would just be data. Data is not emotion. Emotions are feelings. Feelings are not data.

I can only see an attempt to create a sociopath. It might appear friendly but it isn’t real. It feels nothing.
I would not classify it that way, since an AI has no real concern for itself either. It simply executes its programming.
 
Wouldn’t a fully realized artificially intelligent machine be a complete sociopath or something?

A machine would have no real emotions but only be able to simulate emotions based on programming. An emotion would just be data. Data is not emotion. Emotions are feelings. Feelings are not data.

I can only see an attempt to create a sociopath. It might appear friendly but it isn’t real. It feels nothing.
Feeling is the result of the processing of information in our brains, experienced by our minds. We can then decide in a situation using our minds. Our minds then inform our brains so we can act. We can design a machine which can act based on a set of inputs, but that is simply different from us, since a machine cannot experience and decide.
 
Take for instance autonomous vehicles. They are simply machines that have some ability to drive your car for you. But, you wouldn’t consider them to be sociopaths even if they mistakenly drove you off a cliff.
 
Wouldn’t a fully realized artificially intelligent machine be a complete sociopath or something?

A machine would have no real emotions but only be able to simulate emotions based on programming. An emotion would just be data. Data is not emotion. Emotions are feelings. Feelings are not data.

I can only see an attempt to create a sociopath. It might appear friendly but it isn’t real. It feels nothing.
Well, you could program limitations into the machine’s capabilities. Instruct it in a moral code.
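One minimal way to read "program limitations into its capabilities" is a hard rule layer that vetoes certain actions before they are carried out, no matter what the planner proposes. The rule list and action names below are purely illustrative.

```python
# Illustrative only: a fixed veto list checked before any proposed action is taken.
FORBIDDEN = {"cross_sidewalk", "exceed_speed_limit", "ignore_stop_signal"}

def constrained(proposed_action, fallback="stop_and_wait"):
    # The "moral code" here is nothing more than a hard veto on listed actions.
    return fallback if proposed_action in FORBIDDEN else proposed_action

print(constrained("turn_left"))       # turn_left
print(constrained("cross_sidewalk"))  # stop_and_wait
```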
 
“Open the pod bay doors, HAL.”

“I’m sorry, Dave. I’m afraid I can’t do that.”
Heh. That was a good movie/book. The author was notoriously anti-theist though.
 
Sociopaths cannot mend themselves. To heal them, you must act on them with love.
 
You cannot program empathy into an AI.

The only way to obtain empathy is suffering. AI cannot suffer.
 
You cannot program empathy into an AI.

The only way to obtain empathy is suffering. AI cannot suffer.
Depending on the type of AI, its behaviour might come more from training than from programming. You can have it try to minimize certain factors when it comes to interacting with people. Unfortunately, all the ways in which a person can be harmed are not necessarily known ahead of time, so ongoing training may be necessary as new scenarios present themselves.
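A toy sketch of "minimize certain factors" as a penalty term in a training objective, with the factor list growing as new harm scenarios are discovered; the names and weights are assumptions for illustration only.

```python
# Hypothetical training objective: base task error plus weighted penalties
# for every known "harm factor". New factors can be added as they are found.

harm_factors = {"collision_risk": 10.0, "startles_pedestrian": 3.0}

def loss(task_error, harm_scores):
    penalty = sum(harm_factors.get(name, 0.0) * score
                  for name, score in harm_scores.items())
    return task_error + penalty

print(loss(1.2, {"collision_risk": 0.1}))          # 2.2
# A new scenario turns up; training continues with an extra factor.
harm_factors["blocks_wheelchair_ramp"] = 5.0
print(loss(1.2, {"blocks_wheelchair_ramp": 0.4}))  # 3.2
```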
 
Depending on the type of AI, its behaviour might come more from training than from programming. You can have it try to minimize certain factors when it comes to interacting with people. Unfortunately, all the ways in which a person can be harmed are not necessarily known ahead of time, so ongoing training may be necessary as new scenarios present themselves.
There’s a problem: The AI cannot suffer. As a result, there is data that cannot be obtained by ongoing training. That missing and unobtainable (by the AI) data is key to the AI knowing empathy.
 
There’s a problem: The AI cannot suffer. As a result, there is data that cannot be obtained by ongoing training. That missing and unobtainable (by the AI) data is key to the AI knowing empathy.
Yes, I agree. Without real feelings, every decision made by an AI must be made without real feeling. If it makes a good choice, it is not because it felt something for you. Machines don’t have a “heart”. It would be hard to trust even the smartest machine if it had no “heart”.
 
The notion that an AI can’t be empathetic is incredibly naive. Especially considering the fact that you might just be one.

So here’s an interesting question…would an artificial intelligence realize that it’s an artificial intelligence? Would a computer simulation realize that it’s a computer simulation?

At some point, yes, it probably would. Question: Are we approaching that point?
 
There’s a problem: The AI cannot suffer. As a result, there is data that cannot be obtained by ongoing training. That missing and unobtainable (by the AI) data is key to the AI knowing empathy.
To get into the specifics, I think it matters what type of AI one is speaking of and what types of actions the AI can take. Most AIs are special-purpose and applied rather narrowly, instead of general and applied to a wide range of scenarios.

Taking the real-world problem of autonomous vehicles, the AIs have the ability to recognize other vehicles and other people. Getting the AI to not run over people and not run into other vehicles doesn’t require that the AI have an understanding of suffering. The distance to other vehicles and people is quantifiable. Weights can be assigned to the importance of not breaching a certain perimeter around either. If humans can stay in the AI’s feedback loop (even if it is as simple as pressing an “I didn’t like that” button, or the AI being able to see facial expressions), then there is some chance of it reacting to negative feedback, once again without an understanding of suffering. AIs tend to only be given liberty to act within limited domains.
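A sketch of that idea with made-up numbers: breaching a safety perimeter around a person or vehicle carries a weighted cost, and a human "I didn’t like that" signal simply increases the relevant weight; at no point does the system need any notion of suffering.

```python
# Illustrative weights: penalise coming inside a safety perimeter around
# people and vehicles; a human feedback signal raises the cost next time.

weights = {"person": 50.0, "vehicle": 20.0}    # cost per metre of perimeter breach
PERIMETER_M = {"person": 2.0, "vehicle": 1.0}  # keep-out distance per object type

def proximity_cost(detections):
    total = 0.0
    for kind, distance_m in detections:
        breach = max(0.0, PERIMETER_M[kind] - distance_m)
        total += weights[kind] * breach
    return total

def negative_feedback(kind, factor=1.1):
    # Human-in-the-loop "I didn't like that" button: breaches of this kind cost more.
    weights[kind] *= factor

print(proximity_cost([("person", 1.5), ("vehicle", 3.0)]))  # 25.0
negative_feedback("person")
print(proximity_cost([("person", 1.5), ("vehicle", 3.0)]))  # ~27.5
```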
 
Is there a difference?
This sounds like it touches on the weak AI/strong AI debate, where strong AI is an AI that is viewed as really thinking/feeling, while weak AI is viewed as an inanimate object that is only acting as though it is thinking but really isn’t. Over time, criteria have been defined for what constitutes really thinking; as machines become able to do those things, the criteria change.

AI researchers tend to not care if the AI is viewed as really thinking or only acting like it is thinking as long as it is getting its job done and meeting some acceptance criteria.
 
The notion that an AI can’t be empathetic is incredibly naive. Especially considering the fact that you might just be one.
An AI can only approximate the appearance of sympathy or empathy. An AI will respond objectively according to its programming and will never “know” or “feel” the underlying quality or characteristic of an event.
So here’s an interesting question…would an artificial intelligence realize that it’s an artificial intelligence? Would a computer simulation realize that it’s a computer simulation?
Yes, you’d have to include that information with its programming, but the AI still won’t “know” or “realize” that it is an artificial intelligence or what it means to be an artificial intelligence; that kind of knowing would require sapience.
 