Should robots be entitled to the same rights as humans?

A good direction was started by Pallas Athene: if we perceive a human-like quality in a machine, it should be treated more humanely. Extending rights to actors of human reactions is done not for the sake of the machine, but for the sake of those who see them as human-like.

A toddler or a mentally handicapped person may not grasp the nuance that machines are not people even when they look like people. We should build a culture sensitive enough that no such person ever witnesses a human-like creature being mistreated, or even treated differently in the slightest way.

To prevent this, I believe there needs to be legal control over how human-like these robots can be. All the advancement of AI can come in the form of a box. If manipulators for tool use and cleanup are necessary, give them that; but there is really no need to make them human-like in appearance, and quite a lot of other problems start once they are sexualized.
 
I am currently in an online argument. My opponent is arguing that if robots could reach an intelligence level, say, of C-3PO from Star Wars, then they should be considered intelligent and rational beings endowed with the same rights as us. Any arguments I might want to consider in my rebuttal?
There was an episode of Batman: The Animated Series that dealt with this. I’m not sure what to say; it’s just the only thing that comes to mind.
 
This is a very interesting question.

Suppose that the robot is an android (a humanoid robot) which exhibits the “usual” signs of “pain”. When you cut it, it bleeds (artificial blood maybe, but you are not aware of it); when you beat it, a visible mark is left. And the process just gets refined… at which point would you accept that the “simulation” has turned into “reality”?

By the same token, if that assumed android is actually a very talented actor (unbeknownst to you) who “simulates” pain very convincingly… would you stop beating him and accept that he is actually in pain, and not just simulating it? (There are some humans who do NOT feel pain… a very dangerous attribute.)

In other words: “what separates the emulation from the real McCoy”? Or putting it differently: “if it looks like a duck, walks like a duck, quacks like a duck, tastes like a duck… on what grounds would you declare: this is not a real duck, just an emulation of a real duck”?
The difference would be if, despite appearances, you know it is not a duck. If you don’t know whether it is human or not, you should treat it as if it were, because it could be. This is, by the way, why abortion is still wrong even if you don’t know whether it is human or not. However, a robot could never actually be human, no matter how human-like it is. It could not be human unless it actually is human, born of a woman, as it were. This is just common sense.

We feel pain because it helps us to stay alive. But a robot was never alive to begin with. Things that are not alive do not feel pain. They are simply mechanistic machines. They may have some value for what they are. But they do not have souls. They do not have the breath of life.
 
The difference would be if, despite appearances, you know it is not a duck.
The question is: “how do you know that it is NOT a duck”? It is not obvious. You can only go with the appearances. When you see the actor exhibiting signs of pain, do you accept that he is in pain?
 
The question is: “how do you know that it is NOT a duck”? It is not obvious. You can only go with the appearances. When you see the actor exhibiting signs of pain, do you accept that he is in pain?
I already said that if we don’t know whether it is human or not, then treat it as human. But in what situation would we own a robot and not know whether it is human or not? Certainly, you won’t find me beating robots or trying to make them artificially squeal in pain, unless it were self-defense. If I knew it was a robot then no, I would not think it was actually in pain, but I would not enjoy such facsimiles of pain either. I saw a documentary one time that said people generally find robots designed to look like humans creepy, and prefer robots to look somewhat different anyway. If you have something artificial that looks like you but is different in essence, it strikes you as odd and strange. Animated mannequins are still mannequins. A robot, in a real sense, is still a puppet manipulated by human programmer-puppeteers. Why do people find the ventriloquist’s puppet creepy? I don’t think humans would ever accept robots as one of them.
 
The question is: “how do you know that it is NOT a duck”? It is not obvious. You can only go with the appearances. When you see the actor exhibiting signs of pain, do you accept that he is in pain?
So if it walks like a human and talks like a human then it’s a human even though it’s really a robot? Something sounds extremely wrong with this scenario. But I guess if you only judge a book by its cover, this is sound reasoning. :cool:
 
I am currently in an online argument. My opponent is arguing that if robots could reach an intelligence level, say, of C-3PO from Star Wars, then they should be considered intelligent and rational beings endowed with the same rights as us. Any arguments I might want to consider in my rebuttal?
I’d look at a few assumptions that are being considered:
  1. C-3PO exhibits just about every sentient trait.
  2. His construction is more than the logic programs we are familiar with (not just 1s and 0s), but some sort of artificial neural simulation of a human brain.
  3. C-3PO and R2-D2, through outside interference such as a “bolt”, have been shown to be dangerously reprogrammable or otherwise outside their own control.
With these sorts of assumptions, I don’t see any argument against saying such a being should be considered sentient. A sentient being should have rights. With different needs from a human, any non-human sentient being would have somewhat different rights than a human, but they could be very nearly equivalent.

Then there is the matter of trust to move freely within society. Having not been nurtured and raised like a human, and being subject to external reprogramming, it cannot be assumed to be as trustworthy as a human; it could at any time become very destructive. Yes, we know humans can be destructive too, but as a society we have worked out how much freedom to allow so as to accommodate humans. I don’t think our society need accept a whole new set of unknown risks.

I would say that yes, they should have about the same standing as Guantanamo Bay inmates: allowed to exist, but placed in a secure condition apart from society.
 
I would say that yes, they should have about the same standing as Guantanamo Bay inmates: allowed to exist, but placed in a secure condition apart from society.
Hah. This is kind of amusing but also makes some sense. Reading that conjures images of releasing an unknown entity into society, where we don’t know if it will shake our hand or cut it off. A machine that can think, but we just don’t know what it is thinking. This, to me, is the real reason making a true AI robot makes little sense. What would be the point? We can already create beings that have intelligence; we call them our children.

The artificial beings who talk and act like us we can envision as being created by some lonely mad scientist, like Frankenstein. But we cannot envision them as being created by a loving family.

Even Dr. Soong, the supposed creator of Commander Data on Star Trek, turns out to have been a crazy old kook. In one episode we find out that after his wife died, he created a robot of her so he could continue on, I guess, as if nothing had ever happened. It’s kind of like replacing your wife with an intelligent toaster oven. I suppose some people could see the logic in that. 😃
 
I would say that yes, they should have about the same standing as Guantanamo Bay inmates: allowed to exist, but placed in a secure condition apart from society.
I hope this is a joke. :eek:
 
I hope this is a joke. :eek:
Not at all. It all makes very good sense, if you allow that any such detention center is humane.

I’m also standing by my earlier post that things like these should not be legal to create.

If you’re thinking that Guantanamo Bay is an inhumane place of torture, then that is not my meaning at all.
 
Reading that conjures images of releasing an unknown entity into society, where we don’t know if it will shake our hand or cut it off.
Yes, I think you’ve captured the idea of why sentient-like beings are not just to be set free without knowing all the risks. Now, is there a way we could prove that any such robot could be counted on to become part of society, or at least be released under the control of a human citizen? I think this is almost as difficult a feat as creating one in the first place.
 
Yes, I think you’ve captured the idea of why sentient-like beings are not just to be set free without knowing all the risks. Now, is there a way we could prove that any such robot could be counted on to become part of society, or at least be released under the control of a human citizen? I think this is almost as difficult a feat as creating one in the first place.
I don’t think there is any way to do that and have it still be considered truly autonomous. What you are talking about is having some control over it. But the purpose of a true AI is to be autonomous and to think for itself: it can think for itself and can enjoy a cup of tea for its own sake, not because it was programmed to like it.

In order to have control over it, you would have to program it to behave a certain way, and in such a way that it could not override its programming. This leads to the first rule of robot AI taught in computer science, Asimov’s First Law: do no harm to humans. But of course enforcing such rules means they are not truly autonomous, at least to the degree we are talking about here, where we are considering giving them the rights humans have.
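To make that tension concrete, here is a minimal sketch in Python of what “a rule it cannot override” might look like. The names (policy, guarded_step, harms_human) are hypothetical, not from any real robotics library; the point is only that the autonomous part proposes actions while a guard outside its reach filters them before anything executes:

```python
# Hypothetical sketch: the autonomous "policy" proposes actions freely,
# but a hard-coded guard it cannot modify filters them before execution.

def policy(observation):
    # Stand-in for whatever the AI autonomously decides to do.
    return {"action": "fold_laundry", "harms_human": False}

def guarded_step(observation):
    # The guard lives outside the policy: whatever the policy proposes,
    # a harmful action is discarded before it is carried out.
    proposed = policy(observation)
    if proposed.get("harms_human"):
        return {"action": "do_nothing", "harms_human": False}
    return proposed

print(guarded_step({"room": "laundry"}))
# -> {'action': 'fold_laundry', 'harms_human': False}
```

The interesting behavior lives in the policy, but the final word always belongs to code the policy cannot touch, which is exactly why such a robot is never fully autonomous.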

You see, we want our robots to be intelligent enough to do our laundry, but not so intelligent that they start a revolt.

The whole point of having robots is that they are not alive. We don’t feel bad for them when they slavishly build our cars. They don’t feel pain or get depressed. They are machines. If we want to talk to someone or have a relationship, we don’t build a robot; we make a friend or start a family.
 
Well, I agree that we don’t want robots that are there just for their own amusement.

I was thinking that, just as a human can be remanded to the custody of another, an AI that was well tested and shown to benefit society might be also; not that the person with that responsibility would have total control. Still, on the whole I think this is really a stretch, maybe more than actually developing such a robot.

I agree the target is freeing man, not freeing machines, and we should carefully restrict the development of such AI; thankfully we are not so close that we really need such laws yet.
 
I already said that if we don’t know whether it is human or not, then treat it as human.
Yes, we already agreed about this. 🙂 The question is: “HOW” do we know whether that being IS a human or not? And that brings up the fundamental question: “just WHAT is a human”? And why is being a “human” so special? The Robin Williams movie “Bicentennial Man” comes to mind.
But in what situation would we own a robot and not know whether it is human or not?
We don’t talk about ownership. Let’s just say that you encounter a “someone” or a “something” which acts just like a human. Maybe you are curious whether that being is a robot or not. The difference could be important: if it is a human, you will accept it as one; if not, you will consider it a “pseudo-human”. As you said, you would not torture it… but that is a far cry from accepting it.

What options do you have? You can’t take a scalpel and cut it up, since it MIGHT be a real human, and then you could only say: “oops… sorry about that”. Maybe you take an X-ray and see some metallic parts… but he might have had some transplants. The building blocks are not relevant… a “human” can have artificial organs or limbs.

The only surefire way is to examine how that entity behaves.
If you have something artificial that looks like you but is different in essence, it strikes you as odd and strange.
Ah, the good old “essence”. What is it? I would really like an explanation of the “essence” of a human being. I am aware of the generic definition: “something that makes a thing what it is”. But this needs to be translated into particulars.
I don’t think humans would ever accept robots as one of them.
If I could ever meet R. Daneel Olivaw, I would accept him (or “it”) as my brother in the family of sentient, intelligent beings.
This leads to the first rule of robot AI taught in computer science, Asimov’s First Law: do no harm to humans.
You got that right! Wouldn’t it be nice if humans had the same built-in “rule” in them?
The whole point of having robots is that they are not alive.
Ah, the “alive” argument. A mosquito is “alive”, and how do we treat it? A healthy “smack” comes to mind. Why is “biologically alive” considered important while “intellectually alive” is shrugged off?
 
Do they get paid for their labor? Following your logic, we have enslaved them already.

However, according to my logic, one can’t enslave robots. They aren’t living creatures.
I agree with you, Christine. I was making the point that robots are not alive and don’t have minds.
 
Hey, I *like* cats!

Cats rule, dogs drool 😉
I like cats the best too, but dogs are sweet creatures. Both cats and dogs have their own natures, unlike robot “nature”, since robots are not natural.
 
Why is “biologically alive” considered important while “intellectually alive” is shrugged off?
I do not share your materialistic reductionist assumptions. Not everything reduces to matter. Do you consider a computer program to be alive? What do you mean by “intellectually alive”?
 
With true AI being darn near impossible for us to create using our intelligent minds, how many people actually believe that our own minds were the product of chance? If we can’t do it with reasoning, how could it happen on its own without an intelligent mind behind it?

If what materialists believe is true, that we can get a whole heck of a lot from nothing (the universe, our minds, love, etc.), then I think I will quit my job and figure out how this works so I can take advantage of it. Here I thought I had to work for a living when I could just be getting something from nothing. 😃
 