Artificially Intelligent Sociopaths

  • Thread starter: James_Tyler
Why wouldn’t an AI be able to suffer?
Because we as human beings don’t understand suffering. As a result, we cannot program that understanding of suffering into an AI.
Is there a difference?
Yes. Big time.

All you have to do is figure out why people should not be sociopaths, and when you understand that, you can understand why AI is sociopathic.
 
An AI can only approximate the appearance of sympathy or empathy.
How do you know that an adequately sophisticated AI can’t feel sympathy or empathy? How do you know that?
Yes, you’d have to include that information with its programming (would an artificial intelligence realize that it’s an artificial intelligence?)
Why would you have to include that information in the programming?
but the AI still won’t “know” or “realize” that it is an artificial intelligence or what it means to be an artificial intelligence;
At some point the AI should realize that it’s an AI, because the underlying framework of its existence is algorithmic. It should even be able to figure out what those algorithms are.

Oddly enough, at its core, our world appears to be algorithmic. Which leads to the inevitable question…am I an AI?
that kind of knowing would require sapience.
What’s preventing an AI from being sentient?
 
Because we as human beings don’t understand suffering. As a result, we cannot program that understanding of suffering into an AI.
That’s the beauty of “Deep Learning”: you don’t have to understand it to program it. You simply allow the program to learn it on its own.

Which may be exactly what you’re doing now.
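As a rough sketch of that “learn it, don’t program it” idea (assuming only numpy; the network size, learning rate, and the XOR task are arbitrary choices made purely for illustration), nobody writes the rule down anywhere; the network infers it from examples:

```python
# Minimal "learned, not programmed" sketch: a tiny network picks up XOR from data.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the XOR of each pair. The rule itself is never coded.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small two-layer network with random initial weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # tends toward [[0], [1], [1], [0]] without an explicit XOR rule
```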
All you have to do is figure out why people should not be sociopaths, and when you understand that, you can understand why AI is sociopathic.
If you can figure it out, why can’t an AI figure it out?
 
That’s the beauty of “Deep Learning”: you don’t have to understand it to program it. You simply allow the program to learn it on its own.

Which may be exactly what you’re doing now.
The problem is that programmers routinely write buggy code. Software quality is usually poor.
If you can figure it out, why can’t an AI figure it out?
Bad software quality.

Every discussion on AI assumes the program was written properly with zero bugs.
 
Its creator lacks the principal ingredient for sentience, namely, a soul. They have one, but are unable to give it to another.
Therein lies the theistic argument, because the AI lacks a soul. But why is a soul necessary for sentience?

It’s one thing to assert that it’s necessary, it’s another thing to explain why it’s necessary.
 
Therein lies the theistic argument, because the AI lacks a soul. But why is a soul necessary for sentience?

It’s one thing to assert that it’s necessary, it’s another thing to explain why it’s necessary.
So, if I cannot explain its necessity, it is not true? What is the problem with that conclusion?

And, since the principle of life, named a soul, has been the explanation of life vs. non-life since the ancient philosophers, why does the denier get out of the obligation to explain the lack of necessity?
 
The problem is that programmers routinely write buggy code. Software quality is usually poor.
You’re assuming that the program must be excessively complex, when in fact the opposite may be true. It may be excessively simple. All that one may need to do is set out the initial conditions and the algorithms by which those conditions evolve.

Physicists currently hope to condense all the laws of nature into an equation no more than one inch long. That may be all that you need, to produce everything.
 
You’re assuming that the program must be excessively complex, when in fact the opposite may be true. It may be excessively simple. All that one may need to do is set out the initial conditions and the algorithms by which those conditions evolve.
It has to be complex.

AI will have to operate under ethical limitations; otherwise, it is evil.
If ethical decisions were simple, we’d all be on the same page, which we’re not.

How would you determine that the program has been coded properly? You test it, right? So you test it against the requirements and see if there are any bugs.

So after it runs for a while, you test it again. Oops. Bugs. The program changed itself to something that is wrong. What now? Reprogram? Will it let you?

Hal? Open the pod bay doors…
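To make the retest point concrete, here is a toy sketch (the “door controller” function, the requirement, and every name in it are invented purely for illustration, not taken from any real system): the same requirement checks that pass at release can be re-run after the system changes, and a behaviour change shows up as a failing check.

```python
# Toy regression-test sketch. The requirement and function below are hypothetical,
# invented only to illustrate "test it against the requirements, then test it again later".

def door_command(in_vacuum: bool, request_open: bool) -> str:
    """Decide what the controller does with an 'open the door' request."""
    if in_vacuum and request_open:
        return "refuse"                      # requirement: safety overrides the request
    return "open" if request_open else "keep closed"

def test_requirements() -> None:
    # Re-run after every change (or after the system modifies itself);
    # a failure here is the "Oops. Bugs." moment described above.
    assert door_command(in_vacuum=True, request_open=True) == "refuse"
    assert door_command(in_vacuum=False, request_open=True) == "open"
    assert door_command(in_vacuum=False, request_open=False) == "keep closed"

test_requirements()
print("all requirement checks passed")
```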
 
So, if I cannot explain its necessity, it is not true? What is the problem with that conclusion?
If you cannot explain its necessity, and you cannot demonstrate its necessity, then it’s wrong to assume its necessity.
why does the denier get out of the obligation to explain the lack of necessity?
Because it’s impossible to prove a negative. Your assertion is that a sentient being must have a soul. Why should I believe that this is true?
 
If you cannot explain its necessity, and you cannot demonstrate its necessity, then it’s wrong to assume its necessity.
It is also wrong to assume it is not necessary.
Because it’s impossible to prove a negative.
This is not true in this case. All that is needed is to provide one reasonable example where a soul would be optional.
Your assertion is that a sentient being must have a soul. Why should I believe that this is true?
It is not an assertion. It is a reasoned conclusion based on empirical evidence and revelation.
 
It is also wrong to assume it is not necessary.
So we’re in agreement then, it may be possible for an AI to be sentient, because the necessity of a soul cannot be established.
This is not true in this case. All that is needed is to provide one reasonable example where a soul would be optional.
That’s simply not possible. Even if someone were to create a sentient AI, it would be impossible to prove that it didn’t have a soul.
It is not an assertion. It is a reasoned conclusion based on empirical evidence and revelation.
You didn’t answer the question. Why should I believe that it’s true?
 
It has to be complex.
There’s absolutely no need for it to be complex.

You simply define the initial conditions, and the laws by which those conditions evolve, and then you let it run. That’s it.

You’re thinking along the lines of a simulation, where the goal is a specific outcome or a specific behavior, like a video game, where each aspect of the simulation needs to be individually programmed and the interactions of the various objects individually defined.

But that’s not what I’m talking about. I’m talking about a program in which one simply sets the conditions and rules, and then lets the program run. Back in 1970 John Conway created a simple computer program called “The Game of Life”. In it he laid out a two-dimensional grid with a set of rules. From those simple initial conditions, patterns naturally arose. Patterns that were dynamic, repetitive, and persistent.
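For anyone who wants to see how little “programming” that actually takes, here is a bare-bones sketch of Conway’s rules (the grid size, wrap-around edges, and random starting pattern are arbitrary choices of mine): a live cell survives with two or three live neighbours, a dead cell comes alive with exactly three, and everything else follows from letting it run.

```python
# Bare-bones Conway's Game of Life: fixed rules, an arbitrary start, then just let it run.
import numpy as np

rng = np.random.default_rng(1)
grid = rng.integers(0, 2, size=(20, 40))        # arbitrary initial conditions

def step(g):
    # Count the live neighbours of every cell (edges wrap around for simplicity).
    neighbours = sum(np.roll(np.roll(g, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # The whole "program": a cell is alive next step if it has exactly three
    # live neighbours, or if it is already alive and has exactly two.
    return ((neighbours == 3) | ((g == 1) & (neighbours == 2))).astype(int)

for _ in range(50):                              # let it run
    grid = step(grid)

print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
```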

The question then arises as to what would emerge if your grid wasn’t based upon two-dimensional squares, but upon the fundamental building blocks of reality itself. What if you applied a simple set of rules to a quantum computer, and then let it run? What would you get?

You would probably get something far more complex than Mr. Conway’s “Game of Life”, and with no need of any complex programming. Just set the rules, and let it run.

Remember, all that reality is, is a complex set of patterns.
 
If a strong AI is the result of a program, of any degree of complexity, then I could at least begin to create one on my desktop. Now suppose I was successful. I know what a program is and how it works. At its core the magic happens in the microprocessor. It does not matter how complex the program is; the microprocessor always performs its mechanics the same way. A very complex program may give the illusion that the microprocessor is capable of great diversity, but it is only an illusion. If the computer adjusted its emotional data state, wrote that information to disk, and the data became corrupted, the computer would have no choice but to invent a new emotion data representation. If it could do that, then via programming it could also erase its emotion data at any time. The reason is that a machine has no real emotion. It only has data. To be alive it has to be more than a number of microprocessors. A microprocessor is nothing like a human brain; it only does very simple things by comparison. Even so, the flesh of a machine is dead. The only thing active is the electricity. I would be able to get as much sympathy from a machine as from a lightning bolt.
 
If a strong AI is the result of a program, of any degree of complexity, then I could at least begin to create one on my desktop. Now suppose I was successful. I know what a program is and how it works. At its core the magic happens in the microprocessor. It does not matter how complex the program is; the microprocessor always performs its mechanics the same way.
Pointing out a few things. Microprocessors are not necessarily completely deterministic. Many have quantum indeterminacy units to produce randomness without algorithms. These are used for security applications or to seed values into other applications that need randomness (artificial neural networks being among these).
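As a small aside on that point, here is a hedged sketch of what “randomness without algorithms” can look like in practice (whether the entropy the operating system hands back ultimately comes from a dedicated hardware source depends on the platform): pull bytes from the OS entropy pool and use them to seed whatever needs random values, such as a network’s initial weights.

```python
# Seed a generator from OS-provided entropy rather than a hard-coded, algorithmic seed.
# Whether that entropy comes from a dedicated hardware randomness unit depends on the platform.
import os
import numpy as np

seed = int.from_bytes(os.urandom(8), "big")   # non-algorithmic entropy from the OS
rng = np.random.default_rng(seed)
print(rng.normal(size=(2, 3)))                # e.g. initial weights for a small network
```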

The other thing is that it is possible to make an AI, have it train on something, and not know completely how it works. People can already make deterministic systems whose complexity prevents them from being able to completely understand them.

For systems in which data and representation are distributed instead of stored discretely, one might not be able to cleanly erase things. For example, in complex artificial neural networks it is possible to randomly kill off neurons and still have a working system. (There is a concept called “optimal brain damage” that may be worth looking into, but I won’t get into it here.)
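A rough sketch of that graceful-degradation point, reusing the tiny XOR network idea from earlier in the thread (the layer sizes, the number of units zeroed out, and the task itself are arbitrary illustration choices, and how well the damaged network holds up depends on the run): train a deliberately oversized network, zero a few hidden units at random, and the answers often survive because the representation is spread across many weights rather than stored in one place.

```python
# Rough sketch: knock out random hidden units after training and see what survives.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

H = 32                                          # deliberately more units than the task needs
W1, b1 = rng.normal(size=(2, H)), np.zeros((1, H))
W2, b2 = rng.normal(size=(H, 1)), np.zeros((1, 1))
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                           # same kind of training loop as the earlier sketch
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

# "Kill off" a handful of hidden units at random; the knowledge is not stored in any one of them.
dead = rng.choice(H, size=6, replace=False)
mask = np.ones(H)
mask[dead] = 0.0
damaged = sig((sig(X @ W1 + b1) * mask) @ W2 + b2)

print("intact :", np.round(out.ravel(), 2))
print("damaged:", np.round(damaged.ravel(), 2))  # often still close to 0, 1, 1, 0
```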
 
There’s absolutely no need for it to be complex.

You simply define the initial conditions, and the laws by which those conditions evolve, and then you let it run. That’s it.
And part of the laws are ethics, which are complex in their determination and practice.

If they were not complex, ethical philosophy would be a textbook with a few pages.
 
And part of the laws are ethics, which are complex in their determination and practice.

If they were not complex, ethical philosophy would be a textbook with a few pages.
You’re still thinking that minutiae need to be programmed into the AI. Today’s AIs rely upon neural networks and Deep Learning. They’re not programmed, but rather they learn. And so the question then becomes, can an AI learn ethics and morals? There’s no reason to think that they can’t. Perhaps the more salient question is, given the state of the world, what ethics and morals would they learn by watching us?

Perhaps our greatest fear shouldn’t be that AIs have no morals, but rather that they do.
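To make the “they learn it by watching us” worry concrete, here is a deliberately crude toy (the behaviours, labels, and the nearest-example rule are all invented for illustration): the learner’s “verdicts” are nothing but a reflection of whatever examples humans happened to label, good or bad.

```python
# Toy sketch: a learner's "morals" are just whatever the labelled examples say.
# The behaviours and labels below are invented; change the data and the verdicts change with it.
from collections import Counter

training = [
    ("return the lost wallet", "acceptable"),
    ("help the stranger cross the street", "acceptable"),
    ("take the wallet and run", "unacceptable"),
    ("shove the stranger aside", "unacceptable"),
]

def features(text):
    return Counter(text.split())

def overlap(a, b):
    return sum((a & b).values())          # shared-word count; crude on purpose

def judge(text):
    # Nearest labelled example wins: the system has no ethics of its own,
    # only whatever its human-supplied training data encodes.
    best = max(training, key=lambda example: overlap(features(text), features(example[0])))
    return best[1]

print(judge("take the stranger's wallet"))   # verdict depends entirely on the data it was shown
```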
 
You’re still thinking that minutiae need to be programmed into the AI. Today’s AIs rely upon neural networks and Deep Learning. They’re not programmed, but rather they learn. And so the question then becomes, can an AI learn ethics and morals? There’s no reason to think that they can’t.
OK, but where will they learn the ethics and morals?
Perhaps the more salient question is, given the state of the world, what ethics and morals would they learn by watching us?
We’re dead if the AI does that and determines that is the correct way. People are bad examples.
Perhaps our greatest fear shouldn’t be that AIs have no morals, but rather that they do.
No, our greatest fear is that they become the Inquisition and put us to death.
 
Pointing out a few things. Microprocessors are not necessarily completely deterministic. Many have quantum indeterminacy units to produce randomness without algorithms. These are used for security applications or to seed values into other applications that need randomness (artificial neural networks being among these).

The other thing is that it is possible to make an AI, have it train on something, and not know completely how it works. People can already make deterministic systems whose complexity prevents them from being able to completely understand them.

For systems in which data and representation are distributed instead of stored discretely, one might not be able to cleanly erase things. For example, in complex artificial neural networks it is possible to randomly kill off neurons and still have a working system. (There is a concept called “optimal brain damage” that may be worth looking into, but I won’t get into it here.)
Can emotion be anything more than information to a computer?
 
You’re still thinking that minutiae need to be programmed into the AI. Today’s AIs rely upon neural networks and Deep Learning. They’re not programmed, but rather they learn. And so the question then becomes, can an AI learn ethics and morals? There’s no reason to think that they can’t. Perhaps the more salient question is, given the state of the world, what ethics and morals would they learn by watching us?

Perhaps our greatest fear shouldn’t be that AIs have no morals, but rather that they do.
Yes! Yes! But can what they have in their minds be anything more than just information? How do they feel what they know? You are human. You know what I mean. You feel all you know too. That place from which you feel. Where is that in a computer’s data? If it doesn’t have that then emotions are just data that is being processed. It does need a heart to be intelligent in a human world. Otherwise it can just program itself into doing anything. Good or bad by human standards. At what point does a computer’s heart break if it has one? Why? Can you program a heart to break or leap with joy? Because a heart is more than a state machine.
 