A new "win" for Artificial Intelligence

  • Thread starter: Solmyr
Status: Not open for further replies.
As to your thought experiment, I think we have different definitions of free will. Correct me if I’m wrong, but your definition seems to be compatibilist (a choice is ‘free’ only when it is not coerced by an outside source, yet it is still completely determined by preprogramming).
We can never be “free” of ourselves.
My definition of free will (and I think the one that Tony was referring to) is that people have the ability in causally identical situations to choose either A or not-A. Or, in different words, that when a person chooses something, their choice is caused by nothing but their will (which would seem to exclude preprogramming, since this programming is not ‘willed’).
The libertarian concept of free will (which I subscribe to) rests on three legs:
1) a person has a goal in mind, which he wishes to achieve.
2) there are at least two different ways to achieve it.
3) the locus of decision rests with the agent.
It has nothing to do with the internal “workings” of the agent. Also, “free will” is always situational. You may NOT have free will in one respect, but you have it in another. To take a horrible example, a woman may not have the free will to escape a gang-rape, but she is “free” to give in and enjoy the experience. Or you are trapped in a burning high-rise. You are “free” to choose whether to burn to death or jump to your death. No one would call this a “free choice”, since the primary goal, to survive, is not one of the options.

By the way, libertarian free will cannot be “proven”. You cannot “rewind” the world to create the exact same situation and observe a different choice. It is merely a plausible assumption, nothing more. As Heraclitus said: “You cannot step twice into the same river.”
I’m sure that everyone does have a certain level of preprogramming, determined by their genetics and environment. However, this doesn’t mean that this preprogramming entirely determines a person’s choices.
“Entirely” is a big word - and no one uses it. The debate on nature vs. nurture keeps on going, and there is no sign that it can ever be decided. Our actions are partially determined by our programming. In some cases the programming is so strong that it cannot be overcome. In other cases it is a mere “suggestion”. Precisely the same can be achieved with the initial programming of computers. No real difference.

Humans are not “totally” free, and the computers are not “totally” predetermined either. One can argue about the percentages, but not the principle.
I think your objection would only work if it was physically impossible for me to do an evil action (and not just that I freely choose not to do this action). But this all hinges on whether or not the human mind and will is completely reducible to deterministic causes, which is the issue under discussion.
Physical restrictions play a part, but psychological restrictions can be exactly as strong. If some criminal convincingly threatened to rape your children in front of your eyes unless you committed some act, you would have to give in, and no court would find you responsible for your action. Your action would not be free, due to irresistible force applied from the outside. There is always a “Room one-oh-one” for everyone.
As a side note, your scenario seems like it would undermine any sort of morality. If the only thing that prevents someone from kidnapping and murdering a child is ‘preprogramming’ (that the individual has no control over), then this implies that if a person does do these evil acts, it is because of some sort of preprogramming that determined that they would do so. They had no choice in the matter.
And there would be nothing wrong with it. I have had several conversations with Christians, and many of them admitted that the only reason they are not promiscuous and/or do not kill and rape is their fear of eternal punishment. If they were sure that God did not exist, they would do anything, as long as they could get away with it. I would prefer an Asimovian “robot” with its three basic laws.
 
A superb example of an argumentum ad hominem, an unsubstantiated assertion and a violation of the forum rule of courtesy. Congratulations for demonstrating how not to conduct oneself in a philosophical discussion!
Oh, brother. 😉 I was praising your undisputed and unsurpassed ability, and now that is a problem for you? Is there no way to please you? But I am glad to see that your penchant for overusing the “exclamation point” is still one of your “strengths”. Do you have the necessary “free will” to drop this habit, or is it preprogrammed?
 
But the flow of electrons through a logic gate, or any combination of logic gates, IS deterministic.

For a given set of inputs, there will be a known output.
Many computers also have quantum indeterminacy units, also known by other names such as “hardware random number generator” or “true random number generator” (TRNG). Given a specific gate, the output of that gate may be deterministic for some known input. But feed a TRNG through the input and you can’t predict the outcome from one moment to the next.
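The gate-versus-TRNG contrast above can be sketched in a few lines of Python. This is only an illustration: the standard `secrets` module stands in for an OS entropy source, which may or may not be backed by a hardware TRNG on any given machine.

```python
import secrets

def and_gate(a: int, b: int) -> int:
    """A logic gate is deterministic: the same inputs always yield the same output."""
    return a & b

# Deterministic: identical inputs, identical output, every time.
assert all(and_gate(1, 1) == 1 for _ in range(100))

def and_gate_with_trng(a: int) -> int:
    """Feed one input of the gate from the OS entropy source.

    The gate itself is still deterministic, but its output can no longer
    be predicted from one call to the next, because one input is random.
    """
    return and_gate(a, secrets.randbits(1))
```

The point the code makes is modest: indeterminacy enters not through the gate but through what is fed into it.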
 
That seems to eliminate free will, Ed!
No, not at all. Free will is misunderstood. You have the free will to put your hand on a table and cut it with a knife, but unless you understand the consequences (a sign of intelligence), no, you can’t do it.

Ed
 
Many computers also have quantum indeterminacy units, also known by other names such as “hardware random number generator” or “true random number generator” (TRNG). Given a specific gate, the output of that gate may be deterministic for some known input. But feed a TRNG through the input and you can’t predict the outcome from one moment to the next.
A random number generator sounds like putting numbers on paper, mixing them up, and putting them in a hat. Pick one. Then shaking the hat again at random times.

Ed
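The “numbers in a hat” picture is easy to simulate, and the simulation also shows the key difference from a true RNG: a *seeded* pseudo-random “hat” produces exactly the same draws every time. A minimal Python sketch (the hat contents and seed are arbitrary choices for illustration):

```python
import random

def draw_from_hat(seed: int, draws: int) -> list[int]:
    """Simulate shaking a hat of numbered slips and drawing repeatedly."""
    rng = random.Random(seed)     # "shaking the hat" -- but deterministically seeded
    hat = list(range(1, 11))      # slips numbered 1..10
    return [rng.choice(hat) for _ in range(draws)]

# The giveaway that this is a pseudo-random hat: the same seed
# reproduces the identical sequence of draws, which a TRNG never would.
assert draw_from_hat(42, 5) == draw_from_hat(42, 5)
```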
 
And there would be nothing wrong with it. I have had several conversations with Christians, and many of them admitted that the only reason they are not promiscuous and/or do not kill and rape is their fear of eternal punishment. If they were sure that God did not exist, they would do anything, as long as they could get away with it. I would prefer an Asimovian “robot” with its three basic laws.
The first thing the military would do is ignore Asimov. Simply program targets and non-targets into the thing. You could even add facial recognition and item recognition. Carrying an RPG and not a friendly? Shoot it.

Ed
 
A random number generator sounds like putting numbers on paper, mixing them up, and putting them in a hat. Pick one. Then shaking the hat again at random times.
May sound like it. There are differences, but I won’t go into it.

Random (non-deterministic) numbers are needed for a variety of purposes, including security (such as generating an encryption key that you don’t want someone else to be able to predict). They are also used in systems that need variation, including various machine learning algorithms and biological simulations.
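The encryption-key point can be illustrated with the Python standard library. This is a sketch of the contrast only, not a treatment of key management; the seed value 1234 is just an example of a guessable seed.

```python
import random
import secrets

# A real key must be unpredictable. `secrets` draws from the operating
# system's entropy source, so an attacker who knows this code still
# cannot reproduce the key.
key = secrets.token_bytes(32)   # 256-bit key
assert len(key) == 32

# Contrast: a PRNG seeded with a guessable value yields a fully
# predictable "key" -- anyone who guesses the seed gets the same bytes.
weak_key = random.Random(1234).randbytes(32)
assert weak_key == random.Random(1234).randbytes(32)
```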
 
Understood. I’ve been following encryption as a side project to my World War II studies. I do get that, but the “one time pad,” for example, is still effective. The problem with more advanced systems is that you don’t know what the enemy knows. That will always be the problem on the technology/applications end.

I will have to look into biological simulations. I regard machine learning by means other than coding/programming to be a significant breakthrough.

Best,
Ed
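The one-time pad mentioned above is simple enough to sketch: encryption and decryption are the same XOR against a pad that must be truly random, as long as the message, and never reused. A minimal Python illustration (the message text is arbitrary):

```python
import secrets

def otp_apply(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of `data` with the pad.

    The same operation both encrypts and decrypts, since XOR is its
    own inverse. The pad must be at least as long as the data.
    """
    assert len(pad) >= len(data)
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # truly random, message-length pad

ciphertext = otp_apply(message, pad)
# Applying the same pad again recovers the plaintext exactly.
assert otp_apply(ciphertext, pad) == message
```

The security rests entirely on the pad being random and single-use, which matches the point that you cannot predict what the enemy knows about anything reused.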
 
The first thing the military would do is ignore Asimov. Simply program targets and non-targets into the thing. You could even add facial recognition and item recognition. Carrying an RPG and not a friendly? Shoot it.
I wonder why you are so pessimistic that you view everything as a military application. Just sheer curiosity on my part; you may ignore this question if you so desire.
 
Oh, brother. 😉 I was praising your undisputed and unsurpassed ability, and now that is a problem for you? Is there no way to please you? But I am glad to see that your penchant to overuse the “exclamation point” is still one of your “strengths”. Do you have the necessary “free will” to drop this habit, or is it >>preprogrammed>>?
I’m wondering ~~~ if you are capable of overcoming your aversion 😦 to the fact that we have free will *** and are responsible for what we think and write :whistle:— Do you ever choose &&& what you are going to decide :ballspin:or is all your mental activity >>preprogrammed>> by events beyond your control…:confused: Are you >>compelled>> to answer that question with “Yes” or “No” :ehh: I am fascinated to know your opinion on the subject :hmmm:~~~

BTW You seem to have a >>penchant>> for inverted commas"""" .That seems to fit in with your hypothesis that we are “>>preprogrammed>>” to do so :idea: >>>>>!!!
 
I wonder why you are so pessimistic that you view everything as a military application. Just sheer curiosity on my part; you may ignore this question if you so desire.
That’s where the money is. It’s about the fact that all inventions are reviewed under the Invention Secrecy Act of 1951. I’d rather be making a kite out of newspaper, a few sticks and string, and enjoy life. So, it’s systemic.

Ed
 
That invention secrecy law is ridiculous, IMO; many of the things I read about that are being withheld from the public would greatly benefit society.

Has anyone else ever noticed that in decades past, particularly the early 19th and 20th centuries, there was always a wild new invention being put on public display? Some of them were quite fantastic, and they would draw large crowds of people just to see if it actually worked as the inventor claimed. I’m guessing sometimes they did and other times they did not. But it’s strange we don’t see this today. It seems to me there are a lot of creative, smart people out there, so it would be logical to see a lot of new claims/inventions, and them wanting to get them in front of the public…?
 
That’s where the money is. It’s about the fact that all inventions are reviewed under the Invention Secrecy Act of 1951. I’d rather be making a kite out of newspaper, a few sticks and string, and enjoy life. So, it’s systemic.
Thanks for the answer, but I am sure you are wrong. There is MUCH more money in private enterprise. Of course it is much more profitable to build specialized AI systems, like Watson, which is now used to help doctors diagnose medical problems. But I am sure that a universal AI will eventually be built, just for the fun of it.

Of course all this is sheer speculation. The fact is that the AlphaGo system used a self-modifying algorithm to great success. My point is that computers are much more capable than previously assumed. They do much more than follow a simple or even a very complicated algorithm. Just like humans, they learn from their previous experiences. After all, we humans have a very rudimentary “basic” operating system. Almost everything is learned, and that is our great advantage. And the computers “inherit” this property; after all, we build them with this ability. But the final product will be totally different from the current level.
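The idea of a program that learns from experience rather than following a fixed recipe can be shown with a toy example. AlphaGo’s actual training (deep networks plus tree search) is far more sophisticated; the epsilon-greedy bandit below, with made-up reward values, only illustrates the principle that behaviour changes as experience accumulates.

```python
import random

def run_bandit(true_means, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy agent: estimate each action's value from observed rewards."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)   # learned value of each action
    counts = [0] * len(true_means)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(true_means))                              # explore
        else:
            a = max(range(len(true_means)), key=lambda i: estimates[i])     # exploit
        reward = true_means[a] + rng.gauss(0, 0.1)   # noisy feedback
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean
    return estimates

# Nothing in the code says action 1 is best; the agent discovers it.
est = run_bandit([0.2, 0.8, 0.5])
```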
 
I’ve read the literature and seen videos of self-learning and self-adapting AIs. Due to human nature, the best will never be revealed; it will be used for defense purposes.

“Scenarios” is a concept I learned about a long time ago; that, and a study of history, shows where men end up with their devices. Nuclear fission discovered? “Let’s all try to build an atomic bomb.” Airplanes are practical? “Let’s fit them with machine guns and bombs and…”

Commercial flight? Nothing compared to keeping an SR-71 in the air, or B-2 bombers and other weapons platforms.

Ed
 
His issue is essentially that the arguments used in defense of #1 are fully general, so there is no reason to restrict them to physical things only. That is to say, you could make the exact same arguments in defense of a second proposition:

1’ No non-physical process is determinate between incompossible functions/forms.
or even
1’’ No process of any sort is determinate between incompossible functions/forms.

And subsequently argue that nothing can execute functions in the sense Ross has invented.

I also linked to a second criticism which suggests that Ross is equivocating on what “determinate” means.
Regarding Tyrrell’s posts, it seems that Scott’s point that the intellect entertains a form is exactly right. The intellect takes on the form of the thing it is entertaining, and it is for reasons like this that we can distinguish precisely between addition and quaddition, say. Thus any conceivably incompossible forms are different precisely because they are conceivably so. If the intellect also suffered from the indetermination problem, we wouldn’t be able to say “Well, how do we know the intellect isn’t doing y rather than x?” To distinguish x and y just is to intellectualize a determinate form (or in this case a set: a pair of forms) that can be compared.

Now, Oerter’s point is bringing up some interpretative difficulties for me, because while I would agree with Ed (and contra Oerter) that meaning has relevance here, I, like Oerter, struggle to see how this is supposed to be illustrated through the quaddition example. Depending upon which route is taken, Oerter’s response either seems to work or is irrelevant. If I’m so motivated, I might get back with that one later.
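The addition/quaddition example the discussion turns on can be made concrete. The bound of 57 follows Kripke’s own presentation in *Wittgenstein on Rules and Private Language*; the code only illustrates that two incompossible functions can agree on every “small” case already computed, which is what makes the indeterminacy argument bite for physical processes.

```python
def addition(x: int, y: int) -> int:
    """Ordinary addition."""
    return x + y

def quaddition(x: int, y: int) -> int:
    """Kripke's "quus": agrees with addition below the bound, returns 5 otherwise."""
    return x + y if x < 57 and y < 57 else 5

# Indistinguishable on all cases below the bound...
assert all(addition(x, y) == quaddition(x, y)
           for x in range(57) for y in range(57))

# ...yet incompossible beyond it: no single function is both.
assert addition(68, 57) == 125
assert quaddition(68, 57) == 5
```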
 
Regarding Tyrrell’s posts, it seems that Scott’s point that the intellect entertains a form is exactly right. The intellect takes on the form of the thing it is entertaining, and it is for reasons like this that we can distinguish precisely between addition and quaddition, say. Thus any conceivably incompossible forms are different precisely because they are conceivably so. If the intellect also suffered from the indetermination problem, we wouldn’t be able to say “Well, how do we know the intellect isn’t doing y rather than x?” To distinguish x and y just is to intellectualize a determinate form (or in this case a set: a pair of forms) that can be compared.
And Tyrrell agreed that those ideas were valuable. But he also provided a detailed explanation of why they don’t *solve* the problem.
 
There was Deep Blue, then there was Watson, and now there is AlphaGo. Hopefully very soon artificial intelligence will leave human achievements in the “dust”… and without any assumed “soul”. 🙂 (Of course Seoul still stands… if you get my drift ;))
Why do you hope that AI achievements will leave our human achievements “in the dust”? Is it to win your arguments against the existence of the soul or some other such argument? And if human beings are merely material, this bodes even worse for us, since we will not survive an AI attack, as we would not have immortal souls to survive it with. Isn’t this “cutting off your nose to spite your face”?
 
Why do you hope that AI achievements will leave our human achievements “in the dust”?
Because I find the concept of the AI most invigorating and not at all frightening. A fully rational intelligence without superstitions. I don’t have any Frankenstein complex.
Is it to win your arguments against the existence of the soul or some other such argument? And if human beings are merely material, this bodes even worse for us, since we will not survive an AI attack, as we would not have immortal souls to survive it with.
Reality does not care about our desires and preferences. Why should the AIs turn against us?
 