A new "win" for Artificial Intelligence

Solmyr
wired.com/2016/03/googles-ai-wins-first-game-historic-match-go-champion/

And not just in one aspect. That the computer won against one of the world’s best Go players is already very exciting news. But the really important part is that the computer educated ITSELF to improve its own game. (By the way, the commentators referred to the computer as “he”… ;))

One of the (incorrect) criticisms against AI is that the computer just does what it has been “told”. Well, in this case it went far beyond the base programming. Just like a child who acts on his father’s instruction: “go and learn, educate yourself”. The credit goes to the child (in this case the computer) and not to the father (or the programmers). As a matter of fact, the programmers don’t even know how the self-development procedure happens.

There was Deep Blue, then there was Watson, and now it is AlphaGo. Hopefully very soon artificial intelligence will leave human achievements in the “dust”… and without any assumed “soul”. 🙂 (Of course Seoul still stands… if you get my drift ;))
 
I, for one, welcome our new Google-powered overlords. 😉
 
I still have a hard time seeing this as being true AI. Yes, the computer improved upon its ‘skills’ on its own, but it only did this because the rules of Go, as well as its learning algorithm, were programmed into it by its human creators. It still lacks true intentionality (all it has is a very advanced derived intentionality). When the programmers say that they don’t know how the self-development procedure happens, this doesn’t mean that the computer suddenly decided to learn on its own. It was programmed to learn and improve upon its skills using guidelines developed by the programmers. The specific learning process wasn’t observed, but that doesn’t mean that the learning process wasn’t planned or programmed by the designers.

This is a huge step for computer programming, but not true AI.
 
“Educated itself”? Not so. Trial and error at higher-than-human speeds is nothing. Nothing was won here.

It is still only a problem-solving device with preprogrammed criteria for solving problems. It’s all math. Given X number of possible moves, only Y number of outcomes are possible. That’s it. That it gained information translated via visual input is no big deal.

Ed
 
> I still have a hard time seeing this as being true AI. Yes, the computer improved upon its ‘skills’ on its own, but it only did this because the rules of Go, as well as its learning algorithm, were programmed into it by its human creators. It still lacks true intentionality (all it has is a very advanced derived intentionality). When the programmers say that they don’t know how the self-development procedure happens, this doesn’t mean that the computer suddenly decided to learn on its own. It was programmed to learn and improve upon its skills using guidelines developed by the programmers. The specific learning process wasn’t observed, but that doesn’t mean that the learning process wasn’t planned or programmed by the designers.
>
> This is a huge step for computer programming, but not true AI.
I agree with your last sentence, but not with the preceding paragraph. How do you think children learn to become creative? They are educated - or programmed - by their parents, teachers and the whole environment. A solitary “feral” child will never become creative on its own - just as a standalone computer program never will. There is no fundamental difference whether the program runs on “hardware” or “wetware”. 🙂

The important factor is that the computer program changed itself; it did not “mindlessly” follow the original code pertaining to the game of “go” or “chess” or “Jeopardy”. Of course the original code also included the learning algorithm, just like with humans.

It is not “true” AI, but considering the short time that computers have been around (a few decades!), the progress is nothing short of breathtaking. The funny thing is that there is a random, trial-and-error method for improving learning algorithms. The computer randomly changes its own machine code and learns from its own mistakes, something that humans are unable to do. We are unable to modify our neural connections.
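
A minimal sketch of that sort of blind trial-and-error improvement, as a toy “mutate and keep whatever scores better” loop (an illustration of the general idea only; real learning systems adjust network weights rather than their own machine code):

```python
import random

def score(params):
    # Toy objective: negative distance from a hidden target. A real
    # system would score itself on won and lost games instead.
    target = [3.0, -1.0, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]
for _ in range(10_000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    if score(candidate) > score(params):
        params = candidate   # keep mutations that help, discard the rest

print([round(p, 2) for p in params])   # ends up near the hidden target
```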
> “Educated itself”? Not so. Trial and error at higher-than-human speeds is nothing. Nothing was won here.
>
> It is still only a problem-solving device with preprogrammed criteria for solving problems. It’s all math. Given X number of possible moves, only Y number of outcomes are possible. That’s it. That it gained information translated via visual input is no big deal.
You really should learn more about the process.
 
> “Educated itself”? Not so. Trial and error at higher-than-human speeds is nothing. Nothing was won here.
>
> It is still only a problem-solving device with preprogrammed criteria for solving problems. It’s all math. Given X number of possible moves, only Y number of outcomes are possible. That’s it. That it gained information translated via visual input is no big deal.
MORPHEUS: For the longest time, I wouldn’t believe it. But then I saw the fields with my own eyes, watched them liquefy the dead so they could be fed intravenously to the living -
**NEO** (politely): Excuse me, please.
MORPHEUS: Yes, Neo?
NEO: I’ve kept quiet for as long as I could, but I feel a certain need to speak up at this point. The human body is the most inefficient source of energy you could possibly imagine. The efficiency of a power plant at converting thermal energy into electricity decreases as you run the turbines at lower temperatures. If you had any sort of food humans could eat, it would be more efficient to burn it in a furnace than feed it to humans. And now you’re telling me that their food is the bodies of the dead, fed to the living? Haven’t you ever heard of the laws of thermodynamics?
MORPHEUS: Where did you hear about the laws of thermodynamics, Neo?
(Pause.)
NEO: …in the Matrix.
MORPHEUS: The machines tell elegant lies.
(Pause.)
**NEO** (in a small voice): Could I please have a real physics textbook?
MORPHEUS: There is no such thing, Neo. The universe doesn’t run on math.
The joke here is the idea that the universe does not run on math.
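
For reference, the physics Neo is invoking is the Carnot bound on any heat engine (standard thermodynamics, nothing film-specific):

```latex
\[
\eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}
\]
```

An engine running between a hot source at T_hot and a cold sink at T_cold can convert at most that fraction of the heat into work, so cooler sources mean lower efficiency, and feeding the dead to the living loses still more energy at every metabolic step.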
 
> I agree with your last sentence, but not with the preceding paragraph. How do you think children learn to become creative? They are educated - or programmed - by their parents, teachers and the whole environment. A solitary “feral” child will never become creative on its own - just as a standalone computer program never will. There is no fundamental difference whether the program runs on “hardware” or “wetware”. 🙂
>
> The important factor is that the computer program changed itself; it did not “mindlessly” follow the original code pertaining to the game of “go” or “chess” or “Jeopardy”. Of course the original code also included the learning algorithm, just like with humans.
>
> It is not “true” AI, but considering the short time that computers have been around (a few decades!), the progress is nothing short of breathtaking. The funny thing is that there is a random, trial-and-error method for improving learning algorithms. The computer randomly changes its own machine code and learns from its own mistakes, something that humans are unable to do. We are unable to modify our neural connections.
>
> You really should learn more about the process.
I’ve watched the process in action on video. Military machines with “learning” programs use trial and error to determine how to best traverse terrain that is filled with small pieces of debris, like concrete in a battlefield setting. Once it has stumbled on a successful solution, it is selected. Then it encounters debris in a forest setting: fallen branches, dead bodies, craters from shelling, and uses its built-in learning system to navigate the debris with the least chance of falling. In combat situations, further problems arise when multiple shooters engage the machine.

Ed
 
> I’ve watched the process in action on video. Military machines with “learning” programs use trial and error to determine how to best traverse terrain that is filled with small pieces of debris, like concrete in a battlefield setting. Once it has stumbled on a successful solution, it is selected. Then it encounters debris in a forest setting: fallen branches, dead bodies, craters from shelling, and uses its built-in learning system to navigate the debris with the least chance of falling. In combat situations, further problems arise when multiple shooters engage the machine.
>
> Ed
You’ve seen the scenario you describe elsewhere, but how do you know that this new breakthrough is the same thing? Why would it be hailed as a breakthrough in the first place if it’s just a repeat of what military machines have been doing for years?
This isn’t directed at you in particular - but I get the impression that a lot of users on these forums are self-proclaimed experts in every field of study, dismissing the results of studies after reading a summary posted on the internet. Unless one has analyzed the actual results of any given study, how can one form a definitive opinion?
 
How do you know that a solitary feral child will not be creative? If that’s the case, then how did the first feral humans ‘learn’ creativity, since there was no one there to teach them?

Your assertion that there is no fundamental difference as to hardware or wetware is also highly controversial. This brings into the debate the whole theory of the mind. Needless to say, there are many scientists (many of them atheist) that believe that it is not possible to reduce the mind to ‘hardware’ (look up Thomas Nagel or John Searle).

The thing is, the computer changed itself only because it was programmed to change itself. It would not have done so if its programmers hadn’t intended it to do so. This brings about the whole idea of intentionality. The fact that the computer followed its code (even if it was a ‘learning’ code) without respect to the actual information that it was learning shows that the computer was in fact mindless. No different from a computer that can ‘learn’ to play chess or tic-tac-toe (the only difference is the complexity of the rules). It’s like saying that, since a calculator can solve 2+2=4, the calculator knows what ‘2’, ‘+’, ‘=’ and ‘4’ mean. The calculator doesn’t know this – the human user does (read Thomas Nagel’s “What Is It Like to Be a Bat?” for a better example). To a calculator, those symbols are meaningless, no different from @^@*&. It is only when interpreted by its human user that these symbols gain meaning.
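
To make the calculator point concrete, a toy illustration of my own: the same routine runs identically whether the tokens are familiar digits or arbitrary glyphs, because the machine never traffics in meanings at all.

```python
# "Meaning" lives in the encoding we chose, not in the machine.
encode = {"0": "@", "1": "^", "2": "*", "3": "&", "4": "%"}
decode = {glyph: digit for digit, glyph in encode.items()}

def add_glyphs(a, b):
    # Fixed symbol-shuffling rules over uninterpreted tokens.
    return encode[str(int(decode[a]) + int(decode[b]))]

print(add_glyphs("*", "*"))   # prints "%", which WE read as "2 + 2 = 4"
```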

The thing is, this pseudo-AI is radically different (on a fundamental level) from true AI. To say that any increase in pseudo-AI technology (no matter how great) can lead to true AI is unsubstantiated to say the least.

Also, it’s quite false to say that humans are unable to modify their neural connections. Our brains do so all the time – our neural connections are not static and change over time.
 
The question is whether the computer can truly grasp the concept of the essence of a thing, which is more than just algorithms or recognizing patterns. It involves the intellect taking on the form of what it is thinking about, which is not something that can be done materially.
 
I think that the reason this is a big announcement is that Go is a very complex game. With earlier games, like checkers or chess, the game was programmed in such a way that all possible moves were programmed into the game by the programmers. This would be impractical (if not impossible) for Go (apparently there are more possible board positions in Go than atoms in the universe). So the game was programmed to look for ‘patterns’ in the game, and to calculate the probability of any given pattern being successful against a different pattern. From what I’ve gathered, the program used two methods: it was shown millions of moves made by human opponents, which it then stored to use in games against itself (where it would ‘learn’ – I presume by trying patterns that differed from the ones already stored and then calculating the probability of each pattern leading to a given output), and it would apply a Monte Carlo Tree Search to find the best input to reach the programmed desired output.
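
As a rough check on that size claim, count every way to color the 361 points of a 19×19 board black, white, or empty (an overcount of the legal positions, but the right order of magnitude):

```latex
\[
3^{361} \approx 1.7 \times 10^{172} \;\gg\; 10^{80} \approx \text{atoms in the observable universe}
\]
```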

This is a big deal, since this style of computer ‘learning’ can be applied to other situations as well. But it is fundamentally different from true AI, since it lacks intentionality. It doesn’t know it’s playing a game. At the most basic level, it uses inputs to produce a desired output.
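
To make the “Monte Carlo Tree Search” part concrete, here is a generic UCT-style sketch in Python. The legal_moves, play, and rollout_result hooks are hypothetical placeholders, and this is textbook MCTS rather than AlphaGo’s actual code, which additionally uses its policy network to bias expansion and its value network to score positions (player-perspective sign flips are also omitted for brevity):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # assumed hashable, for the 'tried' set
        self.parent = parent
        self.children = []
        self.visits = 0
        self.wins = 0.0

def uct_child(node, c=1.4):
    # Balance exploitation (win rate) against exploration
    # (rarely visited children get a bonus).
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root, n_simulations, legal_moves, play, rollout_result):
    for _ in range(n_simulations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCT.
        while node.children and \
                len(node.children) == len(legal_moves(node.state)):
            node = uct_child(node)
        # 2. Expansion: add one previously untried successor.
        tried = {child.state for child in node.children}
        untried = [play(node.state, m) for m in legal_moves(node.state)]
        untried = [s for s in untried if s not in tried]
        if untried:
            child = Node(random.choice(untried), parent=node)
            node.children.append(child)
            node = child
        # 3. Simulation: a (typically random) playout from here,
        #    returning 1.0 for a win and 0.0 for a loss.
        result = rollout_result(node.state)
        # 4. Backpropagation: update statistics along the path.
        while node is not None:
            node.visits += 1
            node.wins += result
            node = node.parent
    # The most-visited child of the root is the chosen move.
    return max(root.children, key=lambda ch: ch.visits)
```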
 
“Thou shalt not make a machine in the likeness of a human mind.” – the Dune Universe

:tsktsk:
 
> wired.com/2016/03/googles-ai-wins-first-game-historic-match-go-champion/
>
> And not just in one aspect. That the computer won against one of the world’s best Go players is already very exciting news. But the really important part is that the computer educated ITSELF to improve its own game. (By the way, the commentators referred to the computer as “he”… ;))
“Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by…” - nature.com/nature/journal/v529/n7587/full/nature16961.html. So, neural networks? Sure, the achievement is nice, but this kind of “educating itself” is by no means unprecedented. Neural networks as such were invented long ago.
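
For anyone unfamiliar with the jargon: a neural network is just a parameterized function whose numbers are nudged to reduce error on examples, which is the “training” the abstract mentions. A deliberately tiny, self-contained illustration (nothing like AlphaGo’s scale):

```python
# One "neuron" learning y = 2x + 1 from examples by gradient descent.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b, lr = 0.0, 0.0, 0.01   # weight, bias, learning rate

for epoch in range(1000):
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x   # step against the gradient of squared error
        b -= lr * error

print(round(w, 3), round(b, 3))   # converges toward 2 and 1
```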
> One of the (incorrect) criticisms against AI is that the computer just does what it has been “told”. Well, in this case it went far beyond the base programming.
Nothing in the abstract indicates that the neural network suddenly became a support vector machine or something.
> Just like a child who acts on his father’s instruction: “go and learn, educate yourself”. The credit goes to the child (in this case the computer) and not to the father (or the programmers).
It does? I’m pretty sure the father still gets proud in such a case. 🙂

And the relationship between software and programmer is not like the relationship between child and father. The father is not the creator of the child; it is closer to the relationship between humans and God. And the achievements of humans (including this one) do glorify God.
> As a matter of fact, the programmers don’t even know how the self-development procedure happens.
Are you sure…? 🙂

Nothing in the abstract of the paper indicates anything like that. And if there is anything like that in the article you cited, I must have missed it (the closest part seems to be “no one was quite sure how well AlphaGo would perform”).
> There was Deep Blue, then there was Watson, and now it is AlphaGo. Hopefully very soon artificial intelligence will leave human achievements in the “dust”… and without any assumed “soul”. 🙂 (Of course Seoul still stands… if you get my drift ;))
explainxkcd.com/wiki/index.php/894:_Progeny seems to be relevant here… 🙂
 
> You’ve seen the scenario you describe elsewhere, but how do you know that this new breakthrough is the same thing? Why would it be hailed as a breakthrough in the first place if it’s just a repeat of what military machines have been doing for years?
>
> This isn’t directed at you in particular - but I get the impression that a lot of users on these forums are self-proclaimed experts in every field of study, dismissing the results of studies after reading a summary posted on the internet. Unless one has analyzed the actual results of any given study, how can one form a definitive opinion?
Speaking about myself, I follow military technology on a regular basis. Most of my information does not come from popular web sites or news sources. Opinions don’t matter, only results. Military criteria for any WS, or Weapon System, are quite strict. If a private research company produces a device that the military assesses as useful in some way and is cheap relative to its functionality, then the Defense Advanced Research Projects Agency gets involved. It relies on information published globally to select or imitate systems or prototypes and either copies the function or gets someone to make it.

A very advanced form of computer problem solving exists, but it is not required to know more than it needs to know. It can learn, draw on a library of basic principles, and create possible solutions. I have seen one simulation where a group of human engineers was asked to design a structure within given parameters. The computer came up with a very different design. As far as I know, this was a capabilities test. Nothing was said about whether the design would actually be built and used.

Ed
 
I also don’t see how this fails to run into Ross’s indeterminacy argument, which (while initially developed to prove the immateriality of the mind) can be deployed against AI and computationalism.
It can be set up as follows:
  1. No physical process is determinate between incompossible functions/forms.
  2. But all processes of a computer are physical processes.
  3. Therefore, no process of a computer is determinate between incompossible functions/forms.
  4. If AI is true, then some processes of a computer are determinate between incompossible functions/forms.
  5. Therefore, AI is not true.
A brief defense of each:
  1. To draw from Ross and Kripke, one could simply look at the following function: (x <+> y) = (x + y) if (x + y) < 54; 112.67 otherwise (rendered as runnable code after this list). We can artificially raise or lower the “< 54” as needed such that the value of (x <+> y) always remains (x + y), yet we cannot tell, physically, whether the function (x + y) is being instantiated or (x <+> y), both of which are incompossible functions. There can be an infinite variety of these incompossible functions that a computer could be instantiating. The same applies to complex algorithms, since an algorithm can always comprise rules that would change the outcome if a certain situation obtained, where said situation never actually obtains. One cannot tell from the electrical impulses which algorithm is being instantiated among the incompossible set.
  2. I’ll just presume most people here will accept this w/o a fight, if that’s okay.
  3. From 1 and 2.
  4. AI, so far as I can tell, is committed to the idea that computers can (at least in principle) be cognitive in the same way, or at least a very similar way, that humans can. This, then, includes reasoning from rules, algorithms, functions, and argument forms. These rules, et al., have to be determinately instantiated; otherwise, the reasoning is de facto (meta-)invalid. If our thinking process does not exclude invalid forms/functions, which themselves do not exclude situations in which all the premises are true but the conclusion is false (which is to say, contradictions), then humans cannot reason validly. The primary problem with this, of course, is that it is self-refuting. It uses the rules of reasoning it is trying to undermine to get the desired conclusion.
  5. From 3 and 4.
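
Premise 1’s deviant function, rendered as runnable code using the post’s own numbers (a sketch; the cutoff is an arbitrary parameter):

```python
def plus(x, y):
    return x + y

def quus(x, y, cutoff=54):
    # Ross/Kripke-style "incompossible" rival: agrees with plus
    # whenever the sum stays below the cutoff, and returns a fixed
    # constant otherwise.
    return x + y if x + y < cutoff else 112.67

# Every observation of a machine that has only ever handled small sums
# is consistent with BOTH functions:
for x in range(10):
    for y in range(10):
        assert plus(x, y) == quus(x, y)   # all sums here are below 54
```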
 
How could you know #2? If there was a ghost in the computer’s shell, how would you be able to tell?

See the comments by Tyrrell McAllister here:
edwardfeser.blogspot.com/2013/10/oerter-on-indeterminacy-and-unknown.html
 
The way I see it, computer and robotic technology is only going to keep progressing; there are no limits to what researchers won’t try or experiment with. The problem is that eventually they are going to either stumble onto true AI (or some form of AI) or knowingly ‘switch it on’, and by that time it will be too late to turn it off or take it back.

We are starting to understand how the human brain works. I could see a time in the near future when they try to create a type of fake human mind, using circuit paths for neurons. If their only goal is progressing the technology, they will not even recognize what they are creating.

Not to mention that nearly everything is connected to the internet today, and it’s only going to be more so in the years and decades to come. In theory, the minute an AI ‘mind’ is switched on, it could learn all our weaknesses and how to defeat us in seconds.

We are not there yet, but I really think this is going to be a big issue one day.
 
> How could you know #2? If there was a ghost in the computer’s shell, how would you be able to tell?
Not just that, but not all physical processes are deterministic. Moreover, the micro-world is definitely NOT deterministic.
> I could see a time in the near future when they try to create a type of fake human mind.
How do you tell the difference between the “real” and the “fake” mind? And not just “mind”, but anything? This is not a trivial problem.
 
Well, Terminator fans: unless the supposed/imagined AI is connected to the outside world, it will not automatically destroy all of us pesky humans, because it can be told not to.

A computer has no goals, no emotions, nothing outside of function. It cannot take over our minds. And because we know where it is, we can just blow it up if it malfunctions, since we have a backup on file using holographic or quantum or some yet-to-be-invented memory storage. It’s only a tool.

Ed
 