Humans learn the same way. A child is programmed: taught to play chess.
That’s not what I mean. A computer does not decide to play chess. A child may not decide, but his parents do. A computer does not sit around and decide to learn chess, or decide that another computer should learn chess. A group of computers does not get together and decide to study the origins of computer science so that they can better understand themselves, or to study their own neural networks so that they can improve themselves, or to entertain themselves with games.
What is “conscience”? Self-awareness is different. Where is the “conscience” in a feral child?
I typed too fast; I meant to write “conscious”, not “conscience”. I was referring to self-awareness: a rational understanding of who we are. But my mistake is somewhat fortuitous, as conscience, i.e. an inner knowledge of right and wrong, is another key example of how far computers are from having a mind.
What is a “mind”? It is the activity of the brain or the processing unit. Same stuff.
We do not know if it is the “same stuff”. I personally am a dualist; that is my belief, and admittedly why I believe “true AI” will not arrive or even be possible. Now, I do not have direct empirical evidence for this, but neither do you have direct empirical evidence for saying the activity of the brain is all there is to a mind.
But we need not delve into the philosophical aspects of the mind to know that computers are a long way from achieving such a thing, and indeed have not really been on the right path. Artificial intelligence has been a field of computer science for a long time. I took my first AI class in the mid 80s, and I studied it more in graduate school in the 90s. It is a moving target: AI is simply applying computers and techniques to problems that programmers had not previously figured out how to solve. In the 80s, most of it was game theory. In the 90s, natural language processing was considered part of AI (not to mention voice recognition). Now, Siri can process most of what we say. That would have seemed very hard to do in the mid-to-late 90s. So AI keeps advancing. But it is always targeted at specific tasks.
The danger of AI is not that it will overtake us. The danger of AI to our culture is different. You stated above “if it looks like a duck, walks like a duck, tastes like a duck, quacks like a duck… it is very probably a duck”. That is the danger. Siri does not pass the Turing test, but it comes closer, as a commonly available tool, than anything people expected 20 years ago. I think we will assume these tools, as they advance, are too much like us; we will anthropomorphize computers. That will not be good for our culture, not at all.